What is artificial intelligence? How it works and its applications (2022)

What is artificial intelligence? We could define it as the ability of a technological system to solve problems or perform tasks and activities typical of the human mind.
Looking at the IT sector, we could identify AI – artificial intelligence – as the discipline concerned with building machines (hardware and software) capable of “acting” autonomously (solving problems, performing actions, and so on).

What is artificial intelligence (AI)? A definition

Artificial intelligence is a computer science discipline concerned with building machines capable of mimicking the capabilities of human intelligence, through the development of algorithms that allow them to display intelligent behavior.

The current ferment around this discipline is explained by the technological maturity reached both in computation (today’s hardware systems are very powerful, small and energy-efficient) and in the ability to analyze, in real time and at speed, huge amounts of data in any form (Analytics).

In its strictly computer-science sense, artificial intelligence can be classified as the discipline encompassing the theories and practical techniques for developing algorithms that allow machines (computers in particular) to display intelligent behavior, at least in specific domains and application areas.

Already from this first attempt at a definition, it is evident that a formal classification of the abstract functions of human reasoning, meta-reasoning and learning would be needed in order to build computational models that concretize these forms of reasoning and learning (a difficult task, since the actual functioning of the human brain is still not fully understood).

Moreover, when it comes to reasoning skills and automatic learning based on observation, we often cross into the territory of Cognitive Computing, which should however be understood as the set of technological platforms based on the scientific disciplines of artificial intelligence (including Machine Learning and Deep Learning) and Signal Processing (the ability to process signals).

The history of artificial intelligence: from the neural networks of the 1950s to today

The scientific community’s interest in artificial intelligence, however, goes back a long way: the first real artificial intelligence project dates back to 1943, when the researchers Warren McCulloch and Walter Pitts proposed the first model of an artificial neuron. It was followed in 1949 by the book of Donald O. Hebb, a Canadian psychologist, which analyzed in detail the connections between artificial neurons and complex models of the human brain.

The first working prototypes of neural networks [i.e. mathematical/computer models developed to reproduce the functioning of biological neurons in order to solve problems of artificial intelligence, understood in those years as the ability of a machine to perform functions and reasoning like a human mind – ed] arrived towards the end of the 1950s, and public interest grew thanks to Alan Turing, who already in 1950 had tried to explain how a computer could behave like a human being.

The term artificial intelligence was officially coined by the American mathematician John McCarthy in 1956, and with it came the “launch” of the first programming languages specific to AI (Lisp in 1958 and Prolog in 1972). Since then, the history of artificial intelligence has fluctuated: there have been significant advances in mathematical models (increasingly sophisticated, designed to “imitate” certain brain functions such as pattern recognition), but ups and downs in research on hardware and neural networks.

The first major turning point on this last front came in the 1990s, when graphics processors – GPUs (graphics processing units) – reached the wider market. Born in the gaming world, these data-processing chips are much faster than CPUs and able to support complex processing far more quickly, while operating at lower frequencies and consuming less energy.

The most recent wave has come in the last decade with the development of so-called “neuromorphic chips”: microchips that integrate data processing and storage in a single micro-component (thanks also to the acceleration of research in nanotechnology) in order to emulate the sensory and cognitive functions of the human brain – an area on which many startups are also concentrating.

Looking back a little further, the first neural network model dates to the late 1950s: the so-called “perceptron”, proposed in 1958 by Frank Rosenblatt (a well-known American psychologist and computer scientist), a network with an input layer and an output layer, trained with an error-correction learning rule (error minimization): by evaluating the actual output against the desired one for a given input, the rule alters the weights of the connections (synapses) so as to reduce the difference between the two. (The “error back-propagation” algorithm, often associated with this model, actually came later, for multi-layer networks.)
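Rosenblatt’s learning rule is simple enough to sketch in a few lines of modern Python. The sketch below is illustrative (the function names and the tiny AND dataset are our own), but it follows the rule just described: compare actual and desired output, and nudge the weights by the error.

```python
# A minimal Rosenblatt-style perceptron: a single layer of weights
# trained with the classic error-correction rule (not modern
# back-propagation, which came decades later).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output  # difference between desired and actual output
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# The logical AND function is linearly separable,
# so a single perceptron learns it:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # -> [0, 0, 0, 1]
```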

Some experts in the field trace the birth of cybernetics and artificial intelligence back to Rosenblatt’s perceptron [the term Artificial Intelligence – AI was actually coined in 1956 by the American mathematician John McCarthy, and Alan Turing’s first conjecture on how a computer could behave like a human being dates to 1950 – ed]. In the years immediately following, however, the two mathematicians Marvin Minsky and Seymour Papert demonstrated the limits of Rosenblatt’s model: after appropriate “training”, the perceptron could recognize only linearly separable functions (i.e. those for which, in the vector space of the inputs, the points requiring a positive output can be separated from those requiring a negative output). Moreover, the computational capabilities of a single perceptron were limited, and its performance depended strongly both on the choice of inputs and on the choice of the algorithms used to “modify” the synapses, and therefore the outputs.

Minsky and Papert realized that building a multi-layer network of perceptrons could solve more complex problems, but in those years the growing computational demands of training such networks had not yet found an answer at the infrastructural level (there were no hardware systems capable of sustaining these operations).
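What Minsky and Papert anticipated can be shown with a hand-built example: XOR cannot be computed by a single perceptron, but two layers of threshold units suffice. The weights below are chosen by hand for illustration, not learned.

```python
# XOR is not linearly separable, so no single perceptron can compute it.
# Two layers of threshold units, however, can: the hidden layer computes
# OR and AND, and the output unit combines them as (x OR y) AND NOT (x AND y).

def step(z):
    return 1 if z > 0 else 0

def xor_network(x, y):
    h_or = step(x + y - 0.5)         # fires if at least one input is 1
    h_and = step(x + y - 1.5)        # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)  # OR but not AND -> exclusive or

print([xor_network(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```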

A first important turning point came between the end of the 1970s and the 1980s, when back-propagation made the training of multi-layer networks practical on an algorithmic level; the hardware leap arrived later with GPUs, which cut network training times by a factor of 10 to 20.

Artificial intelligence films: a guide

Talking about application cases for artificial intelligence is impossible without mentioning the world of cinema and the rich filmography built around robots, AI and Machine Learning – a filmography worth studying in depth, precisely because it often heralds visions and advances of the world and the market that in many cases have materialized within a few years. It is in this spirit of monitoring and analysis that our “special guide” to artificial intelligence films was born, constantly updated thanks to readers’ contributions.

Types of artificial intelligence 

This very rapid “historical journey” already makes it clear that giving an exact definition of artificial intelligence is a difficult task; by analyzing its evolution, however, we can trace its contours and draw some important classifications.

Weak and strong artificial intelligence: what they are and how they differ

Taking the functioning of the human brain as a starting point (while knowing that even today its exact mechanism is not fully understood), an artificial intelligence should be able to perform some typically human actions and functions:

  • act humanly (i.e. indistinguishably from a human being);
  • think humanly (solving problems with cognitive functions);
  • think rationally (i.e. using logic, as a human being does);
  • act rationally (starting a process to obtain the best expected result from the available information, as a human being, often unconsciously, usually does).

These considerations matter because they allow us to classify AI into the two broad lines of investigation, research and development on which the scientific community agrees: weak AI and strong AI.

Weak AI

Identifies technological systems capable of simulating certain human cognitive functions without, however, reaching the real intellectual abilities typical of humans (think of mathematical problem-solving programs that develop functionality for solving problems or for allowing machines to make decisions).

Strong AI

In this case we speak of “wise systems” (some scientists go so far as to say “self-conscious”) which can develop their own intelligence autonomously, without emulating human thought processes or cognitive abilities.

Machine Learning and Deep Learning: what they are

The weak AI / strong AI classification underlies the distinction between Machine Learning and Deep Learning, two fields of study that fall within the broader discipline of artificial intelligence and deserve some clarification, as we will hear more and more about them in the coming years.

After these clarifications, we can now define artificial intelligence as the ability of machines to perform tasks and actions typical of human intelligence (planning, language understanding, image and sound recognition, problem solving, pattern recognition, etc.), distinguishing between weak AI and strong AI.

What characterizes artificial intelligence from a technological and methodological point of view is the learning method or model through which the system becomes proficient in a task or action. It is these learning models that distinguish Machine Learning from Deep Learning.

How artificial intelligence works

So far we have looked at artificial intelligence from a technological standpoint. From the point of view of intellectual abilities, the functioning of an AI rests mainly on four functional levels:

  • understanding: by simulating cognitive abilities that correlate data and events, the AI is able to recognize texts, images, tables, videos and voice, and to extract information from them;
  • reasoning: through logic, the systems connect the multiple pieces of information collected (via precise mathematical algorithms, in an automated way);
  • learning: here we speak of systems with specific functions for analyzing data inputs and returning them “correctly” in output (the classic example is Machine Learning systems, which use automatic learning techniques to lead the AI to learn and perform various functions);
  • interaction (Human-Machine Interaction): this refers to how the AI operates in its interaction with humans. It is here that NLP – Natural Language Processing – systems are advancing strongly: technologies that allow humans to interact with machines (and vice versa) using natural language.

Examples of artificial intelligence

Over-the-top players such as Facebook, Google, Amazon, Apple and Microsoft are battling not only to bring innovative AI startups into their fold, but also to launch and feed research projects whose first fruits we are already seeing (image and face recognition, voice applications, language translation, etc.).

Today, technological maturity has allowed artificial intelligence to move out of the research labs and into daily life. While as consumers we get significant tastes of it above all through Google and Facebook, in the business world the maturity (and availability) of technological solutions has brought the potential of AI to many segments.

Sales

Artificial intelligence applied to sales has already shown important results, particularly through the use of expert systems [applications that fall within the branch of artificial intelligence because they reproduce the performance of a human expert in a specific domain of knowledge or field of activity – ed].

Solutions that integrate expert systems allow users (even non-experts) to solve particularly complex problems that would otherwise require the intervention of a human expert in the specific sector, activity or domain of knowledge where the problem arises.

In simple terms, they are systems that allow people to find a solution to a problem even without requiring the intervention of an expert.

From a technological point of view, expert systems make it possible to implement certain inference procedures automatically (i.e. logical procedures: through an inductive or deductive process, a conclusion is reached from the analysis of a series of facts or circumstances).

In particular, so-called rule-based expert systems exploit the well-known IF-THEN construct of computer science, where IF is the condition and THEN the action (if a certain condition occurs, then a certain action follows).

The reason expert systems fall within the branch of artificial intelligence, rather than among ordinary software programs, is that, given a series of facts, the rules they are composed of enable them to deduce new facts.
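A toy forward-chaining engine makes the point concrete: given a handful of facts and IF-THEN rules, the system keeps firing rules until it can deduce nothing new. The facts and rule names below are invented for illustration.

```python
# A toy forward-chaining rule engine: given a set of facts and IF-THEN
# rules, it keeps applying rules until no new facts can be deduced.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (conditions, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # IF all conditions hold THEN assert the conclusion
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical configurator-style rules: from two observed facts,
# the engine deduces two new ones by chaining.
rules = [
    (("high_load", "outdoor_installation"), "needs_reinforced_housing"),
    (("needs_reinforced_housing",), "steel_frame_required"),
]
result = forward_chain({"high_load", "outdoor_installation"}, rules)
print(sorted(result))
```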

These systems perform particularly well as commercial configurators: solutions used by sales teams in business models where the proposal is especially complex because of the nature of the products marketed, the possible combinations of solutions, and the variables that affect the final result and, therefore, the realization of the product itself and its price.

In general, a product configurator must simplify the choice of an asset to purchase – a process that is not always straightforward when many variables are involved (sizing, large numbers of components, use of particular materials, combinations of raw materials with consequent impacts on physical, mechanical or chemical properties, etc.).

When the products to be configured are part of complex projects (think of manufacturing plants, or systems and machinery that must operate in particular climatic conditions or “critical” industrial environments), product configurators must be “expert and intelligent” enough to enable users to independently search for, identify, evaluate and request what they need, without turning to a technical expert.

This is where expert systems – such as those developed by Myti – best express their potential.

DECLARO, for example, is a “rule engine” that allows the product configurator to put the right questions to the non-expert user, each followed by further appropriate questions.

The accumulation of experiences (between questions and answers) not only accelerates and makes the configuration of the solution suited to your needs more effective, but also becomes a company knowledge base system that is continuously enriched.

In the solution developed by Myti, the question-and-answer “engine” is presented as an ordinary web interface. The IF-THEN rules are built upstream by the domain expert, but the system is then able to question a non-expert user and, based on the answers – drawing on the knowledge of the expert who defined the rules – ask further questions that help the user (for example a salesperson) choose and configure a complex product or an articulated commercial proposal.

Customer care

Voice and virtual assistants (chatbots, Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa) use artificial intelligence both for natural language recognition and for learning and analyzing users’ habits and behaviors. Real-time analysis of large amounts of data makes it possible to understand people’s “sentiment” and needs, improving customer care, user experience and assistance and support services, but also creating and refining sophisticated engagement mechanisms, up to predicting purchasing behaviors from which to derive communication strategies and service proposals.

AI in Marketing has been showing its full power for some years now, and the largest area of use is certainly the management of the relationship with users.

Artificial Intelligence Marketing (AIM), algorithms to persuade people

For several years now a genuine discipline has existed: Artificial Intelligence Marketing (AIM), a branch of marketing that uses the most modern technologies in the field of artificial intelligence – such as Machine Learning and NLP (Natural Language Processing) – integrated with mathematical and statistical techniques (such as Bayesian networks) and behavioral marketing (behavioral targeting).

In concrete terms, it is the use of artificial intelligence and Machine Learning algorithms with the aim of persuading people to take an action, buy a product or access a service – in other words, to respond to a “call to action”.

This, in essence, is what AIM does: it aggregates and analyzes data (including unstructured, natural-language data) in a continuous process of learning and improvement, in order to identify, case by case, the actions, strategies and communication and sales techniques most likely to be effective (those with the highest potential of success for the individual target user).
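As a purely illustrative sketch (the data and segments are invented, and simple frequency estimates stand in for the Bayesian and Machine Learning techniques mentioned above), the core of such a system is choosing, per user segment, the message with the highest estimated response rate:

```python
# Toy AIM-style scoring: from past interactions, estimate which message
# style is most likely to trigger a "call to action" response for a
# given user segment, then pick the best one.

history = [
    # (segment, message_style, responded)
    ("young", "informal", True),
    ("young", "informal", True),
    ("young", "formal", False),
    ("senior", "formal", True),
    ("senior", "informal", False),
    ("senior", "formal", True),
]

def response_rate(segment, style):
    shown = [r for s, st, r in history if s == segment and st == style]
    return sum(shown) / len(shown) if shown else 0.0

def best_message(segment, styles=("informal", "formal")):
    # pick the style with the highest observed response rate
    return max(styles, key=lambda st: response_rate(segment, st))

print(best_message("young"))   # -> informal
print(best_message("senior"))  # -> formal
```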

Health

AI has already improved many technological systems used by people with disabilities (voice systems, for instance, have improved to the point of allowing completely natural communication even to those unable to speak), but it is in the diagnosis and treatment of tumors and rare diseases that the new capabilities of AI will really be seen.

Already today, the market offers cognitive systems capable of drawing on, analyzing and learning from a vast pool of data (scientific publications, research, medical records, drug data, etc.) at a speed unimaginable for humans, accelerating often highly critical diagnostic processes for rare diseases or suggesting optimal treatment paths in the case of tumors or particular diseases.

Moreover, AI-based virtual assistants are appearing ever more frequently in operating theaters and in support of reception staff and first-aid services.

Fraud prevention

Fraud prevention is one of the most mature applications, in which artificial intelligence materializes through what are technically called “advanced analytics”: highly sophisticated analyses that correlate data, events, behaviors and habits to detect potentially fraudulent activity in advance (such as the cloning of a credit card or the execution of an unauthorized transaction). These systems can also find application in other business contexts, for example in risk mitigation, the protection of information and data, and the fight against cybercrime.
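A minimal, hypothetical sketch of the statistical screening behind such systems: flag transactions whose amount deviates sharply from an account’s usual behavior. Real fraud engines correlate many more signals than a single z-score test.

```python
# Flag transactions whose amount is a statistical outlier for the
# account (a simple z-score test). The threshold is deliberately low:
# in a small sample a huge outlier inflates the standard deviation,
# capping the z-scores that are attainable.

from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard
    deviations away from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

history = [42.0, 38.5, 51.0, 44.9, 39.8, 47.2, 980.0, 41.3]
print(flag_anomalies(history))  # the 980.0 transaction (index 6) is flagged
```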

Supply chain

Optimizing and managing the supply and distribution chain now requires sophisticated analyses, and here AI is the system that makes it possible to connect and monitor the entire supply chain and all the players involved. A significant application of artificial intelligence to supply chain management concerns order management: technologies that exploit AI aim not only to simplify processes but to integrate them completely, from purchasing to inventory, from warehouse to sales, and even with marketing, for the preventive management of supplies according to promotional activities or communication campaigns.

Public security

The ability to analyze huge amounts of data in real time and to “deduce” through correlations of events, habits, behaviors and attitudes, combined with systems for geo-localization and for monitoring the movement of things and people, offers enormous potential for improving the efficiency and effectiveness of public safety.

Examples include security and crime prevention in airports, railway stations and metropolitan areas, or crisis prevention and management in cases of natural disasters such as earthquakes and tsunamis.

Artificial intelligence: 2022 update

GPT-3, the AI that writes texts

During 2020, OpenAI presented GPT-3, a powerful “intelligent tool” for producing texts, based on unsupervised pre-training techniques for the development of Natural Language Processing systems. GPT-3 is a “language generator” able to write articles and essays in complete autonomy.

GPT-3 was preceded by GPT-2, which was already capable of writing texts in a range of different styles depending on the phrase entered as input. To grasp the difference between the two systems, consider that GPT-3 has 175 billion “parameters” (the values that the model’s neural network optimizes during training), while GPT-2 has “just” 1.5 billion.
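At heart, these systems are next-token predictors. The toy bigram model below illustrates the idea at a microscopic scale (the corpus is invented): it counts which word follows which and always picks the most frequent successor; GPT-3’s 175 billion parameters replace these simple counts with a learned deep network.

```python
# A toy bigram "language model": count word successors in a corpus,
# then generate text by repeatedly emitting the most frequent successor.

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        ranked = successors[words[-1]].most_common(1)
        if not ranked:
            break  # dead end: the word never appears mid-corpus
        words.append(ranked[0][0])
    return " ".join(words)

print(generate("the"))
```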

DALL-E, from words to images

OpenAI also released a new artificial intelligence model, DALL-E, announced in early 2021. The system is capable of producing images from textual descriptions expressed in natural language: given a text (or text plus image) input, it produces an artificial image as output. The name combines the painter Salvador Dalí and the cinema robot WALL-E, and the model is a variant of GPT-3.

LaMDA from Google and Wav2vec-U from Facebook

OpenAI’s GPT systems were followed by Google’s LaMDA and Facebook’s Wav2vec-U natural language models. LaMDA (an acronym for “Language Model for Dialogue Applications”) is based, like BERT and GPT-3, on Transformer technology and was born from Google’s intent to better understand users’ intentions when they search the web.

Wav2vec-U, by contrast, is a method for building speech recognition systems without needing transcripts on which to train the model.

Deepfake faces: “this person does not exist”

ThisPersonDoesNotExist.com is a site that uses neural networks to generate fake, completely invented faces. Created by Philip Wang, a software engineer at Uber, the site uses StyleGAN, a GAN (Generative Adversarial Network) developed by NVIDIA Corporation.

Deep Nostalgia, revive the past

In the wake of deepfakes, a new AI-based system called “Deep Nostalgia”, the work of the MyHeritage company, appeared in early 2021. It is software that “animates” photos of people, making them “come back to life”. Deep Nostalgia uses a computer vision technique called Face Alignment, based on deep neural networks.

Copilot, the software that writes software

GitHub (owned by Microsoft) has developed Copilot, a program that uses artificial intelligence to assist developers as they write code: for example, you type a comment describing what you need, and Copilot guesses the programmer’s intent and suggests the rest of the code.

Shopping assistance robot

The use of robots in the sales sector is becoming increasingly popular. Their functions include assisting customers at the checkout, answering buyers’ questions about the location of items and helping them choose; some can also clean floors and deliver products to the customer’s doorstep.

Artificial Intelligence and Digital Agenda in Italy

Artificial intelligence has long been on AgID’s work tables and is one of the most widely debated and studied topics within the Italian Digital Agenda, with the aim of understanding how the diffusion of new AI tools and technologies can affect the construction of a new relationship between the State and citizens, and of analyzing the consequent social implications in terms of new possibilities for simplification, information and interaction.

Following this trend, a Task Force was created in Italy within AgID, whose members have the task of:

– studying and analyzing the main applications relating to the creation of new services for citizens, defining strategies for managing the opportunities for the Public Administration;

– mapping, at the Italian level, the main centers – university and otherwise – operating in the artificial intelligence sector, with reference to applications in services for citizens;

– mapping the work already started by some central and local administrations, proposing actions to be taken for the elaboration of strategic policies;

– highlighting and studying the social implications of introducing artificial intelligence technologies into public services.


The state of the art of regulations on artificial intelligence (2022)


In October 2020, the Italian government published the draft National Strategy for Artificial Intelligence, based on proposals made in July by a group of experts. The main focus of the Strategy is training. The Italian AI ecosystem rests on:

  • research and technology transfer;
  • production;
  • adoption.

Italy is also part of the Global Partnership on AI (GPAI) , an international initiative that aims to encourage the responsible development of artificial intelligence to ensure respect for human rights, inclusion and diversity.


On April 21, 2021, the European Commission presented a draft regulation on artificial intelligence: rules that will apply directly and uniformly in all Member States and that follow a risk-based approach (the greater the risk, the stricter the rules). Companies that do not respect these rules face fines of up to 6% of turnover.


In March 2022, the European Parliament’s Committee on the Internal Market and Consumer Protection, together with the Committee on Civil Liberties, Justice and Home Affairs, published a joint report containing recommendations on the proposed Regulation on Artificial Intelligence.


Work and artificial intelligence: present and future

When it comes to artificial intelligence, it is impossible not to touch on ethical and social aspects such as those related to work and employment, as fears in the global community are growing.

These fears are justified if you consider that half of today’s jobs could be automated by 2055.

Any type of work is subject to partial automation, and it is from this consideration that the report A Future That Works: Automation, Employment and Productivity, produced by the McKinsey Global Institute (MGI) – a 148-page report, available on the World Economic Forum website, where it was officially presented – estimates that about half of the current workforce can be affected by automation using technologies already known and in use today.

In reality, several studies put a brake on the fears that have been spreading for months across the web and social networks about artificial intelligence’s responsibility for “destroying” jobs. Here are the most significant:

  • according to the Capgemini study “Turning AI into concrete value: the successful implementers’ toolkit”, 83% of the companies interviewed confirm the creation of new positions within the company; moreover, three quarters of the companies interviewed recorded a 10% increase in sales following the implementation of artificial intelligence;
  • a recent report by The Boston Consulting Group and MIT Sloan Management Review shows that a reduction of the workforce is feared by fewer than half of managers (47%), most of whom are instead convinced of AI’s potential (85% of respondents think it will allow companies to gain and maintain a competitive advantage).

The risks of artificial intelligence

Economists have long been wondering which tools to deploy so that society’s evolution towards an ever less labor-intensive economy – an evolution now accelerated by artificial intelligence – does not result in the impoverishment of the population; that outcome would require a “redistribution” of wealth, considering that most of it will be produced by machines.

Social issues are flanked by ethical questions on the development and evolution of artificial intelligence and new technologies.

We have long been questioning the “power of algorithms” and big data, wondering whether they will mark the superiority of machine “brains” over human ones.

The fears (fed online by well-known figures such as Stephen Hawking and Elon Musk) may seem excessive, but underestimating the impacts of artificial intelligence could be the number one risk.

Stephen Hawking warned against the risks of artificial intelligence: “We are unable to predict what we will be able to do when our minds are amplified by artificial intelligence,” said the physicist during the Web Summit in Lisbon.

“Perhaps, with new tools, we will also be able to remedy all the damage we are causing to nature, and perhaps we will also be able to find definitive solutions to poverty and disease. But it is also possible that, with the destruction of millions of jobs, our economy and our society will be destroyed.”

“Artificial intelligence could be the worst event in the history of our civilization,” runs the astrophysicist’s dramatic vision. “It brings with it dangers, such as powerful automatic weapons – nuclear or biological – and new ways of allowing a few individuals and organizations to oppress and control multitudes of people (and things).

We must prepare to manage it, to prevent these potential risks from taking shape and becoming reality.”

It is also striking that one of the latest warnings came from a successful entrepreneur like Elon Musk: “Artificial intelligence is the greatest risk our civilization faces,” he warned.

In particular, he highlighted the risks of a war unleashed by computers, or of an employment catastrophe caused by decisions based solely on artificial intelligence processing – in his view the true dominant pillar of the future economy, capable of handing over to machines thousands, perhaps millions, of jobs still performed by humans today.

Decentralized artificial intelligence: what it is and why it can be the answer to ethical problems

The international scientific community has long been working on so-called superintelligence, a general artificial intelligence [research in this field aims to create an AI capable of completely replicating human intelligence; it belongs to the branch of strong artificial intelligence, according to which it is possible for machines to become wise or self-conscious without necessarily showing thought processes similar to human ones – ed].

However, the risks are very high, especially if the research is carried out by a few companies capable of dedicating huge resources (money and skills) to the most innovative projects.

Decentralizing artificial intelligence – making it possible for it to be designed, developed and controlled by a large international network through open-source programming – is, for many researchers and scientists, the surest approach not only to creating superintelligence but to democratizing access to artificial intelligence, reducing the risks of monopoly and thus addressing ethical and safety problems.

Today, one of the major concerns about artificial intelligence is the use of data, and the trust with which an AI exploits data and information to reach certain decisions and/or carry out specific actions.

The human mind, especially when it comes to Deep Learning (see the article “What is Machine Learning, how it works and what are its applications” for a more detailed picture), cannot interpret the steps taken by an artificial intelligence through a deep neural network, and must therefore “trust” the result reached by an AI without understanding or knowing how it came to that conclusion.

In this scenario, blockchain seems to be the most reassuring answer: blockchain technology allows immutable records of all the data, variables and processes used by artificial intelligences to arrive at their conclusions and decisions – exactly what is needed to audit the entire AI decision-making process.
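The idea can be sketched with a toy hash chain (the record names and decisions are invented for illustration): each decision record embeds the hash of the previous one, so later tampering with any entry is detectable. Real blockchains add distribution and consensus on top of this structure.

```python
# A minimal hash-chained audit log for AI decisions: each record stores
# the hash of the previous record, so modifying any entry breaks the
# chain and verification fails.

import hashlib
import json

def add_record(chain, decision):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"decision": decision, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain):
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False  # record contents were altered
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False  # chain link is broken
    return True

log = []
add_record(log, {"input": "loan_application_17", "output": "approved"})
add_record(log, {"input": "loan_application_18", "output": "rejected"})
print(verify(log))                            # True: the chain is intact
log[0]["decision"]["output"] = "rejected"     # tamper with a past decision
print(verify(log))                            # False: tampering detected
```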
