EP3815001A1 - Procede de modelisation pour le controle des resultats fournis par un reseau de neurones artificiels et autres procedes associes - Google Patents
- Publication number
- EP3815001A1 (application EP19733051.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- variables
- neural network
- function
- variable
- artificial neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
Definitions
- the technical field of the invention is that of artificial neural networks.
- the present invention relates to a method for monitoring the results provided by an artificial neural network and more particularly to a modeling process for monitoring the results provided by an artificial neural network.
- the present invention also relates to a method for monitoring the results supplied by an artificial neural network, a method for comparing the performance of two artificial neural networks, a method for analyzing a decision made by an artificial neural network, a device and a computer program product implementing such methods, and a recording medium for the computer program product.
- neural networks, or artificial neural networks, constitute the main tool of deep learning, which attempts to model data in order to then perform specific tasks on new data, such as classification or detection.
- a neural network goes through a training or learning phase, during which it learns by iterating several times over a training database, and then through a generalization phase, during which it performs, on a generalization database, the task for which it was trained.
- a neural network is a complex algorithm, involving thousands or even millions of parameters in its decision-making. While this complexity is necessary for the neural network to be capable of detecting structures in data, it limits the interpretation a user can make of the results, preventing them from checking their relevance.
- an image is supplied as input to the neural network and the latter ideally provides the same image as output in which it has framed the people.
- the neural network can output the image in which it has properly framed all the people present, which will suggest to the user that the neural network is efficient, without the parameters it used to detect people necessarily all being relevant. For example, if all the images supplied to the neural network during its training show a person against a blue-sky background, the neural network could have based its result notably on the color of the background and not only on the characteristics of a person. The neural network then detects people very well on a blue background but will be unable to detect a person on a red background. In this case, the neural network is not suitable for detecting people, yet the user could have concluded the opposite based on the results the neural network provided on images with a blue background.
- the preferred variables are, for example, more easily interpretable variables.
- polar bear and grizzly bear from data including, for example, the color of the coat, the type of diet, the age of the animal, the size of the animal, etc.
- a user preferred variable could be the color of the coat since this is the most obvious difference between the two species.
- the two neural networks can both present the same performance and correctly classify the data, but the user will prefer to use in his application the first neural network, which mainly uses the color of the coat and whose functioning is therefore more easily comprehensible than that of the second neural network, which also uses the animal's age and size to conclude.
- the invention offers a solution to the problems mentioned above, by making it possible to control the relevance of the data used in decision-making of an artificial neural network.
- a first aspect of the invention relates to a modeling method for controlling the results provided by an artificial neural network, comprising the following steps implemented by a computer: - generating an artificial neural network;
- an operating model of the neural network is generated for each datum tested, each operating model depending on a reduced number of variables, namely the variables having the most weight in the decision-making of the neural network. It is thus possible to monitor the results of the neural network in order, for example, to diagnose a training database, compare the performance of two neural networks, or analyze a decision made by a neural network.
- the modeling method thus defined is deterministic and reproducible, that is to say that the operating model generated is the same as long as the same neural network, the same training database and the same tested data are kept.
- the modeling method according to a first aspect of the invention may have one or more complementary characteristics among the following, considered individually or in all technically possible combinations.
- the first function F1 is an unbounded function.
- the linear approximation of the first function F1 is then more relevant, since a linear function is itself unbounded.
- the first function F1 is defined by F1 = σ⁻¹(R) = ln(R / (1 - R)), where σ is the sigmoid function.
- the result R can thus be obtained by applying to the function F1 the sigmoid function, which is used in logistic regression, one of the simplest algorithms used in machine learning.
- the second function F2 is the first-order Taylor expansion of the first function F1 in the vicinity of a datum. Thus, it suffices to compute the gradient of the first function F1 with respect to the variables v_i to obtain the second function F2.
- the second function F2 is expressed as the sum of a y-intercept coefficient b and of the sum of the variables v_i, each multiplied by a slope coefficient a_i: F2 = b + Σ_i a_i·v_i
- the second function F2 is a linear approximation of the first function F1 with respect to the set of variables v_i on which the result depends.
- a first variable v1 correlated with a second variable v2 is expressed as a function of the second variable v2, as the sum of an uncorrelated variable s1 and of a correlation coefficient C12 multiplied by the second variable v2: v1 = s1 + C12·v2
- the simplification step comprises the following sub-steps:
- the third function F3 is expressed as the sum of the y-intercept coefficient b and of the sum of the remaining variables vr_p, each multiplied by its remaining-variable slope coefficient ar_p: F3 = b + Σ_p ar_p·vr_p
- the third function F3 depends on a smaller number of variables than the result, which makes it easier to control this result.
- the method according to a first aspect of the invention comprises a step of synthesis of the operating models obtained.
- thanks to a step of synthesis of the operating models obtained, it is possible to check the consistency of the results of the neural network.
- a second aspect of the invention relates to a method for controlling the results provided by an artificial neural network characterized in that it comprises all the steps of the modeling method according to a first aspect of the invention and an additional step of evaluation of the training database from at least one operating model.
- a third aspect of the invention relates to a method for comparing the performance of a first network of artificial neurons and a second network of artificial neurons, characterized in that it comprises the following steps:
- a fourth aspect of the invention relates to a method for analyzing a decision made by an artificial neural network, the decision having been taken on the basis of at least one test datum, characterized in that it comprises the steps of the modeling method according to any one of claims 1 to 5, followed by a step of generating a report explaining the decision-making from the operating model of the artificial neural network corresponding to the test data.
- a fifth aspect of the invention relates to a computer characterized in that it is suitable for implementing the modeling method according to a first aspect of the invention and/or the control method according to a second aspect of the invention and/or the comparison method according to a third aspect of the invention.
- a sixth aspect of the invention relates to a computer program product comprising instructions which, when the program is executed by a computer, lead the latter to implement the steps of the modeling method according to a first aspect of the invention and/or the control method according to a second aspect of the invention and/or the comparison method according to a third aspect of the invention.
- a seventh aspect of the invention relates to a computer-readable recording medium on which the computer program product according to a sixth aspect of the invention is recorded.
- FIG. 1 shows a block diagram of the modeling method according to a first aspect of the invention.
- FIG. 2 shows a block diagram of the control method according to a second aspect of the invention.
- FIG. 3 shows a block diagram of the comparison method according to a third aspect of the invention.
- FIG. 4 shows a block diagram of the analysis method according to a fourth aspect of the invention.
- a first aspect of the invention relates to a modeling method 100 for controlling the results provided by an artificial neural network.
- in the rest of the application, the terms "neuron" and "artificial neuron", like "neural network" and "artificial neural network", will be used interchangeably.
- a neural network has a plurality of layers, each comprising a plurality of neurons.
- a neural network has between 2 and 20 layers and each layer of the neural network has between 10 and 2000 neurons.
- each neuron in each layer is connected to each neuron in the previous layer and to each neuron in the next layer through an artificial synapse.
- a connection between two neurons is assigned a synaptic weight or coefficient and each neuron is assigned a bias coefficient.
- the bias coefficient of a neuron is its default value, that is to say its value when the neurons of the previous layer to which it is connected send it no signal.
- the objective of the modeling method 100 is to generate a simplified model for each result R generated by the neural network.
- the term “result generated by a neural network” is understood to mean an output datum associated with the decision-making of the neural network concerning an input datum.
- the neural network is trained on a training or learning database to adapt it to a predefined task. Learning can be supervised or unsupervised. In supervised learning, learning is constrained by the training database: the database is annotated to signal to the neural network the structures it must locate. In contrast, in unsupervised learning, the neural network itself finds underlying structures in the raw data of the training database.
- the predefined task is for example detection, classification or even recognition.
- sorting data consists in separating it into several classes, while classifying data consists in separating it into classes and, in addition, identifying each of the classes. For example, in a sample containing black data and white data, sorting the data corresponds to separating it into two classes, while classifying the data corresponds to separating it into two classes and assigning one the name "black class" and the other the name "white class".
- a neural network having received supervised training is capable of classifying data, whereas a neural network having received unsupervised training is only capable of sorting data.
- the neural network is then tested on a test database or generalization database. For each test datum in the test database, the neural network supplies a result R illustrating its decision concerning that test datum. For example, if the task for which the neural network was trained is classification, and the neural network has decided that the test datum belongs to class C, the result R provided by the neural network is the probability associated with class C.
- the training database and the test database can be two separate databases or two separate parts of the same database.
- the data used in the training database and in the test database are for example biological data, data relating to the carrying out of a process or a product, images, audio data or else electrical signals.
- a datum has a plurality of variables v_i, and each datum used has the same number of variables v_i.
- a datum has between 10 and 10,000 variables v_i.
- the variables v_i can be of numeric, binary or categorical type, such as for example a nationality or a profession, or even dates.
- the variables v_i are for example information on a patient, such as age, symptoms and weight, as well as information on the results of examinations such as blood tests or MRI scans.
- the variables v_i are for example information on the product, such as its name and composition, as well as information on its manufacturing process, such as its manufacturing time and the name of the assembly line on which it was made.
- the variables v_i are for example the variance and the mean of the gray levels.
- the data used can be tabular data comprising a plurality of examples, each example depending on a plurality of variables v_i.
- a tabular datum comprises for example between 1,000 and 1,000,000 examples, each comprising between 10 and 10,000 variables v_i.
- the expression h_k^(l+1) of neuron k of layer l+1 is expressed as a function of the N neurons i of layer l in the following way: h_k^(l+1) = f(Σ_{i=1..N} P_ki·h_i^(l) + b_k)
- where f is the activation function, for example the ReLU function: f(z) = max(z, 0)
- the expression of neuron k of a layer is therefore a function of the expressions of the neurons of the previous layer, and the expression h_k^(1) of neuron k of layer 1 is expressed as a function of the variables v_i of the input datum: h_k^(1) = f(Σ_i P_ki·v_i + b_k)
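As an illustration of the layer-to-layer expression above, here is a minimal forward-pass sketch; the ReLU activation comes from the text, while the weights, biases and layer sizes are arbitrary toy values:

```python
import numpy as np

def relu(z):
    # Activation function f(z) = max(z, 0)
    return np.maximum(z, 0.0)

def layer_forward(h_prev, P, b):
    # h_k^(l+1) = f(sum_i P_ki * h_i^(l) + b_k)
    return relu(P @ h_prev + b)

# First layer: neuron expressions are functions of the input variables v_i
v = np.array([0.5, -1.2, 3.0])             # one datum with 3 variables (toy values)
P1 = np.array([[0.2, -0.1, 0.4],
               [0.7, 0.3, -0.5]])          # synaptic coefficients P_ki (arbitrary)
b1 = np.array([0.1, -0.2])                 # bias coefficients b_k (arbitrary)
h1 = layer_forward(v, P1, b1)              # expressions h_k^(1) of the first layer
```

Stacking several such calls reproduces the recursive structure described in the text: each layer's expressions depend only on the previous layer's.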
- the probability p_k associated with class k is then expressed as follows: p_k = exp(h_k) / Σ_j exp(h_j), where the h_k are the expressions of the neurons of the last layer.
- the result R then corresponds to the maximum probability p_k.
- the result R generated by a neural network is therefore a function of the set of variables v_i of the test datum for which the result R is generated, parameterized by the synaptic coefficients P_ki assigned to the connections of the neural network and by the bias coefficients b_k assigned to each neuron.
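Assuming the usual softmax construction for the class probabilities (the text only states that each class k receives a probability p_k and that R is the maximum), a sketch:

```python
import numpy as np

def softmax(h):
    # p_k = exp(h_k) / sum_j exp(h_j); subtracting the max improves numerical stability
    e = np.exp(h - np.max(h))
    return e / e.sum()

h_out = np.array([2.0, 0.5, -1.0])  # outputs of the last layer (toy values)
p = softmax(h_out)                  # one probability per class
R = p.max()                         # the result R is the maximum class probability
k = p.argmax()                      # index of the decided class
```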
- the modeling method 100 provides a model which approximates the result R generated by a neural network by a simplified expression, a function of a more limited number of variables v_i.
- the modeling method 100 comprises several stages, the sequence of which is shown in FIG. 1. These steps are implemented by a computer comprising at least one processor and one memory.
- the first step 101 of the modeling method 100 is a step of generating an artificial neural network.
- the number of layers and the number of neurons per layer of the neural network are fixed, as are other parameters describing its learning process, such as the learning rate and the regularization coefficient.
- the learning rate of the neural network defines the frequency with which the weights of the neural network are updated during the learning phase, and the regularization coefficient limits the overfitting of the neural network.
- the neural network is ready to be trained.
- the second step 102 of the modeling method 100 is a step of training the neural network on a training database.
- after training, the neural network is able to perform a predefined task on a certain type of data, namely the type of data present in the training database.
- the third step 103 of the modeling method 100 is a step of testing the neural network on at least one test datum depending on a plurality of variables v_i.
- the test data are of the same type as the data in the training database.
- the neural network generates one result R per processed test datum, the result R depending on the same variables v_i as the processed test datum.
- the fourth step 104 of the modeling method 100 is a step of linear approximation of a first function F1 depending on a result R generated in the previous step 103.
- a result R is a function of the variables v_i whose values lie between 0 and 1.
- the result R is therefore a bounded function.
- a linear function is not bounded.
- a transformation is therefore advantageously applied to the result R to obtain a first unbounded function F1, which will then be linearly approximated.
- the first function F1 is unbounded and depends on the same variables v_i as the result R.
- the first function F1 is thus obtained by applying to the result R the inverse of the sigmoid function σ, defined as σ(x) = 1 / (1 + exp(-x)), which gives F1 = ln(R / (1 - R)).
- the sigmoid function is used in logistic regression, one of the simplest machine learning algorithms, to separate one class from the set of other classes of the problem.
- logistic regression consists in applying a sigmoid function to a linear expression.
- approximating the function F1 by a linear function L amounts to approximating the result R by a logistic regression σ(L).
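The link between the bounded result R, the sigmoid and the unbounded first function F1 can be sketched with the standard logit (the inverse of the sigmoid):

```python
import math

def sigmoid(x):
    # s(x) = 1 / (1 + exp(-x)), bounded in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def logit(r):
    # Inverse of the sigmoid: F1 = ln(R / (1 - R)), unbounded for R in (0, 1)
    return math.log(r / (1.0 - r))

R = 0.9            # a bounded result in (0, 1), toy value
F1 = logit(R)      # unbounded value, suitable for linear approximation
```

Applying the sigmoid back to F1 recovers R exactly, which is why a linear approximation of F1 corresponds to a logistic-regression approximation of R.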
- the first function F1 is then linearly approximated, for example by carrying out a first-order Taylor expansion in the vicinity of the test datum, to obtain a second function F2.
- the second function F2 is then expressed as: F2 = b + Σ_i a_i·v_i
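The first-order Taylor expansion F2 = b + Σ a_i·v_i around the test datum can be sketched with finite differences; the toy function standing in for F1 and the finite-difference scheme are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def linearize(F, v0, eps=1e-5):
    # First-order Taylor expansion of F around the test datum v0:
    # F2(v) = b + sum_i a_i * v_i, with a_i = dF/dv_i at v0 (central differences)
    a = np.zeros_like(v0)
    for i in range(len(v0)):
        dv = np.zeros_like(v0)
        dv[i] = eps
        a[i] = (F(v0 + dv) - F(v0 - dv)) / (2 * eps)
    b = F(v0) - a @ v0          # y-intercept chosen so that F2(v0) = F(v0)
    return a, b

# Toy stand-in for the first function F1 of the test datum's variables
F = lambda v: v[0] ** 2 + 3 * v[1]
v0 = np.array([1.0, 2.0])       # the test datum around which we expand
a, b = linearize(F, v0)
```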
- the fifth step 105 of the modeling method 100 according to a first aspect of the invention is a step of simplifying the second function F2.
- the simplification step 105 comprises a first phase consisting in ranking the variables v_i while eliminating the correlations between them.
- the variables v_i are normalized: for example, all the variables v_i have a zero mean and a standard deviation of 1.
- a contribution coefficient W_i is calculated for each variable v_i of the test datum.
- the contribution coefficient W_k of the variable v_k is expressed as: W_k = a_k + Σ_{i≠k} C_ki·a_i, where the C_ki are the correlation coefficients between variables.
- the variable v_i whose contribution coefficient W_i has the highest absolute value is designated as the reference variable v_ref.
- each variable v_i different from the reference variable v_ref is then expressed as a function of the reference variable v_ref, as the sum of an uncorrelated variable s_i and of a correlation coefficient multiplied by v_ref.
- the number of iterations is predefined.
- the number of iterations is strictly less than the number of variables v_i of the second function F2 and greater than or equal to 1.
- the relevance of the value chosen for the number of iterations can be checked by comparing the linear function obtained for this number of iterations with the linear function obtained for a higher number of iterations, using a proximity measure, for example the ratio of the norms of the vectors of slope coefficients of the linear functions obtained.
- the reference variable obtained at each iteration is a synthetic variable, the synthetic variables being independent of each other.
- p synthetic variables are thus obtained, and at the end of these iterations the contribution coefficients of all the other variables are set to zero.
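The iterative selection of synthetic variables can be sketched as follows. This is a hypothetical implementation: the least-squares residualization against the reference variable, the toy correlated data and the function name are assumptions, not the patent's exact procedure.

```python
import numpy as np

def eliminate_correlations(X, a, n_iter):
    # X: (n_examples, n_vars) normalized variables; a: slope coefficients of F2.
    X = X.copy()
    refs = []                              # indices of the synthetic variables
    active = list(range(X.shape[1]))
    for _ in range(n_iter):
        C = np.corrcoef(X[:, active], rowvar=False)
        # Contribution W_k = a_k + sum_{i != k} C_ki * a_i (C_kk = 1)
        W = C @ a[active]
        ref = active[int(np.argmax(np.abs(W)))]  # reference variable of this iteration
        refs.append(ref)
        active.remove(ref)
        for i in active:
            # Replace v_i by its part uncorrelated with v_ref (least-squares residual)
            c = (X[:, i] @ X[:, ref]) / (X[:, ref] @ X[:, ref])
            X[:, i] = X[:, i] - c * X[:, ref]
    return refs

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(200, 1)) for _ in range(3)])
X = (X - X.mean(0)) / X.std(0)             # zero mean, unit standard deviation
refs = eliminate_correlations(X, np.array([1.0, 0.5, 0.2]), n_iter=2)
```

After the loop, the remaining (non-selected) variables would have their contribution coefficients set to zero, as the text describes.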
- the first phase of the simplification step 105 consists, first, in calculating the contribution coefficient of each variable: W1, W2, W3, W4 and W5.
- for example, W3 is:
- W3 = a3 + C31·a1 + C32·a2 + C34·a4 + C35·a5
- the absolute values of the contribution coefficients W1, W2, W3, W4 and W5 are compared with each other, and the variable whose contribution coefficient has the highest absolute value is selected as the reference variable. For example, v1 is selected as the reference variable.
- the new variable v2′ is the part of v2 uncorrelated with the reference variable v1.
- v2′ is then selected as the reference variable for the second iteration.
- v3′, v4′ and v5′ are then expressed as a function of v2′.
- new variables v3″, v4″ and v5″, uncorrelated with v2′, are then calculated. At the end of these calculations, the second iteration is finished.
- a reference variable is selected from the new variables v3″, v4″ and v5″ as before. For example, v3″ is selected as the reference variable. Then, the contribution coefficients of the remaining variables v4″ and v5″ are set to zero.
- the synthetic variables are then expressed as a function of the variables v_i of the test datum, by successive substitutions, until their expression depends only on the variables v_i of the second function F2.
- the variables v_i of the test datum on which the synthetic variables depend are the remaining variables vr_p.
- the number of remaining variables vr_p is strictly less than the number of variables v_i of the test datum.
- the third function F3 is then expressed as: F3 = b + Σ_p ar_p·vr_p
- the synthetic variables are scanned in reverse order, from the last selected to the first selected.
- the slope coefficient of the k-th synthetic variable, selected at the k-th iteration of the first phase of the simplification step 105, is updated first, while the slope coefficients of the synthetic variables selected after the k-th synthetic variable, that is to say at the iterations following the k-th one, are updated in turn.
- for example, the synthetic variables are v1, v2′ and v3″.
- at step 1 of calculating the slope coefficients of the remaining variables, the slope coefficient of the third synthetic variable v3″ is updated.
- at step 2, the slope coefficient of the second synthetic variable v2′ is updated.
- at step 3, the slope coefficient of the first synthetic variable v1 is updated.
- the slope coefficients of the second and third synthetic variables v2′ and v3″, selected after the first synthetic variable v1, are then updated.
- the third function F3 therefore depends only on the remaining variables vr_p, that is to say on a reduced number of variables v_i of the test datum.
- step 106 of the modeling method 100 consists in applying to the third function F3 the inverse of the transformation used to obtain the first function F1, that is to say the sigmoid function, to obtain an operating model of the neural network for the result R.
- the operating model of the neural network is a simplified expression of the result R, dependent on a reduced number of variables v_i, which facilitates the control of the result R provided by the neural network.
- the modeling method 100 generates an operating model for each result R. If several results R have been generated by the neural network, the modeling method 100 may for example include an additional step of synthesizing the operating models. When the test data are similar, this synthesis step can be used to check the consistency of the results of the neural network.
- a second aspect of the invention relates to a control method 200 for monitoring the results provided by an artificial neural network.
- the control method 200 according to a second aspect of the invention comprises several steps, the sequence of which is shown in FIG. 2.
- the control method 200 according to a second aspect of the invention comprises all the steps 101 to 106 of the modeling method 100 according to a first aspect of the invention making it possible to obtain at least one operating model of the neural network.
- the control method 200 then comprises a step 201 of evaluating the training database, consisting in comparing the restricted set of variables v_i on which each operating model depends with a set of relevant variables v_i.
- the variables v_i are for example the mean and the variance of each pixel of the image.
- the relevant variables v_i are therefore the means and variances of the pixels on which the people are located.
- if the operating model depends mainly on variables v_i linked to background pixels rather than to pixels corresponding to a person in the image, this means that the variables v_i taken into account in the decision-making of the neural network are wrong, and therefore that the learning did not make the neural network efficient at the expected task. This is an indication that the training database is not suitable for detecting people.
- the irrelevant variables v_i taken into account by the neural network then provide leads for understanding why the training database is not suitable, and thus for remedying it.
- the fact that the neural network takes the background pixels into account may be due to backgrounds that are too homogeneous behind the people.
- one solution is therefore to add images with more varied backgrounds to the training database.
- if the operating model depends mainly on relevant variables v_i, this means that the training database is well suited to the expected task.
- a third aspect of the invention relates to a comparison method 300 for comparing the performance of two artificial neural networks.
- the two neural networks can, for a given test datum, have similar results (for example, in the case where we want to predict a patient's disease from symptoms, the two neural networks output the same disease with the same probability of certainty) or different results (for example, the two neural networks do not output the same disease).
- this can then make it possible to choose a preferred neural network, which uses more relevant variables in its decision-making.
- this can for example make it possible to understand why one of the neural networks is failing.
- the comparison method 300 according to a third aspect of the invention comprises several stages, the sequence of which is shown in FIG. 3.
- the comparison method 300 comprises all the steps 101 to 106 of the modeling method 100 according to a first aspect of the invention for a first neural network making it possible to obtain at least a first operating model of the first neural network and all the steps 101 to 106 of the modeling method 100 according to a first aspect of the invention for a second neural network making it possible to obtain at least a second operating model of the second neural network.
- the comparison method 300 then comprises a step 301 of comparing the performances of the first neural network and the second neural network by comparing, for the same test datum, the first operating model of the first artificial neural network and the second operating model of the second artificial neural network. More precisely, the comparison step 301 consists in comparing the variables v_i on which the first operating model depends with the variables v_i on which the second operating model depends. The variables v_i taken into account in one of the two operating models and not in the other are then compared with a set of relevant variables v_i. The neural network using the fewest irrelevant variables v_i in its decision-making is considered the most efficient.
- for example, to diagnose the flu, the first operating model takes into account fever, fatigue and aches, while the second operating model takes into account fever, fatigue and ear pain.
- the variables v_i taken into account in one of the two operating models and not in the other are the aches for the first operating model and the ear pain for the second operating model.
- the relevant variables v_i are the symptoms commonly seen in a patient with influenza. Aches are therefore among the relevant variables v_i, which is not the case for ear pain.
- the most efficient neural network for this task is therefore the first neural network.
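Using the flu example above, the comparison step 301 reduces to set operations on the variables of each operating model; the relevant-variable set shown here is illustrative:

```python
# Variables used by each operating model (from the flu example)
model1_vars = {"fever", "fatigue", "aches"}
model2_vars = {"fever", "fatigue", "ear pain"}
relevant = {"fever", "fatigue", "aches", "cough", "headache"}  # assumed relevant set

# Variables taken into account by one model and not the other
only_in_1 = model1_vars - model2_vars
only_in_2 = model2_vars - model1_vars

# The network using the fewest irrelevant variables is deemed the most efficient
irrelevant_1 = model1_vars - relevant
irrelevant_2 = model2_vars - relevant
best = 1 if len(irrelevant_1) <= len(irrelevant_2) else 2
```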
- control method 200 and the comparison method 300 are compatible, that is to say that the comparison method 300 may include the step 201 of evaluation of the training database.
- the step 201 for evaluating the training database of the control method 200 and the step 301 for comparing the performances of the two neural networks can be implemented by a computer or performed manually.
- a fourth aspect of the invention relates to a method for analyzing a decision making of an artificial neural network.
- Decision-making is automatic, i.e. it is carried out by a neural network that has been trained for this decision-making.
- the decision is made based on at least one test data.
- for example, the decision of a neural network adapted to the detection of pedestrians can be to brake or not, depending on the presence or absence of a pedestrian in the environment close to the car.
- the analysis method 400 according to a fourth aspect of the invention comprises several stages, the sequence of which is shown in FIG. 4.
- the analysis method 400 comprises all the steps 101 to 106 of the modeling method 100 according to a first aspect of the invention for a neural network making it possible to obtain at least one operating model of the neural network from at least one test datum.
- the analysis method 400 then comprises a step 401 of generating a report explaining the decision-making of the neural network from the operating model or models corresponding to the test data.
- the step 401 of generating a report consists, for example, in synthesizing the operating models, if there are several of them, to identify the variables having the most weight in the decision-making, and in generating a report comprising these variables.
- the synthesis consists, for example, in keeping only the variables whose percentage of presence in the operating models is above a certain presence threshold.
- the report includes, for example, the variables accompanied by their percentage of presence and their weight in the decision-making.
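The synthesis and report generation just described can be sketched as follows; the 50% presence threshold and the variable names are illustrative assumptions:

```python
from collections import Counter

def synthesize(models, threshold=0.5):
    # Keep only variables whose presence rate across the operating models
    # exceeds the threshold, and report that rate for each kept variable.
    counts = Counter(v for model in models for v in model)
    n = len(models)
    return {v: count / n for v, count in counts.items() if count / n > threshold}

# One set of influential variables per operating model (toy data)
models = [{"v1", "v2"}, {"v1", "v3"}, {"v1", "v2", "v4"}]
report = synthesize(models, threshold=0.5)
# v1 appears in 3/3 models and v2 in 2/3; v3 and v4 fall below the threshold
```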
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1856012A FR3083354A1 (fr) | 2018-06-29 | 2018-06-29 | Procede de modelisation pour le controle des resultats fournis par un reseau de neurones artificiels et autres procedes associes |
PCT/EP2019/067289 WO2020002573A1 (fr) | 2018-06-29 | 2019-06-28 | Procede de modelisation pour le controle des resultats fournis par un reseau de neurones artificiels et autres procedes associes |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3815001A1 (fr) | 2021-05-05 |
Family
ID=65443896
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19733051.7A Withdrawn EP3815001A1 (fr) | 2018-06-29 | 2019-06-28 | Procede de modelisation pour le controle des resultats fournis par un reseau de neurones artificiels et autres procedes associes |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210279526A1 (fr) |
EP (1) | EP3815001A1 (fr) |
CA (1) | CA3104759A1 (fr) |
FR (1) | FR3083354A1 (fr) |
SG (1) | SG11202012987TA (fr) |
WO (1) | WO2020002573A1 (fr) |
- 2018
  - 2018-06-29: FR FR1856012A patent/FR3083354A1/fr active Pending
- 2019
  - 2019-06-28: SG SG11202012987TA patent/SG11202012987TA/en unknown
  - 2019-06-28: US US17/255,824 patent/US20210279526A1/en active Pending
  - 2019-06-28: EP EP19733051.7A patent/EP3815001A1/fr not_active Withdrawn
  - 2019-06-28: CA CA3104759A patent/CA3104759A1/fr active Pending
  - 2019-06-28: WO PCT/EP2019/067289 patent/WO2020002573A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
FR3083354A1 (fr) | 2020-01-03 |
WO2020002573A1 (fr) | 2020-01-02 |
SG11202012987TA (en) | 2021-02-25 |
CA3104759A1 (fr) | 2020-01-02 |
US20210279526A1 (en) | 2021-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Suriyal et al. | Mobile assisted diabetic retinopathy detection using deep neural network | |
FR3095042A1 (fr) | Method for defining a path | |
FR3018118A1 (fr) | Method for testing an electronic system | |
Shamia et al. | An Online Platform for Early Eye Disease Detection using Deep Convolutional Neural Networks | |
EP4099228A1 (fr) | Annotation-free machine learning improved by adaptive open-set clustering of classes | |
EP3660748A1 (fr) | Method for analyzing a set of neural network parameters in order to obtain a technical improvement, for example a memory saving | |
EP3815001A1 (fr) | Modeling method for checking the results provided by an artificial neural network, and other associated methods | |
WO2020229310A1 (fr) | Method for automatically analyzing images in order to automatically recognize at least one rare characteristic | |
WO2019211367A1 (fr) | Method for automatically generating artificial neural networks and method for assessing an associated risk | |
EP3929809A1 (fr) | Method for detecting at least one biometric feature visible in an input image by means of a convolutional neural network | |
EP2825995B1 (fr) | System for determining the identification of a camera from a photograph and method implemented in such a system | |
FR3126253A1 (fr) | Method for normalizing the variability of an image, application of this method to anomaly detection, and visual inspection system implementing this detection | |
EP1554687B1 (fr) | Fuzzy associative system for the description of multimedia objects | |
EP4012620A1 (fr) | Transfer-learning-based machine learning method | |
EP4191530A1 (fr) | Method for simultaneous localization and mapping incorporating self-supervised temporal masking, and machine learning model for generating such masking | |
FR3112879A1 (fr) | Method for automatic quality control of an aeronautical part | |
FR3133472A1 (fr) | Method for anomaly detection using a global-local model | |
EP4309118A1 (fr) | Neutral data sampling and differentiable correlation loss function | |
FR3113155A1 (fr) | Method for identifying a dental implant visible in an input image by means of at least one convolutional neural network | |
FR3117646A1 (fr) | Method for compressing an artificial neural network | |
WO2021009364A1 (fr) | Method for identifying outlier data in a set of input data acquired by at least one sensor | |
EP4189642A1 (fr) | Label prediction for digital images, in particular medical images, and provision of explanations associated with said labels | |
FR3128045A1 (fr) | Method for managing a multitask structure of a set of convolutional neural networks | |
WO2021245227A1 (fr) | Method for generating a decision-support system, and associated systems | |
WO2024002959A1 (fr) | Image classification method, corresponding electronic device and computer program product | |
Legal Events

- STAA: Information on the status of an EP patent application or granted EP patent (STATUS: UNKNOWN)
- STAA: Information on the status of an EP patent application or granted EP patent (STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE)
- PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (ORIGINAL CODE: 0009012)
- STAA: Information on the status of an EP patent application or granted EP patent (STATUS: REQUEST FOR EXAMINATION WAS MADE)
- 17P: Request for examination filed (Effective date: 2020-12-28)
- AK: Designated contracting states (Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR)
- DAV: Request for validation of the European patent (deleted)
- DAX: Request for extension of the European patent (deleted)
- STAA: Information on the status of an EP patent application or granted EP patent (STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN)
- 18D: Application deemed to be withdrawn (Effective date: 2021-08-17)