EP1866834A2 - Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones - Google Patents

Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones

Info

Publication number
EP1866834A2
Authority
EP
European Patent Office
Prior art keywords
neurons
interest
object image
image
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06725370A
Other languages
German (de)
English (en)
French (fr)
Inventor
Christophe Garcia
Stefan Duffner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA
Publication of EP1866834A2
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, by matching or filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • A system and method for locating points of interest in an object image using a neural network is disclosed.
  • The field of the invention is that of digital processing of still or moving images. More specifically, the invention relates to a technique for locating one or more points of interest in an object represented in a digital image.
  • The invention finds an application in particular, but not exclusively, in the field of the detection of physical features in faces present in a digital or digitized image, such as the pupil of the eye, the corner of the eyes, the tip of the nose, the mouth, the eyebrows, etc.
  • The automatic detection of points of interest in face images is a major issue in the field of facial analysis.
  • The detectors used are often based on an analysis of the chrominance of the face: the pixels of the face are labeled as belonging to the skin or to the facial elements according to their color. Other detectors use contrast variations: a contour detector based on the analysis of the luminance gradient is applied, and the facial elements are then identified from the shape of the different contours detected.
  • Some known techniques implement a second phase, in which a geometric face model is applied to all the candidate positions determined in a first phase of independent detection of each element.
  • The elements detected in the initial phase form constellations of candidate positions, and the geometric model, which can be deformable, is used to select the best constellation.
  • A more recent method avoids this classic two-step scheme (independent search for the facial elements followed by the application of geometric rules).
  • This method is based on the use of Active Appearance Models (AAMs) and is described in particular by D. Cristinacce and T. Cootes in "A comparison of shape constrained facial feature detectors" (Proceedings of the 6th International Conference on Automatic Face and Gesture Recognition 2004, Seoul, Korea, pages 375-380, 2004). It consists in predicting the position of the facial elements by attempting to match an active face model to the face in the image, adapting the parameters of a linear model combining shape and texture.
  • However, detectors designed specifically for the detection of the different facial elements do not withstand extreme image illumination conditions, such as over- or under-exposure, or lighting from the side or from below. They are also not very robust to variations in image quality, especially in the case of low-resolution images taken from video streams (acquired for example by means of a "webcam") or previously compressed.
  • Methods based on chrominance analysis are also sensitive to lighting conditions. In addition, they cannot be applied to grayscale images.
  • As for the AAM-based methods, the statistical models used are linear, built by principal component analysis (PCA), and are therefore not robust to global variations of the image, in particular lighting variations. They are also not very robust to partial occlusions of the face.
  • The invention particularly aims to overcome these disadvantages of the prior art.
  • An object of the invention is to provide a technique for locating several points of interest in an image representing an object, which does not require the long and tedious development of filters specific to each point of interest to be located and to each type of object.
  • Another object of the invention is to propose such a localization technique that is particularly robust to the various kinds of noise that can affect the image, such as illumination conditions, chromatic variations, partial occlusions, etc.
  • The invention also aims to provide such a technique that takes into account occlusions partially affecting the images, and that allows the position of the occluded points to be inferred.
  • The invention also aims to propose such a technique that is simple and inexpensive to implement.
  • Another object of the invention is to provide such a technique which is particularly well suited to the detection of facial elements in face images.
  • These objects are achieved by a system for locating at least two points of interest in an object image, which implements an artificial neural network and has a layered architecture comprising: an input layer receiving said object image; at least one intermediate layer, called the first intermediate layer, comprising a plurality of neurons making it possible to generate at least two saliency maps, each associated with a distinct predefined point of interest of said object image; and at least one output layer comprising said saliency maps, which themselves comprise a plurality of neurons, each connected to all the neurons of said first intermediate layer.
  • Said points of interest are localized, in said object image, by the position of a single global maximum on each of said saliency maps.
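  • Purely by way of illustration (this code is not part of the patent text), a minimal NumPy sketch of this maximum search, assuming the output maps are available as an array of shape (n_maps, H, L):

```python
# Hypothetical sketch: locate each point of interest at the position of the
# single global maximum of its saliency map. Array shapes are assumptions.
import numpy as np

def locate_points(saliency_maps: np.ndarray) -> list:
    """Return the (row, col) of the global maximum of each saliency map."""
    positions = []
    for saliency_map in saliency_maps:
        row, col = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
        positions.append((int(row), int(col)))
    return positions

# Example with four 56 x 46 maps (e.g. left eye, right eye, nose, mouth):
maps = np.random.uniform(-1.0, 1.0, size=(4, 56, 46))
print(locate_points(maps))
```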
  • The invention is based on a completely new and inventive approach to the detection of several points of interest in an image representing an object, since it proposes to use a layered neural architecture that generates at the output several saliency maps allowing direct detection of the points of interest to be located, by a simple search for maxima.
  • The invention therefore proposes a global search for the various points of interest by the neural network, over the whole of the object image, which makes it possible in particular to take account of the relative positions of these points, and also makes it possible to overcome the problems related to their total or partial occlusion.
  • the output layer comprises at least two saliency maps each associated with a predefined distinct point of interest.
  • Such a neural architecture is also more robust than prior techniques to possible lighting problems in the object images. It is specified that "predefined point of interest" here means a remarkable element of an object, such as an eye, a nose or a mouth in a face image.
  • The invention therefore does not consist in searching for arbitrary contours in an image, but in locating a predefined, identified element.
  • said object image is a face image.
  • the points of interest sought are then permanent physical traits, such as eyes, nose, mouth, eyebrows, etc.
  • such a location system also comprises at least a second intermediate convolution layer comprising a plurality of neurons.
  • Such a layer may specialize in the detection of low-level elements, such as contrast lines, in the object image.
  • such a location system also comprises at least a third subsampling intermediate layer comprising a plurality of neurons.
  • In one embodiment, a locating system comprises, between said input layer and said first intermediate layer: a second intermediate convolution layer, comprising a plurality of neurons, for detecting at least one line-type elementary shape in said object image, said second intermediate layer delivering a convolved object image; a third intermediate subsampling layer, comprising a plurality of neurons, for reducing the size of said convolved object image, said third intermediate layer delivering a reduced convolved object image; and a fourth intermediate convolution layer, comprising a plurality of neurons, for detecting at least one corner-type complex shape in said reduced convolved object image.
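  • As an illustration only, here is a minimal NumPy sketch of such a convolution / subsampling cascade; the kernel sizes, the random weights and the use of tanh as the sigmoid are assumptions, not values taken from the patent:

```python
# Hedged sketch of the convolution / subsampling cascade described above.
import numpy as np

def convolve_valid(image: np.ndarray, kernel: np.ndarray, bias: float) -> np.ndarray:
    """'Valid' 2D convolution: edge pixels are skipped to avoid border effects."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def subsample(image: np.ndarray, factor: int, weight: float, bias: float) -> np.ndarray:
    """Average each factor x factor block, scale it by a shared weight, add a
    shared bias, and squash through tanh (standing in for the sigmoid)."""
    h, w = image.shape[0] // factor, image.shape[1] // factor
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = image[i * factor:(i + 1) * factor, j * factor:(j + 1) * factor]
            out[i, j] = np.tanh(block.mean() * weight + bias)
    return out

rng = np.random.default_rng(0)
E = rng.uniform(-1, 1, size=(56, 46))                  # normalized input retina
C1 = convolve_valid(E, rng.normal(size=(5, 5)), 0.1)   # low-level shape detector
S2 = subsample(C1, 2, weight=0.5, bias=0.0)            # reduced convolved image
C3 = convolve_valid(S2, rng.normal(size=(3, 3)), 0.1)  # higher-level shape detector
print(E.shape, C1.shape, S2.shape, C3.shape)
```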
  • the invention also relates to a method of learning a neural network of a system for locating at least two points of interest in an object image as described above.
  • Each of said neurons has at least one input weighted by a synaptic weight, and a bias.
  • Such a learning method comprises the steps of: constructing a learning base comprising a plurality of object images annotated according to said points of interest to be located; initializing said synaptic weights and/or said biases; for each of said annotated images of said learning base: constructing said at least two desired output saliency maps from said at least two predefined points of interest annotated on said image, then presenting said image at the input of said location system and determining said at least two saliency maps delivered at the output; and minimizing the difference between the desired and output saliency maps over all of said annotated images of said learning base, so as to determine the optimal synaptic weights and/or biases.
  • The neural network thus learns, on the basis of examples manually annotated by a user, to recognize certain points of interest in object images. It will then be able to locate them in any image presented at the input of the network.
  • Said minimization is a minimization of a mean squared error between the desired and output saliency maps, and implements an iterative gradient backpropagation algorithm. This algorithm is described in detail in Appendix 2 of this document, and allows fast convergence towards the optimal values of the different synaptic biases and weights of the network.
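  • A one-line sketch of that criterion, assuming the desired and output maps are NumPy arrays of shape (n_maps, H, L):

```python
# Sketch of the minimized criterion: mean squared error between the desired
# saliency maps D and the output maps R (array shapes are assumptions).
import numpy as np

def mse(desired: np.ndarray, output: np.ndarray) -> float:
    """Quantity minimized over the learning base by gradient backpropagation."""
    return float(np.mean((desired - output) ** 2))

print(mse(np.ones((4, 56, 46)), np.zeros((4, 56, 46))))  # -> 1.0
```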
  • The invention also relates to a method for locating at least two points of interest in an object image, which comprises the steps of: presenting said object image at the input of a layered architecture implementing an artificial neural network; sequentially activating at least one intermediate layer, called the first intermediate layer, comprising a plurality of neurons and making it possible to generate at least two saliency maps, each associated with a distinct predefined point of interest of said object image, and at least one output layer comprising said saliency maps, said saliency maps comprising a plurality of neurons each connected to all the neurons of said first intermediate layer; and locating said points of interest in said object image by searching, in said saliency maps, for the position of a single global maximum on each of said maps.
  • such a locating method comprises preliminary steps of: detecting, in any image, a zone encompassing said object, and constituting said object image; resizing said object image.
  • This detection can be done with a conventional detector well known to those skilled in the art, for example a face detector that determines a box enclosing a face in a complex image. The resizing can be carried out automatically by the detector, or independently by dedicated means: it ensures that the images provided as input to the neural network all have the same size.
  • The invention further relates to a computer program comprising program code instructions for executing the method of learning a neural network described above when said program is executed by a processor, as well as a computer program comprising program code instructions for executing the method of locating at least two points of interest in an object image described above when said program is executed by a processor.
  • Such programs can be downloaded from a communication network (e.g. the global Internet network) and/or stored in a computer-readable data medium.
  • FIG. 1 presents a block diagram of the neural architecture of the system of the invention for locating points of interest in an object image
  • Figure 2 more specifically illustrates a convolution map, followed by a subsampling map, in the neural architecture of Figure 1
  • Figures 3a and 3b show some examples of face images of the learning base
  • FIG. 4 describes the main steps of the method for locating facial elements in a face image according to the invention
  • Figure 5 shows a simplified block diagram of the locating system of the invention
  • FIG. 6 shows an example of a network of artificial neurons of the perceptron multi-layer type
  • Figure 7 illustrates more precisely the structure of an artificial neuron
  • Figure 8 shows the characteristic of the hyperbolic tangent function used as the transfer function for sigmoidal neurons.

7. Description of an embodiment of the invention
  • The general principle of the invention is based on the use of a neural architecture for automatically detecting several points of interest in object images (more particularly of semi-rigid objects), and in particular in face images (detection of permanent features such as the eyes, nose or mouth). More precisely, the principle of the invention consists in constructing a neural network that learns to transform, in one pass, an object image into several saliency maps, the positions of whose maxima correspond to the positions of the points of interest selected by the user in the input object image.
  • This neural architecture is composed of several heterogeneous layers, which make it possible to automatically develop robust low-level detectors, while learning rules that govern the plausible relative positions of the detected elements and naturally take into account all the available information to locate possibly hidden elements.
  • All neuron connection weights are set during a learning phase, from a set of images of pre-segmented objects, and positions of points of interest in these images.
  • the neural architecture then acts as a cascade of filters making it possible to transform an image zone containing an object, previously detected in a larger image or in a video sequence, into a set of digital maps, of the size of the input image, whose elements are between -1 and 1.
  • Each map corresponds to a particular point of interest whose position is identified by a simple search of the position of the element whose value is maximum.
  • The method of the invention thus allows robust detection of facial elements in faces in various poses (orientations, semi-frontal views), with various facial expressions, possibly containing occluding elements, and appearing in images that exhibit significant variability in terms of resolution, contrast and illumination.

7.1 Neural architecture

  • The architecture of the artificial neural network of the point-of-interest location system of the invention is presented in FIG. 1.
  • the operating principle of such artificial neurons, as well as their structure, is recalled in Appendix 1, which forms an integral part of the present description.
  • Such a neural network is for example a multi-layer perceptron network, also described in appendix 1.
  • Such a neural network is composed of six interconnected heterogeneous layers, referenced E, C1, S2, C3, N4 and R5, which contain a series of maps resulting from a succession of convolution and subsampling operations.
  • The proposed architecture includes: an input layer E: a retina, which is an image matrix of size H x L, where H is the number of rows and L the number of columns.
  • The input layer E receives the elements of an image zone of the same size H x L.
  • Each pixel value P_ij is normalized to E_ij = (P_ij - 128) / 128, a value between -1 and 1.
  • H x L is also the size of the face images of the learning base used to parameterize the neural network, and of the face images in which one or more facial elements are to be detected. This size can be the one obtained directly at the output of the face detector that extracts face images from larger images or video sequences.
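  • A minimal sketch of this normalization, assuming 8-bit grayscale pixels in [0, 255]:

```python
# E_ij = (P_ij - 128) / 128 maps pixel values into [-1, 1), as stated above.
import numpy as np

def normalize_retina(pixels: np.ndarray) -> np.ndarray:
    """Map 8-bit pixel values P_ij to retina inputs E_ij in [-1, 1)."""
    return (pixels.astype(np.float64) - 128.0) / 128.0

patch = np.array([[0, 128], [200, 255]])
print(normalize_retina(patch))  # [[-1.0, 0.0], [0.5625, 0.9921875]]
```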
  • a first convolution layer C1, consisting of NC1 maps referenced C_1i.
  • Each map C_1i is connected to the input map E, and comprises a plurality of linear neurons (as presented in Appendix 1).
  • Each of these neurons is connected by synapses to a set of M1 x M1 neighboring elements in the map E (its receptive field), as described in more detail in Figure 2.
  • Each of these neurons further receives a bias.
  • These M1 x M1 synapses, plus the bias, are shared by all the neurons of a map C_1i.
  • Each map C_1i thus corresponds to the result of a convolution of the input map E by an M1 x M1 kernel, augmented by a bias.
  • Each convolution thereby specializes into a detector of certain low-level shapes in the input map, such as oriented contrast lines of the image.
  • a subsampling layer S2: each map S_2j is connected to a corresponding map C_1i.
  • Each neuron of a map S_2j receives the average of M2 x M2 neighboring elements in the map C_1i (its receptive field), as illustrated in more detail in FIG. 2.
  • Each neuron multiplies this average by a synaptic weight, and adds a bias.
  • The synaptic weight and the bias, whose optimal values are determined during a learning phase, are shared by all the neurons of each map S_2j.
  • The output of each neuron is obtained after passing through a sigmoid function.
  • a second convolution layer C3: each map C_3k is connected to each of the maps S_2j of the subsampling layer.
  • The neurons of a map C_3k are linear, and each of these neurons is connected by synapses to a set of M3 x M3 neighboring elements in each of the maps S_2j; it also receives a bias.
  • The M3 x M3 synapses per map, plus the bias, are shared by all the neurons of the maps C_3k.
  • The maps C_3k thus correspond to the result of the sum of NC3 convolutions by M3 x M3 kernels, augmented by a bias.
  • a neuron layer N4: each neuron of layer N4 is connected to all the neurons of layer C3, and receives a bias.
  • The N4 neurons make it possible to learn to generate the output maps R_5m by maximizing the responses at the positions of the points of interest in each of these maps, while taking the C3 maps into account as a whole, which makes it possible to detect a particular point of interest while taking the detection of the others into account.
  • For example, NN4 = 100 neurons are chosen, with the hyperbolic tangent function (denoted th or tanh) as the transfer function of the sigmoidal neurons; and a layer R5 of maps, consisting of NR5 maps R_5m, one for each point of interest chosen by the user (right eye, left eye, nose, mouth, etc.).
  • The neurons of a map R_5m are sigmoidal, and each is connected to all the neurons of layer N4.
  • Each map R_5m is of size H x L, the size of the input layer E.
  • For example, NR5 = 4 maps of size 56 x 46 are chosen.
  • The position of the neuron with maximum output in each map R_5m corresponds to the position of the corresponding facial element in the image presented at the input of the network.
  • In a variant, the layer R5 has only one saliency map, on which all the points of interest to be located in the image appear.
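  • As an illustration of these last two layers, a hedged NumPy sketch; NN4 = 100 and NR5 = 4 maps of 56 x 46 follow the example above, while the flattened C3 size of 300 and the random weights are assumptions:

```python
# Sketch: the C3 maps, flattened into one vector, feed NN4 tanh neurons (layer
# N4), whose outputs feed every neuron of each output saliency map (layer R5).
import numpy as np

rng = np.random.default_rng(1)
H, L, NN4, NR5 = 56, 46, 100, 4

c3_features = rng.uniform(-1, 1, size=300)      # flattened C3 maps (size assumed)
W4 = rng.normal(scale=0.05, size=(NN4, c3_features.size))
b4 = np.zeros(NN4)
n4 = np.tanh(W4 @ c3_features + b4)             # layer N4: 100 sigmoidal neurons

W5 = rng.normal(scale=0.05, size=(NR5 * H * L, NN4))
b5 = np.zeros(NR5 * H * L)
r5 = np.tanh(W5 @ n4 + b5).reshape(NR5, H, L)   # four saliency maps in [-1, 1]
print(r5.shape)                                 # (4, 56, 46)
```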
  • FIG. 2 illustrates an example of a 5 x 5 convolution map C_1i followed by a 2 x 2 subsampling map S_2j.
  • The convolution performed does not take into account the pixels located on the edges of the map, in order to avoid edge effects.
  • a set T of face images is first manually extracted from a corpus of large images.
  • Each face image is resized to the size H x L of the input layer E of the neural architecture, preferably respecting the natural proportions of the faces.
  • The positions of the eyes, the nose and the center of the mouth are manually marked, as shown in Figure 3a: a set of images annotated according to the points of interest that the neural network will have to learn to locate is thus obtained.
  • These points of interest to locate in the images can be chosen freely by the user.
  • A training database of approximately 2,500 face images, manually annotated according to the position of the center of the left eye, the right eye, the nose and the mouth, can be used. After applying geometric modifications to these annotated images (translations, rotations, zooms, etc.), about 32,000 examples of annotated faces with significant variability are obtained.
  • In each desired output map, a single point of value +1 marks the position of the facial element to be located (right eye, left eye, nose or center of the mouth).
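  • A sketch of building such desired maps D_5m for one annotated image; the background value of -1 is an assumption, consistent with outputs in [-1, 1]:

```python
# Hypothetical construction of the desired output maps: each map is filled with
# -1 (assumed background) except a single +1 at the annotated position.
import numpy as np

def desired_maps(annotations: dict, h: int, w: int) -> np.ndarray:
    """annotations maps element names to (row, col) positions in the image."""
    maps = -np.ones((len(annotations), h, w))
    for m, (row, col) in enumerate(annotations.values()):
        maps[m, row, col] = 1.0
    return maps

D = desired_maps({"left_eye": (20, 12), "right_eye": (20, 32),
                  "nose": (32, 23), "mouth": (44, 23)}, 56, 46)
print(D.shape, D.min(), D.max())  # (4, 56, 46) -1.0 1.0
```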
  • The layers E, C1, S2, C3, N4 and R5 of the neural network are activated one after the other.
  • the response of the neural network to image I is then obtained.
  • The goal is to obtain output maps R_5m identical to the desired maps D_5m.
  • For example, the following parameters can be used in the gradient backpropagation algorithm: a learning step of 0.005 for the neurons of the C1, S2 and C3 layers; a learning step of 0.001 for the neurons of layer N4; a learning step of 0.0005 for the neurons of layer R5; and a momentum of 0.2 for all the neurons of the architecture.
  • The gradient backpropagation algorithm then converges to a stable solution after 25 iterations, if one considers that an iteration of the algorithm corresponds to the presentation of all the images of the training set T.
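  • Purely as an illustration of these per-layer learning steps and the momentum term, a sketch of one update per layer group (the gradients here are placeholders, not computed by backpropagation):

```python
# Sketch of a momentum update with the per-layer learning steps quoted above.
import numpy as np

def momentum_step(w, grad, prev_delta, step, momentum=0.2):
    """Move each weight against its gradient, plus a fraction of its last move."""
    delta = -step * grad + momentum * prev_delta
    return w + delta, delta

steps = {"C1": 0.005, "S2": 0.005, "C3": 0.005, "N4": 0.001, "R5": 0.0005}
weights = {name: np.zeros(3) for name in steps}
deltas = {name: np.zeros(3) for name in steps}
for name in steps:
    grad = np.ones(3)  # placeholder gradient for the layer's shared weights
    weights[name], deltas[name] = momentum_step(
        weights[name], grad, deltas[name], steps[name])
print(weights["N4"])
```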
  • At the end of the learning phase, the neural network of FIG. 1 is ready to process any digital face image, in order to locate in it points of interest of the type annotated in the images of the training set T.
  • It is now possible to use the neural network of FIG. 1, parameterized during the learning phase, to search for the facial elements in a face image.
  • the method implemented to achieve such a location is shown in FIG. 4.
  • the faces 44 and 45 present in the image 46 are detected using a face detector.
  • The latter locates the box enclosing each face 44, 45.
  • the image zones contained in each bounding box are extracted 41 and constitute the images of faces 47, 48 in which the search of the facial elements must be carried out.
  • Each extracted face image I 47, 48 is resized 41 to the size H x L and placed at the input E of the neural architecture of FIG. 1.
  • The input layer E, the intermediate layers C1, S2, C3 and N4, and the output layer R5 are activated one after the other, so as to perform a filtering 42 of the image I 47, 48 by the neural architecture.
  • The response of the neural network to the image I 47, 48 is obtained in the form of four saliency maps R_5m for each of the images I 47, 48.
  • The faces are detected in the images 46 by the Convolutional Face Finder (CFF) presented by C. Garcia and M. Delakis in "Convolutional Face Finder: a Neural Architecture for Fast and Robust Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.
  • Such a face detector can in fact robustly detect faces of minimum size 20x20, tilted up to ⁇ 25 degrees and rotated up to ⁇ 60 degrees, in scenes with complex background, and under variable lighting.
  • FIG. 5 shows a simplified block diagram of a system or device for locating points of interest in an object image.
  • Such a system or device comprises a memory M 51, and a processing unit 50 equipped with a microprocessor μP, which is controlled by the computer program Pg 52.
  • During the learning phase, the processing unit 50 receives as input a set T of learning face images, annotated according to the points of interest that the system must be able to locate in an image, from which the microprocessor μP performs, according to the instructions of the program Pg 52, the implementation of a gradient backpropagation algorithm for optimizing the synaptic bias and weight values of the neural network. These optimal values 54 are then stored in the memory M 51.
  • In the locating phase, the optimal values of the synaptic biases and weights are loaded from the memory M 51.
  • The processing unit 50 then receives as input an object image I, from which the microprocessor μP performs, according to the instructions of the program Pg 52, a filtering by the neural network and a search for maxima in the saliency maps obtained at the output. At the output of the processing unit 50, the coordinates 53 of each of the points of interest sought in the image I are obtained.
  • APPENDIX 1: Artificial neurons and multi-layer perceptron networks
  • the multi-layered perceptron is a structured network of artificial neurons organized in layers, in which the information travels in one direction, from the input layer to the output layer.
  • FIG. 6 shows the example of a network containing an input layer 60, two hidden layers 61 and 62 and an output layer 63.
  • the input layer 60 always represents a virtual layer associated with the inputs of the system. It does not contain any neurons.
  • the following layers 61 to 63 are layers of neurons.
  • A multi-layer perceptron may have any number of layers, and any number of neurons (or inputs) per layer.
  • In the example of FIG. 6, the neural network has three inputs, four neurons in the first hidden layer 61, three neurons in the second 62, and four neurons in the output layer 63.
  • the outputs of the neurons of the last layer 63 correspond to the outputs of the system.
  • An artificial neuron is a computing unit that receives an input signal (X, a vector of real values) through synaptic connections carrying synaptic weights (real values).
  • FIG. 7 shows the structure of such an artificial neuron, the operation of which is described in paragraph 2 below.
  • The neurons of the network of FIG. 6 are connected together, from layer to layer, by the weighted synaptic connections. It is the weights of these connections that govern the operation of the network and "program" a mapping from the input space to the output space by means of a nonlinear transformation.
  • The creation of a multi-layer perceptron to solve a given problem thus requires inferring the best possible mapping, as defined by a set of training data consisting of pairs of input vectors and desired output vectors.
  • Each of the inputs x_i excites a synapse weighted by w_i.
  • A summing function 70 calculates a potential V which, after passing through an activation function φ, delivers a real-valued output y.
  • The quantity w_0 * x_0 is called the bias, and corresponds to a threshold value for the neuron.
  • The activation function φ can take different forms depending on the intended applications.
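  • A minimal sketch of this neuron, using tanh as the activation function (as elsewhere in this document); the bias is modeled as weight w_0 on a constant input x_0 = 1:

```python
# y = phi(V), with potential V = w_0 * x_0 + sum_i(w_i * x_i) and x_0 = 1.
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, w0: float) -> float:
    v = w0 + np.dot(w, x)      # potential V, including the bias term w_0 * x_0
    return float(np.tanh(v))   # output y = phi(V), here phi = tanh

print(neuron(np.array([0.5, -0.2, 0.1]), np.array([1.0, 2.0, -0.5]), w0=0.1))
```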
  • APPENDIX 2: Gradient Backpropagation Algorithm
  • Neural network learning consists in determining all the weights of the synaptic connections, so as to obtain a vector of desired outputs D as a function of an input vector X.
  • For this, a learning base is constituted, consisting of a list of K corresponding input/output pairs (X_k, D_k).
  • The following notation is used: S_i,c is the set of neurons of the layer of index c-1 connected to the inputs of neuron i of the layer of index c;
  • w_ij is the weight of the synaptic connection from neuron j to neuron i.
  • The gradient backpropagation algorithm operates in two successive passes, a forward propagation pass and a backpropagation pass: during the propagation pass, the input signal X_k crosses the neural network and activates a response Y_k at the output; during the backpropagation pass, the error signal E_k is backpropagated through the network, which makes it possible to modify the synaptic weights so as to minimize the error E_k.
  • Such an algorithm comprises the following steps: set the learning step ρ to a sufficiently small positive value (of the order of 0.001); set the momentum α to a positive value between 0 and 1 (of the order of 0.2); randomly initialize the synaptic weights of the network to small values; then repeat: choose an example pair (X_k, D_k);
  • propagation: compute, in the order of the layers, the outputs of the neurons, i.e. the potential V_i,c = Σ_{j ∈ S_i,c} w_ij · y_j,c-1 and the output y_i,c = φ(V_i,c);
  • backpropagation: update each weight by Δw_ij = ρ · δ_i,c · y_j,c-1 + α · Δw_ij^(prev), where δ_i,c is the error term backpropagated to neuron i of layer c,
  • and where ρ is the learning step and α the momentum.
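  • A hedged sketch of these two passes for a single tanh layer, following the notation above (a real multi-layer version would chain the δ terms backwards through the layers; the data and sizes here are illustrative):

```python
# One-layer backpropagation with momentum: V = W x, y = tanh(V),
# delta = (d - y) * tanh'(V), Delta_W = rho * delta x^T + alpha * previous Delta_W.
import numpy as np

rng = np.random.default_rng(2)
X, D = rng.uniform(-1, 1, (8, 3)), rng.uniform(-1, 1, (8, 2))  # K = 8 pairs
W = rng.normal(scale=0.1, size=(2, 3))    # small random initial weights
dW_prev = np.zeros_like(W)
rho, alpha = 0.001, 0.2                   # learning step and momentum

for _ in range(100):                      # repeat over the learning base
    for x, d in zip(X, D):
        v = W @ x                         # propagation: potential V
        y = np.tanh(v)                    # output Y
        delta = (d - y) * (1.0 - y ** 2)  # backpropagation: error term (tanh')
        dW = rho * np.outer(delta, x) + alpha * dW_prev
        W, dW_prev = W + dW, dW

print(np.mean((np.tanh(X @ W.T) - D) ** 2))  # final mean squared error
```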

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
EP06725370A 2005-03-31 2006-03-28 Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones Withdrawn EP1866834A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0503177A FR2884008A1 (fr) 2005-03-31 2005-03-31 Systeme et procede de localisation de points d'interet dans une image d'objet mettant en oeuvre un reseau de neurones
PCT/EP2006/061110 WO2006103241A2 (fr) 2005-03-31 2006-03-28 Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones

Publications (1)

Publication Number Publication Date
EP1866834A2 true EP1866834A2 (fr) 2007-12-19

Family

ID=35748862

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06725370A EP1866834A2 (fr) 2005-03-31 2006-03-28 Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones

Country Status (6)

Country Link
US (1) US20080201282A1 (zh)
EP (1) EP1866834A2 (zh)
JP (1) JP2008536211A (zh)
CN (1) CN101171598A (zh)
FR (1) FR2884008A1 (zh)
WO (1) WO2006103241A2 (zh)

Families Citing this family (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009155415A2 (en) * 2008-06-20 2009-12-23 Research Triangle Institute Training and rehabilitation system, and associated method and computer program product
US8374436B2 (en) * 2008-06-30 2013-02-12 Thomson Licensing Method for detecting layout areas in a video image and method for generating an image of reduced size using the detection method
US8290250B2 (en) 2008-12-26 2012-10-16 Five Apes, Inc. Method and apparatus for creating a pattern recognizer
US8160354B2 (en) * 2008-12-26 2012-04-17 Five Apes, Inc. Multi-stage image pattern recognizer
US8229209B2 (en) * 2008-12-26 2012-07-24 Five Apes, Inc. Neural network based pattern recognizer
KR101558553B1 (ko) * 2009-02-18 2015-10-08 삼성전자 주식회사 아바타 얼굴 표정 제어장치
CN101639937B (zh) * 2009-09-03 2011-12-14 复旦大学 一种基于人工神经网络的超分辨率方法
US9405975B2 (en) 2010-03-26 2016-08-02 Brain Corporation Apparatus and methods for pulse-code invariant object recognition
US9906838B2 (en) 2010-07-12 2018-02-27 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US8515127B2 (en) 2010-07-28 2013-08-20 International Business Machines Corporation Multispectral detection of personal attributes for video surveillance
US9134399B2 (en) 2010-07-28 2015-09-15 International Business Machines Corporation Attribute-based person tracking across multiple cameras
US10424342B2 (en) 2010-07-28 2019-09-24 International Business Machines Corporation Facilitating people search in video surveillance
US8532390B2 (en) 2010-07-28 2013-09-10 International Business Machines Corporation Semantic parsing of objects in video
CN102567397B (zh) * 2010-12-30 2014-08-06 高德软件有限公司 兴趣点、连锁店分店兴趣点关联标记的方法与装置
US9224090B2 (en) 2012-05-07 2015-12-29 Brain Corporation Sensory input processing apparatus in a spiking neural network
US9412041B1 (en) 2012-06-29 2016-08-09 Brain Corporation Retinal apparatus and methods
US9186793B1 (en) 2012-08-31 2015-11-17 Brain Corporation Apparatus and methods for controlling attention of a robot
US9311594B1 (en) 2012-09-20 2016-04-12 Brain Corporation Spiking neuron network apparatus and methods for encoding of sensory data
US9111226B2 (en) 2012-10-25 2015-08-18 Brain Corporation Modulated plasticity apparatus and methods for spiking neuron network
US9183493B2 (en) 2012-10-25 2015-11-10 Brain Corporation Adaptive plasticity apparatus and methods for spiking neuron network
US9218563B2 (en) * 2012-10-25 2015-12-22 Brain Corporation Spiking neuron sensory processing apparatus and methods for saliency detection
US9275326B2 (en) 2012-11-30 2016-03-01 Brain Corporation Rate stabilization through plasticity in spiking neuron network
US9436909B2 (en) 2013-06-19 2016-09-06 Brain Corporation Increased dynamic range artificial neuron network apparatus and methods
US9239985B2 (en) 2013-06-19 2016-01-19 Brain Corporation Apparatus and methods for processing inputs in an artificial neuron network
US9552546B1 (en) 2013-07-30 2017-01-24 Brain Corporation Apparatus and methods for efficacy balancing in a spiking neuron network
CN103489107B (zh) * 2013-08-16 2015-11-25 北京京东尚科信息技术有限公司 一种制作虚拟试衣模特图像的方法和装置
US9984326B1 (en) * 2015-04-06 2018-05-29 Hrl Laboratories, Llc Spiking neural network simulator for image and video processing
US10198689B2 (en) 2014-01-30 2019-02-05 Hrl Laboratories, Llc Method for object detection in digital image and video using spiking neural networks
US9987743B2 (en) 2014-03-13 2018-06-05 Brain Corporation Trainable modular robotic apparatus and methods
US9533413B2 (en) 2014-03-13 2017-01-03 Brain Corporation Trainable modular robotic apparatus and methods
US9195903B2 (en) 2014-04-29 2015-11-24 International Business Machines Corporation Extracting salient features from video using a neurosynaptic system
CN103955718A (zh) * 2014-05-15 2014-07-30 厦门美图之家科技有限公司 一种图像主体对象的识别方法
KR101563569B1 (ko) * 2014-05-28 2015-10-28 한국과학기술원 학습형 다이내믹 시각 이미지 패턴 인식 시스템 및 방법
CN105981041A (zh) * 2014-05-29 2016-09-28 北京旷视科技有限公司 使用粗到细级联神经网络的面部关键点定位
US9373058B2 (en) 2014-05-29 2016-06-21 International Business Machines Corporation Scene understanding using a neurosynaptic system
US9798972B2 (en) 2014-07-02 2017-10-24 International Business Machines Corporation Feature extraction using a neurosynaptic system for object classification
US10115054B2 (en) 2014-07-02 2018-10-30 International Business Machines Corporation Classifying features using a neurosynaptic system
US9881349B1 (en) 2014-10-24 2018-01-30 Gopro, Inc. Apparatus and methods for computerized object identification
KR102288280B1 (ko) 2014-11-05 2021-08-10 삼성전자주식회사 영상 학습 모델을 이용한 영상 생성 방법 및 장치
US10650508B2 (en) * 2014-12-03 2020-05-12 Kla-Tencor Corporation Automatic defect classification without sampling and feature selection
CN106033594B (zh) * 2015-03-11 2018-11-13 日本电气株式会社 基于卷积神经网络所获得特征的空间信息恢复方法及装置
CN108027896A (zh) * 2015-03-18 2018-05-11 赫尔实验室有限公司 用于解码具有连续突触可塑性的脉冲储层的系统和方法
US9933264B2 (en) 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
US9934437B1 (en) 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for real-time collision detection
US9840003B2 (en) 2015-06-24 2017-12-12 Brain Corporation Apparatus and methods for safe navigation of robotic devices
CN107851195B (zh) * 2015-07-29 2022-02-11 诺基亚技术有限公司 利用神经网络进行目标检测
CN105260776B (zh) * 2015-09-10 2018-03-27 华为技术有限公司 神经网络处理器和卷积神经网络处理器
JP2017059207A (ja) * 2015-09-18 2017-03-23 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 画像認識方法
CN105205504B (zh) * 2015-10-04 2018-09-18 北京航空航天大学 一种基于数据驱动的图像关注区域质量评价指标学习方法
US20170124409A1 (en) * 2015-11-04 2017-05-04 Nec Laboratories America, Inc. Cascaded neural network with scale dependent pooling for object detection
US10860887B2 (en) * 2015-11-16 2020-12-08 Samsung Electronics Co., Ltd. Method and apparatus for recognizing object, and method and apparatus for training recognition model
KR102554149B1 (ko) * 2015-11-16 2023-07-12 삼성전자주식회사 오브젝트 인식 방법 및 장치, 인식 모델 학습 방법 및 장치
US10055652B2 (en) * 2016-03-21 2018-08-21 Ford Global Technologies, Llc Pedestrian detection and motion prediction with rear-facing camera
CN107315571B (zh) * 2016-04-27 2020-07-31 中科寒武纪科技股份有限公司 一种用于执行全连接层神经网络正向运算的装置和方法
WO2018052587A1 (en) * 2016-09-14 2018-03-22 Konica Minolta Laboratory U.S.A., Inc. Method and system for cell image segmentation using multi-stage convolutional neural networks
KR101804840B1 (ko) 2016-09-29 2017-12-05 연세대학교 산학협력단 컨벌루션 신경망 기반의 표면 영상 처리 방법 및 장치
KR101944536B1 (ko) * 2016-12-11 2019-02-01 주식회사 딥바이오 뉴럴 네트워크를 이용한 질병의 진단 시스템 및 그 방법
CN106778751B (zh) * 2017-02-20 2020-08-21 迈吉客科技(北京)有限公司 一种非面部roi识别方法及装置
JP6214073B2 (ja) * 2017-03-16 2017-10-18 ヤフー株式会社 生成装置、生成方法、及び生成プログラム
CN108259496B (zh) 2018-01-19 2021-06-04 北京市商汤科技开发有限公司 特效程序文件包的生成及特效生成方法与装置、电子设备
CN108388434B (zh) 2018-02-08 2021-03-02 北京市商汤科技开发有限公司 特效程序文件包的生成及特效生成方法与装置、电子设备
JP6757349B2 (ja) 2018-03-12 2020-09-16 株式会社東芝 固定小数点を用いて認識処理を行う多層の畳み込みニューラルネットワーク回路を実現する演算処理装置
US20190286988A1 (en) * 2018-03-15 2019-09-19 Ants Technology (Hk) Limited Feature-based selective control of a neural network
JP6996455B2 (ja) * 2018-08-31 2022-01-17 オムロン株式会社 検出器生成装置、モニタリング装置、検出器生成方法及び検出器生成プログラム
JP7035912B2 (ja) * 2018-08-31 2022-03-15 オムロン株式会社 検出器生成装置、モニタリング装置、検出器生成方法及び検出器生成プログラム
US11430084B2 (en) 2018-09-05 2022-08-30 Toyota Research Institute, Inc. Systems and methods for saliency-based sampling layer for neural networks
CN109491704A (zh) * 2018-11-08 2019-03-19 北京字节跳动网络技术有限公司 用于处理信息的方法和装置
CN109744996B (zh) * 2019-01-11 2021-06-15 中南大学 Oct图像的bmo位置定位方法
US11080884B2 (en) * 2019-05-15 2021-08-03 Matterport, Inc. Point tracking using a trained network
CN112825115A (zh) * 2019-11-20 2021-05-21 北京眼神智能科技有限公司 基于单目图像的眼镜检测方法、装置、存储介质及设备
US11687778B2 (en) 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
EP4170678A4 (en) * 2020-07-23 2024-03-13 Deep Bio Inc. METHOD FOR ANNOTATION OF DISEASE PATHOGEN SITE BY MEANS OF SEMI-SUPERVISED LEARNING, AND DIAGNOSTIC SYSTEM FOR ITS IMPLEMENTATION
US11532147B2 (en) * 2020-09-25 2022-12-20 Microsoft Technology Licensing, Llc Diagnostic tool for deep learning similarity models
KR20240056112A (ko) * 2022-10-21 2024-04-30 삼성전자주식회사 이미지에서 관심 영역을 식별하기 위한 전자 장치 및 그 제어 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006103241A2 *

Also Published As

Publication number Publication date
US20080201282A1 (en) 2008-08-21
WO2006103241A2 (fr) 2006-10-05
JP2008536211A (ja) 2008-09-04
WO2006103241A3 (fr) 2007-01-11
FR2884008A1 (fr) 2006-10-06
CN101171598A (zh) 2008-04-30

Similar Documents

Publication Publication Date Title
EP1866834A2 (fr) Système et procédé de localisation de points d'intérêt dans une image d'objet mettant en œuvre un réseau de neurones
EP3707676B1 (fr) Procédé d'estimation de pose d'une caméra dans le référentiel d'une scène tridimensionnelle, dispositif, système de réalite augmentée et programme d'ordinateur associé
US11494616B2 (en) Decoupling category-wise independence and relevance with self-attention for multi-label image classification
Ozcan et al. Lip reading using convolutional neural networks with and without pre-trained models
FR2884007A1 (fr) Procede d'identification de visages a partir d'images de visage, dispositif et programme d'ordinateur correspondants
EP2491532A1 (fr) Procede, programme d'ordinateur et dispositif de suivi hybride de representations d'objets, en temps reel, dans une sequence d'images
EP3640843A1 (fr) Procédé d'extraction de caractéristiques d'une empreinte digitale représentée par une image d'entrée
EP3582141A1 (fr) Procédé d'apprentissage de paramètres d'un réseau de neurones à convolution
Alafif et al. On detecting partially occluded faces with pose variations
EP0681270A1 (fr) Procédé de trajectographie d'objets et dispositif de mise en oeuvre de ce procédé
WO2008081152A2 (fr) Procede et systeme de reconnaissance d'un objet dans une image
CA2709180A1 (fr) Procedes de mise a jour et d'apprentissage d'une carte auto-organisatrice
EP3966739B1 (fr) Procédé d'analyse automatique d'images pour reconnaître automatiquement au moins une caractéristique rare
EP3929809A1 (fr) Procédé de détection d'au moins un trait biométrique visible sur une image d entrée au moyen d'un réseau de neurones à convolution
Bucci et al. Multimodal deep domain adaptation
EP2062196A1 (fr) Procede de cadrage d'un objet dans une image et dispositif correspondant
Yoon et al. Real-Time Hair Segmentation Using Mobile-Unet. Electronics 2021, 10, 99
EP3491582B1 (fr) Procédé de comparaison d'objets disposés au voisinage les uns des autres, et dispositif associé
Dassiè Machine Learning and Computer Vision in the Humanities
Moreira et al. Improving Real Age Estimation from Apparent Age Data
Beluch et al. Convolutional neural networks in the detection of astronomical objects from the Messier catalog
WO2023118768A1 (fr) Dispositif et procédé de traitement de données d'images de visages d'êtres humains
WO2012107696A1 (fr) Procédés, dispositif et programmes d'ordinateur pour la reconnaissance de formes, en temps réel, à l'aide d'un appareil comprenant des ressources limitées
WO2023031305A1 (fr) Procédé de mise en relation d'une image candidate avec une image de référence
FR3114423A1 (fr) Diffusion de connaissances avec preservation des relations semantiques pour une conversion d'image a image

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20071008

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

RIN1 Information on inventor provided before grant (corrected)

Inventor name: GARCIA, CHRISTOPHE

Inventor name: DUFFNER, STEFAN

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20121002