WO2019196308A1 - Device and method for generating a face recognition model, and computer-readable storage medium - Google Patents

Device and method for generating a face recognition model, and computer-readable storage medium

Info

Publication number
WO2019196308A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
matching
point pairs
dimensional coordinates
preset
Prior art date
Application number
PCT/CN2018/102401
Other languages
English (en)
Chinese (zh)
Inventor
王义文
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019196308A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • the present application relates to the field of face recognition technologies, and in particular, to a device, a method, and a computer readable storage medium for generating a face recognition model.
  • Face recognition is a biometric recognition technology based on human facial feature information for identification.
  • At present, face recognition technology is mostly two-dimensional face recognition.
  • Two-dimensional face recognition usually uses a camera or video camera to collect a two-dimensional image containing a human face, detects and tracks the face in the two-dimensional image, and then identifies the detected face.
  • However, when a two-dimensional face image is acquired for recognition, it is easily affected by changes in non-geometric appearance such as posture, expression, illumination, and facial makeup, resulting in low face recognition accuracy.
  • the present application provides a device, a method, and a computer readable storage medium for generating a face recognition model, the main purpose of which is to improve the accuracy of face recognition.
  • the present application provides a device for generating a face recognition model, the device comprising a memory and a processor, wherein the memory stores a model generation program executable on the processor, the model generation program The following steps are implemented when executed by the processor:
  • A1. Collecting a plurality of facial images of a user photographed from a plurality of perspectives, and acquiring camera parameters for capturing the plurality of facial images;
  • A2. Matching the acquired face images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs, and screening the matched feature point pairs with a preset feature point screening algorithm to delete mismatched feature point pairs and obtain the two-dimensional coordinates of the correctly matched feature point pairs on the face images;
  • A3. Calculating the corresponding three-dimensional coordinates of the matched feature point pairs according to the two-dimensional coordinates and the camera parameters, and constructing three-dimensional point cloud data of the user's face from the calculated three-dimensional coordinates;
  • A4. Convert the three-dimensional point cloud data into a depth image, and use any one of the plurality of facial images as a color image of the user's face;
  • Repeatedly performing steps A1 to A4 to obtain a preset number of depth images and color images of the user, and using the depth images and color images as inputs of a preset two-channel convolutional neural network model to train the two-channel convolutional neural network model and determine its model parameters;
  • using the two-channel convolutional neural network model with the determined model parameters as the face recognition model, wherein the two-channel convolutional neural network model takes the result of its fully connected layer as output.
  • the present application further provides a method for generating a face recognition model, the method comprising:
  • B2. Matching the acquired face images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs, and screening the matched feature point pairs with a preset feature point screening algorithm to delete mismatched feature point pairs and obtain the two-dimensional coordinates of the correctly matched feature point pairs on the face images;
  • Repeatedly performing steps B1 to B4 to obtain a preset number of depth images and color images of the user, and using the depth images and color images as inputs of a preset two-channel convolutional neural network model to train the two-channel convolutional neural network model and determine its model parameters;
  • using the two-channel convolutional neural network model with the determined model parameters as the face recognition model, wherein the two-channel convolutional neural network model takes the result of its fully connected layer as output.
  • In addition, the present application further provides a computer readable storage medium having a model generation program stored thereon, the model generation program being executable by one or more processors to implement the steps of the method for generating a face recognition model described above.
  • FIG. 1 is a schematic diagram of a preferred embodiment of a device for generating a face recognition model of the present application;
  • FIG. 2 is a flow chart of a preferred embodiment of a method for generating a face recognition model of the present application.
  • the application provides a device for generating a face recognition model.
  • Referring to FIG. 1, a schematic diagram of a preferred embodiment of a device for generating a face recognition model of the present application is shown.
  • the face recognition model generating device 1 may be a PC (Personal Computer), or may be a terminal device such as a smart phone, a tablet computer, or a portable computer.
  • the face recognition model generating apparatus 1 includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (for example, an SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the memory 11 may be an internal storage unit of the face recognition model generating device 1 in some embodiments, such as a hard disk of the face recognition model generating device 1.
  • In other embodiments, the memory 11 may also be an external storage device of the face recognition model generating device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the device 1.
  • Further, the memory 11 may also include both an internal storage unit and an external storage device of the face recognition model generating device 1.
  • the memory 11 can be used not only for storing application software of the face recognition model generating device 1 and various types of data, such as code of the model generation program 01, but also for temporarily storing data that has been output or is to be output.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip for running program code stored in the memory 11 or processing data, such as executing the model generation program 01.
  • Communication bus 13 is used to implement connection communication between these components.
  • The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the device 1 and other electronic devices.
  • Figure 1 shows only the face recognition model generating device 1 with components 11-14 and the model generation program 01, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • Optionally, the device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; the optional user interface may also include a standard wired interface and a wireless interface.
  • Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like.
  • The display may also be referred to as a display screen or display unit, and is used for displaying information processed in the face recognition model generating device 1 and for displaying a visualized user interface.
  • model generation program 01 is stored in the memory 11; when the processor 12 executes the model generation program 01 stored in the memory 11, the following steps are implemented:
  • A1. Acquire a plurality of facial images of a user photographed from a plurality of viewing angles, and acquire the camera parameters used to capture the plurality of facial images.
  • A2. Match the acquired face images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs, and screen the matched feature point pairs with a preset feature point screening algorithm to delete mismatched feature point pairs and obtain the two-dimensional coordinates of the correctly matched feature point pairs on the face images.
  • In this embodiment, two cameras at different viewing angles are set up to capture the user's face, obtaining two facial images from different angles.
  • The relative positional relationship between the two cameras and the effective focal lengths of the cameras are known.
  • Feature point matching is performed on two facial images according to a preset feature matching algorithm.
  • The preset feature matching algorithm may be the ORB (Oriented FAST and Rotated BRIEF) algorithm, which is a fast feature point extraction and description algorithm.
  • The algorithm detects and matches the feature points of the two facial images to find matching feature point pairs in the two images.
  • In other embodiments, the SIFT (Scale-Invariant Feature Transform) algorithm may also be used to compute matching feature point pairs.
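  • A minimal sketch of this pairwise matching step, assuming Python with OpenCV; the image file names and the nfeatures value are illustrative assumptions, not details from the application:

```python
import cv2

# Load the two facial images captured from different viewing angles
# (file names are placeholders for this sketch).
img_left = cv2.imread("face_left.jpg", cv2.IMREAD_GRAYSCALE)
img_right = cv2.imread("face_right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe feature points with ORB in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp_left, des_left = orb.detectAndCompute(img_left, None)
kp_right, des_right = orb.detectAndCompute(img_right, None)

# Hamming distance suits ORB's binary descriptors; cross-checking keeps
# only mutual best matches before any further screening.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_left, des_right)

# Two-dimensional coordinates of the matched feature point pairs.
pairs = [(kp_left[m.queryIdx].pt, kp_right[m.trainIdx].pt) for m in matches]
```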
  • In practice, the feature point pairs produced by these matching algorithms may be limited in number and may include a certain proportion of mismatches. Therefore, after feature matching is completed, the obtained feature point pairs need to be screened to delete mismatched pairs, which improves the accuracy of face recognition.
  • Specifically, step A2 may include the following refinement steps: screening the matched feature point pairs with the preset feature point screening algorithm to delete mismatched pairs, and determining whether the number of correctly matched feature point pairs is greater than a first preset threshold; if so, the screening is stopped and the two-dimensional coordinates of the matched feature point pairs on the facial images are obtained; if not, further matched feature point pairs are obtained according to the feature matching algorithm and screened with the feature point screening algorithm, until the number of correctly matched feature point pairs is greater than the first preset threshold.
  • The preset feature point screening algorithm is as follows: the acquired facial image is divided into K×K grid regions, where the value of K may be determined from the pixel resolution of the collected facial image. For example, if a photo is 1600×1600 pixels, K can be set to 80, so that each grid region contains 20×20 pixels, with one feature point corresponding to one pixel. For each matched feature point, the number of matched feature point pairs within its L×L grid neighborhood is counted; preferably, in one embodiment, L = 3, so the neighborhood of a grid region consists of the 8 other grid regions adjacent to it. Based on the principle of motion smoothness, there should be more matched feature points in the neighborhood of a correctly matched feature point.
  • The feature point pairs matched by the preset feature matching algorithm are therefore evaluated with this statistic to judge whether each match is correct: if the number of matched feature point pairs counted in the neighborhood of a feature point is smaller than a second preset threshold, the feature point pair is judged to be a mismatch; if the number is greater than or equal to the second preset threshold, the feature point pair is judged to be correctly matched.
  • the second preset threshold may be set to a reasonable value according to the actual situation.
  • The number of correctly matched feature point pairs after screening is then determined and compared with the first preset threshold. If it is smaller than the first preset threshold, feature matching is performed again according to the preset feature matching algorithm and the matching results are screened again; this calculation iterates until the number of correctly matched feature point pairs is greater than the first preset threshold.
  • In addition, a maximum number of iterations may be set in advance; during the actual calculation the iterations are counted, and when the count reaches the preset number, the iteration stops and the matching and screening of feature point pairs is considered complete. A sketch of the screening procedure is given below.
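  • A minimal sketch of the grid-based screening described above, assuming NumPy; the second-threshold value is an assumed placeholder, not a figure from the application:

```python
import numpy as np

def screen_matches(points, image_size, k=80, second_threshold=4):
    """Grid-based screening sketch: `points` is an (N, 2) array holding the
    (x, y) coordinates of the matched feature points in one image; returns
    a boolean mask marking the pairs judged to be correct matches."""
    height, width = image_size
    cell_h, cell_w = height / k, width / k

    # Assign every matched point to one of the K x K grid regions.
    rows = np.clip((points[:, 1] // cell_h).astype(int), 0, k - 1)
    cols = np.clip((points[:, 0] // cell_w).astype(int), 0, k - 1)
    counts = np.zeros((k, k), dtype=int)
    for r, c in zip(rows, cols):
        counts[r, c] += 1

    keep = np.zeros(len(points), dtype=bool)
    for i, (r, c) in enumerate(zip(rows, cols)):
        # 3 x 3 neighborhood (L = 3): the region itself plus its 8 neighbors.
        r0, r1 = max(r - 1, 0), min(r + 2, k)
        c0, c1 = max(c - 1, 0), min(c + 2, k)
        support = counts[r0:r1, c0:c1].sum() - 1  # exclude the pair itself
        # Motion smoothness: a correct match should have matched neighbors.
        keep[i] = support >= second_threshold
    return keep
```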
  • A3. Calculate corresponding three-dimensional coordinates of the matching feature point pair according to the two-dimensional coordinates and the camera parameter, and construct three-dimensional point cloud data of the user face according to the calculated three-dimensional coordinates.
  • A4. Convert the three-dimensional point cloud data into a depth image, and use any one of the plurality of facial images as a color image of the user's face.
  • Through the above steps, the two-dimensional coordinates of the correctly matched feature point pairs on the two facial images are obtained.
  • the spatial transformation matrix between the cameras is calculated according to the camera parameters; the corresponding three-dimensional coordinates of the correct feature point pairs are calculated according to the spatial transformation matrix and the two-dimensional coordinates.
  • Specifically, the image coordinate system of the left camera is defined as O_l-X_lY_l, and the effective focal length of the left camera is f_l; the right camera coordinate system is defined as O_r-x_ry_rz_r, its image coordinate system is defined as O_r-X_rY_r, and the effective focal length of the right camera is f_r.
  • According to the projection model of the camera, the following relationship can be obtained:
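  • (The relational expressions referenced here appear as figures in the published application and are not reproduced in this text; the following is a conventional binocular projection formulation consistent with the surrounding symbols, offered as a hedged reconstruction rather than the verbatim expressions.)

```latex
% Left-camera pinhole projection (a conventional form of "relational expression 1"):
X_l = f_l \frac{x}{z}, \qquad Y_l = f_l \frac{y}{z}

% Rigid transformation from the left-camera frame to the right-camera frame,
% with rotation R(\alpha, \beta, \gamma) and translation t = (t_x, t_y, t_z)^\top:
\begin{pmatrix} x_r \\ y_r \\ z_r \end{pmatrix}
  = M \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
\qquad M = \left[\, R(\alpha, \beta, \gamma) \mid t \,\right]

% Right-camera projection; combining the relations lets z be solved (a closed
% form of this kind is assumed for "relational expression 4"):
X_r = f_r \frac{x_r}{z_r}, \qquad Y_r = f_r \frac{y_r}{z_r}
```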
  • t x , t y , and t z are the amounts of translation of the second camera in three directions relative to the first camera, respectively.
  • α, β, γ, t_x, t_y, and t_z together represent the spatial relationship between the two cameras.
  • The coordinates of a correctly matched feature point calculated in step A2 above are (X_l, Y_l) and (X_r, Y_r) on the two images, and the camera focal lengths f_l and f_r are known quantities.
  • The spatial transformation matrix M can be calculated from the positional relationship between the cameras. Therefore, the values of x and y can be calculated from relational expression 1 above, and the value of z from relational expression 4; that is, the coordinates (x, y, z) of the three-dimensional space point corresponding to (X_l, Y_l) and (X_r, Y_r) are obtained.
  • In this way, the three-dimensional coordinates corresponding to each matched feature point pair are calculated, and the spatial points at these three-dimensional coordinates constitute a three-dimensional point cloud of the face.
  • The obtained three-dimensional point cloud is converted into a depth image, and either one of the two corresponding facial images is used as the color image of the user. A conversion sketch follows below.
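  • A minimal conversion sketch under a simple pinhole model, assuming NumPy; the intrinsics (focal length, principal point) and image size are assumptions for illustration:

```python
import numpy as np

def point_cloud_to_depth(points, focal, cx, cy, height, width):
    """Project 3-D points (x, y, z) into a depth image; all intrinsic
    parameters here are illustrative assumptions."""
    depth = np.zeros((height, width), dtype=np.float32)
    for x, y, z in points:
        if z <= 0:
            continue  # points behind the camera cannot be imaged
        u = int(round(focal * x / z + cx))
        v = int(round(focal * y / z + cy))
        if 0 <= v < height and 0 <= u < width:
            # Keep the nearest surface when several points hit one pixel.
            if depth[v, u] == 0 or z < depth[v, u]:
                depth[v, u] = z
    return depth
```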
  • Steps A1 to A4 are repeatedly performed to obtain a preset number of depth images and color images of the user; the depth images and color images are used as inputs of a preset two-channel convolutional neural network model to train the two-channel convolutional neural network model and determine its model parameters.
  • The two-channel convolutional neural network model with the determined model parameters is used as the face recognition model, wherein the two-channel convolutional neural network model takes the result of its fully connected layer as output.
  • Specifically, a two-channel convolutional neural network model is constructed that does not need a classification output; instead, it takes the result of the fully connected layer as its output, and that output is a feature vector.
  • the input of one channel of the model is a color image
  • the input of the other channel is a depth image.
  • the depth image and the color image of the plurality of users are obtained as sample data, and all the sample data are divided into training samples and test samples according to a preset ratio, and the above model is trained and verified to obtain model parameters.
  • The two-channel convolutional neural network model with the determined model parameters is then used as the face recognition model.
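  • A minimal sketch of such a two-channel model in PyTorch; the layer sizes, embedding dimension, and input resolution are assumptions, since the application does not specify the architecture at this level of detail:

```python
import torch
import torch.nn as nn

class TwoChannelFaceNet(nn.Module):
    """Two-channel sketch: one branch for the color image, one for the
    depth image, with the fully connected layer's result taken as the
    output feature vector (no classification head)."""

    def __init__(self, embedding_dim=128):
        super().__init__()
        self.color_branch = self._branch(in_channels=3)  # color-image channel
        self.depth_branch = self._branch(in_channels=1)  # depth-image channel
        self.fc = nn.Linear(2 * 64 * 4 * 4, embedding_dim)

    @staticmethod
    def _branch(in_channels):
        return nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )

    def forward(self, color, depth):
        c = self.color_branch(color).flatten(1)
        d = self.depth_branch(depth).flatten(1)
        # The fully connected result is the output feature vector.
        return self.fc(torch.cat([c, d], dim=1))
```

  • For example, TwoChannelFaceNet()(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64)) returns a 1×128 feature vector.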
  • the face recognition model is applied to the face recognition process as follows:
  • Face registration process: a multi-view face image of the user to be registered is acquired, a depth image and a color image of the face of the user to be registered are obtained from the multi-view face images, and the depth image and the color image are input into the trained face recognition model to obtain the feature vector corresponding to that user's face image.
  • Face recognition process: a multi-view face image of the user to be recognized is acquired, a depth image and a color image of the face of the user to be recognized are obtained from the multi-view face images, and the acquired depth image and color image are input into the trained face recognition model to obtain the feature vector corresponding to the face image of the user to be recognized. The Euclidean distance between the feature vector of the user to be recognized and the feature vector of a registered user is then calculated; if the calculated Euclidean distance is less than a preset threshold, the user to be recognized is determined to be the same person as the registered user; otherwise, the user to be recognized is determined not to be the same person as the registered user.
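  • A minimal sketch of the distance test, assuming NumPy feature vectors; the threshold value is an assumed placeholder for the preset threshold:

```python
import numpy as np

def is_same_person(probe_vec, registered_vec, threshold=1.0):
    """Return True when the Euclidean distance between the probe and the
    registered feature vector falls below the preset threshold."""
    distance = np.linalg.norm(probe_vec - registered_vec)
    return distance < threshold
```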
  • In other embodiments, three or more cameras may be set up to collect facial images of the user from more viewing angles.
  • Pairwise matching is then performed to obtain multiple sets of three-dimensional point cloud data, and the multiple sets of three-dimensional point cloud data are merged into one complete set of point cloud data, as sketched below.
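  • A minimal merging sketch, assuming each pairwise reconstruction has already been expressed in a common reference frame (the application does not detail an alignment step; in practice a registration method such as ICP would typically precede concatenation):

```python
import numpy as np

# Stand-in data: three pairwise point cloud reconstructions of 500 points each.
cloud_sets = [np.random.rand(500, 3) for _ in range(3)]

# Merge the multiple sets into one complete set of point cloud data.
merged_cloud = np.vstack(cloud_sets)
```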
  • The device for generating a face recognition model proposed in this embodiment collects a plurality of face images of a user photographed from a plurality of viewing angles and acquires the camera parameters used to capture them; matches the acquired facial images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs; screens the matched feature point pairs with a preset feature point screening algorithm to delete mismatched pairs and retain the correctly matched pairs; calculates the corresponding three-dimensional coordinates of the correctly matched pairs and constructs three-dimensional point cloud data of the user's face from the calculated three-dimensional coordinates; converts the three-dimensional point cloud data into a depth image; and constructs a two-channel convolutional neural network model whose two channels take the depth image and a color image as inputs, respectively. The depth images and color images of a plurality of users collected by the above process are input into the two-channel convolutional neural network model for training to obtain the model parameters.
  • Because the depth image, which reflects the three-dimensional characteristics of the facial features, is used as an input feature of the model, this recognition approach, compared with recognition based on traditional two-dimensional facial features, is not easily affected by non-geometric appearance changes such as posture, expression, and illumination, which improves the accuracy of face recognition.
  • the present application also provides a method for generating a face recognition model.
  • Referring to FIG. 2, a flowchart of a preferred embodiment of a method for generating a face recognition model of the present application is shown. The method can be performed by a device, and the device can be implemented by software and/or hardware.
  • the method for generating a face recognition model includes:
  • Step S10: Collect a plurality of face images of the user photographed from a plurality of viewing angles, and acquire the camera parameters used to capture the plurality of face images.
  • Step S20: Match the acquired face images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs, and screen the matched feature point pairs with a preset feature point screening algorithm to delete mismatched feature point pairs and obtain the two-dimensional coordinates of the correctly matched feature point pairs on the face images.
  • In this embodiment, two cameras at different viewing angles are set up to capture the user's face, obtaining two facial images from different angles.
  • The relative positional relationship between the two cameras and the effective focal lengths of the cameras are known.
  • Feature point matching is performed on two facial images according to a preset feature matching algorithm.
  • The preset feature matching algorithm may be the ORB (Oriented FAST and Rotated BRIEF) algorithm, which is a fast feature point extraction and description algorithm.
  • The algorithm detects and matches the feature points of the two facial images to find matching feature point pairs in the two images.
  • In other embodiments, the SIFT (Scale-Invariant Feature Transform) algorithm may also be used to compute matching feature point pairs.
  • In practice, the feature point pairs produced by these matching algorithms may be limited in number and may include a certain proportion of mismatches. Therefore, after feature matching is completed, the obtained feature point pairs need to be screened to delete mismatched pairs, which improves the accuracy of face recognition.
  • Specifically, step S20 may include the following refinement steps: screening the matched feature point pairs with the preset feature point screening algorithm to delete mismatched pairs, and determining whether the number of correctly matched feature point pairs is greater than a first preset threshold; if so, the screening is stopped and the two-dimensional coordinates of the matched feature point pairs on the facial images are obtained; if not, further matched feature point pairs are obtained according to the feature matching algorithm and screened with the feature point screening algorithm, until the number of correctly matched feature point pairs is greater than the first preset threshold.
  • The preset feature point screening algorithm is as follows: the acquired facial image is divided into K×K grid regions, where the value of K may be determined from the pixel resolution of the collected facial image. For example, if a photo is 1600×1600 pixels, K can be set to 80, so that each grid region contains 20×20 pixels, with one feature point corresponding to one pixel. For each matched feature point, the number of matched feature point pairs within its L×L grid neighborhood is counted; preferably, in one embodiment, L = 3, so the neighborhood of a grid region consists of the 8 other grid regions adjacent to it. Based on the principle of motion smoothness, there should be more matched feature points in the neighborhood of a correctly matched feature point.
  • The feature point pairs matched by the preset feature matching algorithm are therefore evaluated with this statistic to judge whether each match is correct: if the number of matched feature point pairs counted in the neighborhood of a feature point is smaller than a second preset threshold, the feature point pair is judged to be a mismatch; if the number is greater than or equal to the second preset threshold, the feature point pair is judged to be correctly matched.
  • the second preset threshold may be set to a reasonable value according to actual conditions.
  • The number of correctly matched feature point pairs after screening is then determined and compared with the first preset threshold. If it is smaller than the first preset threshold, feature matching is performed again according to the preset feature matching algorithm and the matching results are screened again; this calculation iterates until the number of correctly matched feature point pairs is greater than the first preset threshold.
  • In addition, a maximum number of iterations may be set in advance; during the actual calculation the iterations are counted, and when the count reaches the preset number, the iteration stops and the matching and screening of feature point pairs is considered complete.
  • Step S30: Calculate the corresponding three-dimensional coordinates of the matched feature point pairs according to the two-dimensional coordinates and the camera parameters, and construct three-dimensional point cloud data of the user's face from the calculated three-dimensional coordinates.
  • Step S40: Convert the three-dimensional point cloud data into a depth image, and use any one of the plurality of facial images as the color image of the user's face.
  • Through the above steps, the two-dimensional coordinates of the correctly matched feature point pairs on the two facial images are obtained.
  • the spatial transformation matrix between the cameras is calculated according to the camera parameters; the corresponding three-dimensional coordinates of the correct feature point pairs are calculated according to the spatial transformation matrix and the two-dimensional coordinates.
  • Specifically, the image coordinate system of the left camera is defined as O_l-X_lY_l, and the effective focal length of the left camera is f_l; the right camera coordinate system is defined as O_r-x_ry_rz_r, its image coordinate system is defined as O_r-X_rY_r, and the effective focal length of the right camera is f_r.
  • According to the projection model of the camera, the relationships given in the device embodiment above can be obtained.
  • t x , t y , and t z are the amounts of translation of the second camera in three directions relative to the first camera, respectively.
  • α, β, γ, t_x, t_y, and t_z together represent the spatial relationship between the two cameras.
  • The coordinates of a correctly matched feature point calculated in step S20 above are (X_l, Y_l) and (X_r, Y_r) on the two images, and the camera focal lengths f_l and f_r are known quantities.
  • The spatial transformation matrix M can be calculated from the positional relationship between the cameras. Therefore, the values of x and y can be calculated from relational expression 1 above, and the value of z from relational expression 4; that is, the coordinates (x, y, z) of the three-dimensional space point corresponding to (X_l, Y_l) and (X_r, Y_r) are obtained.
  • In this way, the three-dimensional coordinates corresponding to each matched feature point pair are calculated, and the spatial points at these three-dimensional coordinates constitute a three-dimensional point cloud of the face.
  • The obtained three-dimensional point cloud is converted into a depth image, and either one of the two corresponding facial images is used as the color image of the user.
  • Step S50: Steps S10 to S40 are repeatedly performed to acquire a preset number of depth images and color images of the user; the depth images and color images are used as inputs of a preset two-channel convolutional neural network model, the two-channel convolutional neural network model is trained to determine its model parameters, and the two-channel convolutional neural network model with the determined model parameters is used as the face recognition model, wherein the two-channel convolutional neural network model takes the result of its fully connected layer as output.
  • Specifically, a two-channel convolutional neural network model is constructed that does not need a classification output; instead, it takes the result of the fully connected layer as its output, and that output is a feature vector.
  • the input of one channel of the model is a color image
  • the input of the other channel is a depth image.
  • Depth images and color images of a plurality of users are acquired as sample data, all the sample data are divided into training samples and test samples according to a preset ratio, and the model is trained and verified to obtain the model parameters, as sketched below.
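  • A minimal sketch of the sample split, assuming PyTorch and an 8:2 ratio; the ratio, sample counts, and tensor shapes are assumptions, and random tensors stand in for the depth and color images produced by steps S10 to S40:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# Stand-in sample data: color images, depth images, and identity labels.
colors = torch.randn(200, 3, 64, 64)
depths = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 20, (200,))
dataset = TensorDataset(colors, depths, labels)

# Divide all sample data into training and test samples by a preset ratio.
n_train = int(0.8 * len(dataset))  # the 8:2 ratio is an assumption
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = DataLoader(test_set, batch_size=16)
# The application leaves the training objective unspecified; a metric loss
# (e.g. triplet or contrastive) over the output feature vectors would match
# the feature-vector output described above.
```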
  • The two-channel convolutional neural network model with the determined model parameters is then used as the face recognition model.
  • the face recognition model is applied to the face recognition process as follows:
  • Face registration process: a multi-view face image of the user to be registered is acquired, a depth image and a color image of the face of the user to be registered are obtained from the multi-view face images, and the depth image and the color image are input into the trained face recognition model to obtain the feature vector corresponding to that user's face image.
  • Face recognition process: a multi-view face image of the user to be recognized is acquired, a depth image and a color image of the face of the user to be recognized are obtained from the multi-view face images, and the acquired depth image and color image are input into the trained face recognition model to obtain the feature vector corresponding to the face image of the user to be recognized. The Euclidean distance between the feature vector of the user to be recognized and the feature vector of a registered user is then calculated; if the calculated Euclidean distance is less than a preset threshold, the user to be recognized is determined to be the same person as the registered user; otherwise, the user to be recognized is determined not to be the same person as the registered user.
  • In other embodiments, three or more cameras may be set up to collect facial images of the user from more viewing angles.
  • Pairwise matching is then performed to obtain multiple sets of three-dimensional point cloud data, and the multiple sets of three-dimensional point cloud data are merged into one complete set of point cloud data.
  • The method for generating a face recognition model proposed in this embodiment collects a plurality of face images of a user photographed from a plurality of viewing angles and acquires the camera parameters used to capture them; matches the acquired facial images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs; screens the matched feature point pairs with a preset feature point screening algorithm to delete mismatched pairs and retain the correctly matched pairs; calculates the corresponding three-dimensional coordinates of the correctly matched pairs and constructs three-dimensional point cloud data of the user's face from the calculated three-dimensional coordinates; converts the three-dimensional point cloud data into a depth image; and constructs a two-channel convolutional neural network model whose two channels take the depth image and a color image as inputs, respectively. The depth images and color images of a plurality of users collected by the above process are input into the two-channel convolutional neural network model for training to obtain the model parameters.
  • Because the depth image, which reflects the three-dimensional characteristics of the facial features, is used as an input feature of the model, this recognition approach, compared with recognition based on traditional two-dimensional facial features, is not easily affected by non-geometric appearance changes such as posture, expression, and illumination, which improves the accuracy of face recognition.
  • The embodiment of the present application further provides a computer readable storage medium on which the model generation program 01 is stored, and the model generation program 01 can be executed by one or more processors to implement the following operations:
  • The specific embodiment of the computer readable storage medium of the present application is substantially the same as the foregoing embodiments of the device and method for generating a face recognition model, and details are not repeated herein.
  • B2. Matching the acquired face images pairwise according to a preset feature matching algorithm to obtain matched feature point pairs, and screening the matched feature point pairs with a preset feature point screening algorithm to delete mismatched feature point pairs and obtain the two-dimensional coordinates of the correctly matched feature point pairs on the face images;
  • Repeatedly performing steps B1 to B4 to obtain a preset number of depth images and color images of the user, and using the depth images and color images as inputs of a preset two-channel convolutional neural network model to train the two-channel convolutional neural network model and determine its model parameters;
  • using the two-channel convolutional neural network model with the determined model parameters as the face recognition model, wherein the two-channel convolutional neural network model takes the result of its fully connected layer as output.
  • The technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as the ROM/RAM, magnetic disk, or optical disk described above) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a device for generating a face recognition model, comprising a memory and a processor. The memory stores a model generation program executable on the processor. When the program is executed by the processor, the following steps are implemented: collecting multiple face images of a user captured from multiple viewing angles, and acquiring camera parameters; matching the face images pairwise to obtain matched feature point pairs, and screening the feature point pairs to obtain the two-dimensional coordinates of correctly matched feature point pairs; calculating the three-dimensional coordinates of the feature point pairs, and constructing three-dimensional point cloud data of the user's face; converting the three-dimensional point cloud data into depth images; and obtaining a preset number of depth images and color images of the user as the input of a two-channel convolutional neural network model, and training the model to determine model parameters. The invention also relates to a method for generating a face recognition model, and a computer-readable storage medium. The invention improves the accuracy of face recognition.
PCT/CN2018/102401 2018-04-09 2018-08-27 Dispositif et procédé de génération de modèle de reconnaissance faciale, et support d'informations lisible par ordinateur WO2019196308A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810311642.1A CN108764024B (zh) 2018-04-09 2018-04-09 人脸识别模型的生成装置、方法及计算机可读存储介质
CN201810311642.1 2018-04-09

Publications (1)

Publication Number Publication Date
WO2019196308A1 (fr) 2019-10-17

Family

ID=63981561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/102401 WO2019196308A1 (fr) 2018-04-09 2018-08-27 Dispositif et procédé de génération de modèle de reconnaissance faciale, et support d'informations lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN108764024B (fr)
WO (1) WO2019196308A1 (fr)


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685713B (zh) * 2018-11-13 2024-05-10 平安科技(深圳)有限公司 化妆模拟控制方法、装置、计算机设备及存储介质
CN110163064B (zh) * 2018-11-30 2022-04-05 腾讯科技(深圳)有限公司 一种道路标志物的识别方法、装置及存储介质
US11893681B2 (en) 2018-12-10 2024-02-06 Samsung Electronics Co., Ltd. Method for processing two-dimensional image and device for executing method
RU2703327C1 (ru) * 2018-12-10 2019-10-16 Самсунг Электроникс Ко., Лтд. Способ обработки двухмерного изображения и реализующее его вычислительное устройство пользователя
CN111325059A (zh) * 2018-12-14 2020-06-23 技嘉科技股份有限公司 脸部识别方法、装置及计算机可读取媒体
CN109753885B (zh) * 2018-12-14 2020-10-16 中国科学院深圳先进技术研究院 一种目标检测方法、装置以及行人检测方法、系统
CN109635770A (zh) * 2018-12-20 2019-04-16 上海瑾盛通信科技有限公司 活体检测方法、装置、存储介质及电子设备
CN109685839B (zh) * 2018-12-20 2023-04-18 广州华多网络科技有限公司 图像对齐方法、移动终端以及计算机存储介质
CN109508701B (zh) * 2018-12-28 2020-09-22 北京亿幕信息技术有限公司 一种人脸识别和追踪方法
CN109740511B (zh) * 2018-12-29 2022-11-22 广州方硅信息技术有限公司 一种人脸表情匹配方法、装置、设备及存储介质
CN109670487A (zh) * 2019-01-30 2019-04-23 汉王科技股份有限公司 一种人脸识别方法、装置及电子设备
CN110249340A (zh) * 2019-04-24 2019-09-17 深圳市汇顶科技股份有限公司 人脸注册方法、人脸识别装置、识别设备和可存储介质
CN110210322A (zh) * 2019-05-06 2019-09-06 深圳市华芯技研科技有限公司 一种通过3d原理进行人脸识别的方法
CN110222573B (zh) * 2019-05-07 2024-05-28 平安科技(深圳)有限公司 人脸识别方法、装置、计算机设备及存储介质
CN110287776B (zh) * 2019-05-15 2020-06-26 北京邮电大学 一种人脸识别的方法、装置以及计算机可读存储介质
CN111986246B (zh) * 2019-05-24 2024-04-30 北京四维图新科技股份有限公司 基于图像处理的三维模型重建方法、装置和存储介质
US20230119593A1 (en) * 2019-06-21 2023-04-20 One Connect Smart Technology Co., Ltd. Method and apparatus for training facial feature extraction model, method and apparatus for extracting facial features, device, and storage medium
CN110414358B (zh) * 2019-06-28 2022-11-25 平安科技(深圳)有限公司 基于人脸智能识别的信息输出方法、装置及存储介质
WO2021097744A1 (fr) * 2019-11-21 2021-05-27 北京机电研究所有限公司 Appareil de mesure dynamique de grandeur tridimensionnelle et procédé de mesure correspondant
CN111047703B (zh) * 2019-12-23 2023-09-26 杭州电力设备制造有限公司 一种用户高压配电设备识别与空间重建方法
CN111160232B (zh) * 2019-12-25 2021-03-12 上海骏聿数码科技有限公司 正面人脸重建方法、装置及系统
CN111667570B (zh) * 2020-06-05 2023-06-02 深圳市瑞立视多媒体科技有限公司 Marker点的三维重建方法、装置、设备及存储介质
CN111651954B (zh) * 2020-06-10 2023-08-18 嘉兴市像景智能装备有限公司 基于深度学习对smt电子元件三维重建的方法
CN114170640B (zh) * 2020-08-19 2024-02-02 腾讯科技(深圳)有限公司 人脸图像的处理方法、装置、计算机可读介质及设备
CN112580583B (zh) * 2020-12-28 2024-03-15 深圳市普汇智联科技有限公司 一种台球花色识别参数自动校准方法及系统
CN114494389B (zh) * 2022-04-01 2022-07-15 深圳数字视界科技有限公司 基于特征点识别连接的多段扫描的空间物体三维构建系统
CN116524569A (zh) * 2023-05-10 2023-08-01 深圳大器时代科技有限公司 一种基于归类算法的多并发人脸识别系统及方法


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100134487A1 (en) * 2008-12-02 2010-06-03 Shang-Hong Lai 3d face model construction method
US20160148041A1 (en) * 2014-11-21 2016-05-26 Korea Institute Of Science And Technology Method for face recognition through facial expression normalization, recording medium and device for performing the method
CN106600686A (zh) * 2016-12-06 2017-04-26 西安电子科技大学 一种基于多幅未标定图像的三维点云重建方法
CN107274483A (zh) * 2017-06-14 2017-10-20 广东工业大学 一种物体三维模型构建方法
CN107491726A (zh) * 2017-07-04 2017-12-19 重庆邮电大学 一种基于多通道并行卷积神经网络的实时表情识别方法

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969085B (zh) * 2019-10-30 2024-03-19 维沃移动通信有限公司 脸部特征点定位方法及电子设备
CN110969085A (zh) * 2019-10-30 2020-04-07 维沃移动通信有限公司 脸部特征点定位方法及电子设备
CN112816949B (zh) * 2019-11-18 2024-04-16 商汤集团有限公司 传感器的标定方法及装置、存储介质、标定系统
CN112816949A (zh) * 2019-11-18 2021-05-18 商汤集团有限公司 传感器的标定方法及装置、存储介质、标定系统
CN111161395B (zh) * 2019-11-19 2023-12-08 深圳市三维人工智能科技有限公司 一种人脸表情的跟踪方法、装置及电子设备
CN111161395A (zh) * 2019-11-19 2020-05-15 深圳市三维人工智能科技有限公司 一种人脸表情的跟踪方法、装置及电子设备
CN113095116B (zh) * 2019-12-23 2024-03-22 深圳云天励飞技术有限公司 身份识别方法及相关产品
CN113095116A (zh) * 2019-12-23 2021-07-09 深圳云天励飞技术有限公司 身份识别方法及相关产品
CN113034345B (zh) * 2019-12-25 2023-02-28 广东奥博信息产业股份有限公司 一种基于sfm重建的人脸识别方法及系统
CN113034345A (zh) * 2019-12-25 2021-06-25 广东奥博信息产业股份有限公司 一种基于sfm重建的人脸识别方法及系统
CN111144483B (zh) * 2019-12-26 2023-10-17 歌尔股份有限公司 一种图像特征点过滤方法以及终端
CN111144483A (zh) * 2019-12-26 2020-05-12 歌尔股份有限公司 一种图像特征点过滤方法以及终端
CN111160278B (zh) * 2019-12-31 2023-04-07 陕西西图数联科技有限公司 基于单个图像传感器的人脸纹理结构数据采集方法
CN111160278A (zh) * 2019-12-31 2020-05-15 河南中原大数据研究院有限公司 基于单个图像传感器的人脸纹理结构数据采集方法
CN111488856A (zh) * 2020-04-28 2020-08-04 江西吉为科技有限公司 一种基于正交引导学习的多模态2d及3d人脸表情识别
CN111488856B (zh) * 2020-04-28 2023-04-18 江西吉为科技有限公司 一种基于正交引导学习的多模态2d及3d人脸表情识别方法
CN111696196A (zh) * 2020-05-25 2020-09-22 北京的卢深视科技有限公司 一种三维人脸模型重建方法及装置
CN111696196B (zh) * 2020-05-25 2023-12-08 合肥的卢深视科技有限公司 一种三维人脸模型重建方法及装置
CN111796299A (zh) * 2020-06-10 2020-10-20 东风汽车集团有限公司 一种障碍物的感知方法、感知装置和无人驾驶清扫车
CN111815698A (zh) * 2020-07-20 2020-10-23 广西安良科技有限公司 人工智能单目3d点云生成方法、装置、终端及存储介质
CN112052730B (zh) * 2020-07-30 2024-03-29 广州市标准化研究院 一种3d动态人像识别监控设备及方法
CN112052730A (zh) * 2020-07-30 2020-12-08 广州市标准化研究院 一种3d动态人像识别监控设备及方法
CN111898680A (zh) * 2020-07-31 2020-11-06 陈艳 一种基于检材多视角形态图像和深度学习的生物鉴别方法
CN112017225A (zh) * 2020-08-04 2020-12-01 华东师范大学 一种基于点云配准的深度图像匹配方法
CN112017225B (zh) * 2020-08-04 2023-06-09 华东师范大学 一种基于点云配准的深度图像匹配方法
CN112308912B (zh) * 2020-11-03 2023-09-15 长安大学 一种路面病害同源多特征图像获取系统、装置及方法
CN112308912A (zh) * 2020-11-03 2021-02-02 长安大学 一种路面病害同源多特征图像获取系统、装置及方法
CN112348957A (zh) * 2020-11-05 2021-02-09 上海影创信息科技有限公司 一种基于多视角深度相机的三维人像实时重建及渲染方法
CN112562083A (zh) * 2020-12-10 2021-03-26 上海影创信息科技有限公司 基于深度相机的静态人像三维重建与动态人脸融合方法
CN112614166A (zh) * 2020-12-11 2021-04-06 北京影谱科技股份有限公司 基于cnn-knn的点云匹配方法和装置
CN112767484A (zh) * 2021-01-25 2021-05-07 脸萌有限公司 定位模型的融合方法、定位方法、电子装置
CN112767484B (zh) * 2021-01-25 2023-09-05 脸萌有限公司 定位模型的融合方法、定位方法、电子装置
CN112883920A (zh) * 2021-03-22 2021-06-01 清华大学 基于点云深度学习的三维人脸扫描特征点检测方法和装置
CN113591602A (zh) * 2021-07-08 2021-11-02 娄浩哲 一种基于单视角的人脸三维轮廓特征重建装置及重建方法
CN113591602B (zh) * 2021-07-08 2024-04-30 娄浩哲 一种基于单视角的人脸三维轮廓特征重建装置及重建方法
CN113807217B (zh) * 2021-09-02 2023-11-21 浙江师范大学 人脸表情识别模型训练、识别方法、系统、装置及介质
CN113807217A (zh) * 2021-09-02 2021-12-17 浙江师范大学 人脸表情识别模型训练、识别方法、系统、装置及介质
CN113688784A (zh) * 2021-09-10 2021-11-23 平安医疗健康管理股份有限公司 基于人脸识别的医保卡盗用风险识别方法及其相关设备
CN113688784B (zh) * 2021-09-10 2024-05-14 平安医疗健康管理股份有限公司 基于人脸识别的医保卡盗用风险识别方法及其相关设备
CN114049675B (zh) * 2021-11-29 2024-02-13 合肥工业大学 基于轻量双通道神经网络的人脸表情识别方法
CN114049675A (zh) * 2021-11-29 2022-02-15 合肥工业大学 基于轻量双通道神经网络的人脸表情识别方法
CN115641359A (zh) * 2022-10-17 2023-01-24 北京百度网讯科技有限公司 确定对象的运动轨迹的方法、装置、电子设备和介质
CN115641359B (zh) * 2022-10-17 2023-10-31 北京百度网讯科技有限公司 确定对象的运动轨迹的方法、装置、电子设备和介质

Also Published As

Publication number Publication date
CN108764024A (zh) 2018-11-06
CN108764024B (zh) 2020-03-24

Similar Documents

Publication Publication Date Title
WO2019196308A1 (fr) Dispositif et procédé de génération de modèle de reconnaissance faciale, et support d'informations lisible par ordinateur
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
WO2018032861A1 (fr) Procédé et dispositif de reconnaissance de veine de doigt
US9818023B2 (en) Enhanced face detection using depth information
WO2019071664A1 (fr) Procédé et appareil de reconnaissance de visage humain combinés à des informations de profondeur, et support de stockage
CN103425964B (zh) 图像处理设备和图像处理方法
WO2015172679A1 (fr) Procédé et dispositif de traitement d'image
WO2016177259A1 (fr) Procédé et dispositif de reconnaissance d'images similaires
TWI394093B (zh) 一種影像合成方法
WO2021012494A1 (fr) Procédé et appareil de reconnaissance faciale basée sur l'apprentissage profond, et support de stockage lisible par ordinateur
WO2016150240A1 (fr) Procédé et appareil d'authentification d'identité
CN104246793A (zh) 移动设备的三维脸部识别
CN111091075B (zh) 人脸识别方法、装置、电子设备及存储介质
CN103793642B (zh) 移动互联网掌纹身份认证方法
Boutellaa et al. On the use of Kinect depth data for identity, gender and ethnicity classification from facial images
TWI669664B (zh) 眼睛狀態檢測系統及眼睛狀態檢測系統的操作方法
CN109858433B (zh) 一种基于三维人脸模型识别二维人脸图片的方法及装置
WO2022002262A1 (fr) Procédé et appareil de reconnaissance de séquences de caractères basés sur la vision artificielle, dispositif et support
WO2019061659A1 (fr) Procédé et dispositif permettant de supprimer des lunettes d'une image de visage, et support d'informations
WO2019200807A1 (fr) Appareil et procédé de synthèse d'image et support d'informations lisible par ordinateur
JP2017211938A (ja) 生体情報処理装置、生体情報処理方法および生体情報処理プログラム
Wu et al. Rendering or normalization? An analysis of the 3D-aided pose-invariant face recognition
JP5555193B2 (ja) データ処理装置、データ処理システム、及びプログラム
WO2019037257A1 (fr) Dispositif et procédé de commande de saisie de mot de passe, et support de stockage lisible par ordinateur
WO2022133993A1 (fr) Procédé et dispositif pour effectuer un enregistrement de visage sur la base de données vidéo, et tableau blanc électronique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18914104

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18914104

Country of ref document: EP

Kind code of ref document: A1