WO2019095998A1 - Image recognition method and device, computer device, and computer-readable storage medium - Google Patents

Image recognition method and device, computer device, and computer-readable storage medium

Info

Publication number
WO2019095998A1
WO2019095998A1 (PCT/CN2018/112760)
Authority
WO
WIPO (PCT)
Prior art keywords
image
database
query
region
query image
Prior art date
Application number
PCT/CN2018/112760
Other languages
English (en)
Chinese (zh)
Inventor
杨茜
牟永强
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司
Publication of WO2019095998A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata automatically derived from the content, using colour
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/23 Clustering techniques
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model

Definitions

  • the present invention relates to the field of computer vision technology, and in particular, to an image recognition method and apparatus, a computer apparatus, and a computer readable storage medium.
  • the image includes color features, texture features, and shape features.
  • among these, the color feature is the most discriminative.
  • Gray and Tao used the AdaBoost method to verify, based on color features and texture features, that the color features account for more than 75% of the overall weight.
  • the commonly used color features do not contain the spatial position information of the image. Although color features are the most discriminative, the loss of spatial position information can cause misidentification and thus reduce recognition accuracy. Color features that do contain spatial position information, however, often suffer from problems such as high dimensionality, high computational complexity, and insufficient accuracy and robustness.
  • a commonly used feature that contains spatial position information is the shape context feature. However, the existing shape context feature needs to use all the points of the corresponding part as reference points, which is computationally intensive and susceptible to noise points.
  • a first aspect of the present application provides an image recognition method, the method comprising:
  • the query image and the database image are character images
  • the zoning of the query image and the database image includes:
  • the query image and the database image are respectively divided into upper and lower areas according to the character image in the query image and the database image, wherein the upper area corresponds to the upper body of the character, and the lower area corresponds to the lower body of the character.
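The upper/lower division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `ratio=0.382` split position is a hypothetical golden-ratio empirical value, since the patent only says the position may be chosen empirically or from a detected clothing boundary.

```python
import numpy as np

def split_person_image(img, ratio=0.382):
    # Split a person image into an upper-body region R1 and a lower-body
    # region R2. The 0.382 split position is an assumed golden-ratio-like
    # empirical value, not a figure taken from the patent.
    h = img.shape[0]
    cut = int(round(h * ratio))
    upper = img[:cut]   # R1: corresponds to the upper body
    lower = img[cut:]   # R2: corresponds to the lower body
    return upper, lower
```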
  • the calculating, by using the query image and the database image, the partial shape context feature with each cluster center as a reference point includes:
  • for the query image, the cluster center of each region of the query image is used as a reference point, the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions of the query image and that cluster center is taken as the coordinates of the pixel point, and a log-distance and angle two-dimensional distribution histogram formed by the cluster center of the region and the pixel points of each of the other regions of the query image is obtained. For the database image, the cluster center of each region of the database image is used as a reference point, the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions of the database image and that cluster center is taken as the coordinates of the pixel point, and a log-distance and angle two-dimensional distribution histogram formed by the cluster center of the region and the pixel points of each of the other regions of the database image is obtained.
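A histogram of this kind can be sketched as below. The bin counts (`r_bins`, `a_bins`) and the `r_max` range are illustrative assumptions; the patent only specifies a two-dimensional distribution histogram over (log-)distance r and angle θ around a cluster center.

```python
import numpy as np

def shape_context_histogram(center, coords, r_bins=5, a_bins=12, r_max=10.0):
    # Log-distance / angle 2D histogram of pixel coordinates around a
    # reference cluster center, computed from coordinate differences in
    # logarithmic-relative-RGB space. Bin counts and r_max are assumptions.
    d = np.asarray(coords, dtype=np.float64) - np.asarray(center, dtype=np.float64)
    r = np.log1p(np.hypot(d[:, 0], d[:, 1]))   # log distance to the center
    theta = np.arctan2(d[:, 1], d[:, 0])       # angle in [-pi, pi]
    hist, _, _ = np.histogram2d(
        r, theta, bins=(r_bins, a_bins),
        range=[[0.0, np.log1p(r_max)], [-np.pi, np.pi]])
    return hist / max(hist.sum(), 1.0)         # normalized for comparison
```

Normalizing the histogram makes it directly comparable across images of different sizes.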
  • the calculating a similarity coefficient between the query image and the database image according to the partial shape context feature includes:
  • the calculating a similarity coefficient between the query image and the database image according to the partial shape context feature further includes:
  • the similarity coefficient of the query image and the database image calculated by the two-dimensional histogram intersection method is divided by the distance between the corresponding cluster centers of the query image and the database image, and the result is used as the final similarity coefficient.
  • a second aspect of the present application provides an image recognition apparatus, the apparatus comprising:
  • a region dividing unit configured to perform area division on the query image and the database image
  • a coordinate calculation unit configured to calculate a logarithmic relative RGB coordinate of each pixel of each region of the query image and the database image
  • a clustering unit configured to cluster the pixel points in each region of the query image and the database image according to the logarithmic relative RGB coordinates of each pixel of each region, to obtain the cluster center of each region of the query image and the database image;
  • a feature calculation unit configured to separately calculate a partial shape context feature with each cluster center as a reference point for the query image and the database image;
  • a similarity coefficient calculation unit configured to calculate a similarity coefficient between the query image and the database image according to the partial shape context feature
  • a matching unit configured to determine, according to the similarity coefficient, whether the query image matches the database image.
  • the query image and the database image include a character image
  • the area dividing unit is specifically configured to:
  • the query image and the database image are respectively divided into upper and lower areas according to the character image in the query image and the database image, wherein the upper area corresponds to the upper body of the character, and the lower area corresponds to the lower body of the character.
  • the feature calculation unit is specifically configured to:
  • a partial shape context feature with each cluster center as a reference point is calculated separately for the query image and the database image using the logarithmic relative RGB coordinate difference.
  • the similarity coefficient calculation unit is specifically configured to:
  • a histogram intersection value of the partial shape context feature of the query image and the database image with each cluster center as a reference point is calculated, and the histogram intersection value is used as the similarity coefficient between the query image and the database image.
  • the similarity coefficient calculation unit is further configured to:
  • the similarity coefficient of the query image and the database image calculated by the two-dimensional histogram intersection method is divided by the distance between the corresponding cluster centers of the query image and the database image, and the result is used as the final similarity coefficient.
  • a third aspect of the present application provides a computer apparatus including a processor that implements the image recognition method when executing a computer program stored in a memory.
  • a fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the image recognition method.
  • the invention divides the query image and the database image into regions; calculates the logarithmic relative RGB coordinates of each pixel of each region of the query image and the database image; clusters the pixel points in each region of the query image and the database image according to those coordinates to obtain the cluster center of each region; calculates, for the query image and the database image respectively, the partial shape context feature with each cluster center as a reference point; calculates the similarity coefficient between the query image and the database image according to the partial shape context feature; and determines whether the query image matches the database image according to the similarity coefficient.
  • the invention utilizes logarithmic relative RGB coordinates for image recognition, and the logarithmic relative RGB coordinate distribution obtained by different poses and shooting angles is very similar, so the robustness to the pose and the angle is better, thereby increasing the robustness of image recognition.
  • the invention utilizes the shape context feature (ie, the partial shape context feature) to perform image recognition, increases the spatial information of the image, overcomes the defect of the misidentification caused by the lost spatial information, and improves the accuracy of the image recognition.
  • the present invention calculates the similarity coefficient between the query image and the database image according to the partial shape context feature of the query image and the database image with each cluster center as a reference point, which reduces the data calculation amount and reduces the operation complexity. Therefore, the present invention can realize image recognition with high speed, high accuracy and high robustness.
  • FIG. 1 is a flowchart of an image recognition method according to Embodiment 1 of the present invention.
  • FIG. 2 is a logarithmic relative RGB coordinate distribution diagram of an image.
  • FIG. 3 is a diagram of calculating, for an image, a partial shape context feature with each cluster center as a reference point.
  • FIG. 4 is a structural diagram of an image recognition apparatus according to a second embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a computer apparatus according to Embodiment 3 of the present invention.
  • the image recognition method of the present invention is applied in one or more computer devices.
  • the computer device is a device capable of automatically performing numerical calculation and/or information processing according to a preset or pre-stored instruction set; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
  • the computer device may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device.
  • FIG. 1 is a flowchart of an image recognition method according to Embodiment 1 of the present invention.
  • the image recognition method is applied to a computer device.
  • the image recognition method specifically includes the following steps:
  • the query image is an image that needs to be identified or matched
  • the database image is an image in a pre-established image library.
  • the image recognition method compares the query image with the database image to determine whether they match, i.e., whether the content in the query image is consistent with the content in the database image. For example, when performing pedestrian recognition, the pedestrian image captured by a roadside camera is the query image and a portrait library image of the traffic management system is the database image; whether the two match is determined according to their similarity coefficient.
  • if they match, the person in the pedestrian image is considered to be the person in the portrait library image; otherwise, the person in the pedestrian image is considered not to be the person in that portrait library image, and the pedestrian image can be matched against another portrait library image.
  • Database images are typically associated with specific information, such as personally identifiable information. Based on the matching result, related information (for example, personal identification information) of the query image can be obtained. For example, when performing pedestrian recognition, if the pedestrian image matches the portrait library image, the personal identification information corresponding to the portrait library image is taken as the personal identification information of the person in the pedestrian image.
  • the image recognition method can be applied to various fields such as video surveillance, product detection, medical diagnosis, and the like.
  • the present invention can be utilized for pedestrian identification, driver identification, vehicle identification, and the like.
  • the same division method is adopted for the query image and the database image.
  • the query image and the database image are each divided into two upper and lower regions or two left and right regions.
  • the image recognition method is used for character recognition (for example, pedestrian recognition); the query image and the database image are character images, and they may be divided into upper and lower regions according to the character shapes in the images.
  • the upper area corresponds to the upper body of the character
  • the lower area corresponds to the lower body of the character.
  • the query image is divided into an upper region R1 and a lower region R2
  • the database image is divided into an upper region R1' and a lower region R2'.
  • the characters in the image are upright characters, since the proportions of the upright characters are roughly similar but the postures and actions are different, the division of the upper and lower regions according to the shape of the characters in the image is more robust.
  • the most color-distinctive part of a person's clothing is usually the upper garment, so the character image is divided into upper and lower regions.
  • the position of the division can be determined according to the empirical value, for example, according to the golden ratio of the upper and lower body of the human body.
  • the boundary between the top and bottom of the character in the character image can be identified, and the division is performed from the boundary.
  • the query image and the database image can be divided into regions in other ways.
  • the pyramid model can be used to divide the query image and the database image into regions.
  • the query image and the database image may each be divided into two regions, or each may be divided into more than two regions, for example, into three regions or four regions.
  • the logarithmic relative RGB coordinate of a pixel point i, whose red component is Ri, green component is Gi, and blue component is Bi, is denoted (xi, yi).
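The coordinate computation can be sketched as follows. The patent does not print the formula for (xi, yi); the log-chromaticity form x = log(R/G), y = log(B/G) is assumed here purely for illustration, and the `eps` offset is an added guard against log(0).

```python
import numpy as np

def log_relative_rgb(region, eps=1.0):
    # Per-pixel logarithmic relative RGB coordinates (x_i, y_i) of a region.
    # Assumed definition (log-chromaticity): x = log(R/G), y = log(B/G);
    # the patent does not spell the formula out. region: H x W x 3 array.
    rgb = region.reshape(-1, 3).astype(np.float64) + eps  # eps avoids log(0)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.log(r / g), np.log(b / g)], axis=1)  # N x 2
```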
  • a logarithmic relative RGB coordinate distribution map of the query image and the database image can be obtained.
  • when the image recognition method of the present invention is used for person recognition, if the color difference between the upper-body and lower-body clothing in the character image is large, the logarithmic relative RGB coordinates of the upper region of the character image (corresponding to the upper body) and those of the lower region (corresponding to the lower body) tend to be distributed in two different areas, so two coordinate clusters with distinct centers are usually obtained.
  • Figure 2 is a logarithmic relative RGB coordinate distribution diagram of an image.
  • the image is divided into two regions R1 and R2 (for example, the query image is divided into an upper region R1 and a lower region R2), where 20 is the logarithmic relative RGB coordinate distribution of the pixels of region R1, and 21 is the logarithmic relative RGB coordinate distribution of the pixels of region R2.
  • the center obtained by clustering the pixel points of a region is the cluster center of that region.
  • clustering the pixel points of the upper region R1 and the lower region R2 of the query image yields the cluster center (x1, y1) of the upper region R1 and the cluster center (x2, y2) of the lower region R2 of the query image; clustering the pixel points of the upper region R1' and the lower region R2' of the database image yields the cluster center (x1', y1') of the upper region R1' and the cluster center (x2', y2') of the lower region R2' of the database image.
  • the pixel points of region R1 are clustered according to the logarithmic relative RGB coordinates of each pixel of region R1 to obtain the cluster center 22 of region R1; the pixel points of region R2 are clustered according to the logarithmic relative RGB coordinates of each pixel of region R2 to obtain the cluster center 23 of region R2.
  • a GMM (Gaussian Mixture Model) or a K-Means algorithm (for example, with the number of cluster centers set to 2) may be used to cluster the pixel points in each region of the query image and the database image to obtain the cluster center of each region of the query image and the database image.
  • the cluster center (x1, y1) of the upper region R1 of the query image and the cluster center (x2, y2) of the lower region R2 are obtained.
  • the cluster center (x1', y1') of the upper region R1' of the database image and the cluster center (x2', y2') of the lower region R2' are obtained.
  • other clustering algorithms can also be used to cluster the pixel points in each region of the query image and the database image.
  • a DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is used to cluster pixel points in each region of the query image and the database image.
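The clustering step can be sketched with a minimal K-Means over the per-region coordinate sets. This is a stand-in for the GMM / K-Means / DBSCAN options the text names, not the patent's implementation; with k=1 the cluster center reduces to the coordinate mean of the region.

```python
import numpy as np

def cluster_center(coords, k=1, iters=20, seed=0):
    # Minimal K-Means over an N x 2 set of logarithmic relative RGB
    # coordinates; returns k cluster centers. A sketch, not the patent's
    # exact clustering procedure.
    coords = np.asarray(coords, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = coords[rng.choice(len(coords), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest-center assignment
        for j in range(k):
            members = coords[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)  # recompute each center
    return centers
```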
  • the partial shape context feature may be a log-distance and angle two-dimensional distribution histogram.
  • for the query image, the cluster center of each region of the query image is used as a reference point, and a log-distance and angle two-dimensional distribution histogram formed by the cluster center of that region and the pixel points of each of the other regions of the query image is obtained. For the database image, the cluster center of each region of the database image is used as a reference point, and a log-distance and angle two-dimensional distribution histogram formed by the cluster center of that region and the pixel points of each of the other regions of the database image is obtained.
  • for the query image, with the cluster center (x1, y1) of the upper region R1 as the reference point, the log-distance and angle two-dimensional distribution histogram formed with the pixel points of the lower region R2 is obtained; with the cluster center (x2, y2) of the lower region R2 as the reference point, the histogram formed with the pixel points of the upper region R1 is obtained.
  • for the database image, with the cluster center (x1', y1') of the upper region R1' as the reference point, the log-distance and angle two-dimensional distribution histogram HD1(r, θ) formed with the pixel points of the lower region R2' is obtained; with the cluster center (x2', y2') of the lower region R2' as the reference point, the histogram HD2(r, θ) formed with the pixel points of the upper region R1' is obtained.
  • when an image is divided into three regions, the cluster center of the first region is used as a reference point to obtain the log-distance and angle two-dimensional distribution histograms formed with the pixel points of the second region and of the third region; the cluster center of the second region is used as a reference point to obtain the histograms formed with the pixel points of the first region and of the third region; and the cluster center of the third region is used as a reference point to obtain the histograms formed with the pixel points of the first region and of the second region.
  • FIG. 3 is a schematic diagram of a partial shape context feature that calculates a reference point for each cluster center for an image.
  • 30 is a point distribution map obtained by using the cluster center of the region R1 as a reference point (ie, the center)
  • 31 is a point distribution map obtained by using the cluster center of the region R2 as a reference point (ie, the center)
  • 32 is the log-distance and angle two-dimensional distribution histogram composed of the cluster center of region R1 and the pixels of region R2, and 33 is the log-distance and angle two-dimensional distribution histogram composed of the cluster center of region R2 and the pixels of region R1.
  • the logarithmic relative RGB coordinate difference is used for calculation.
  • the partial shape context features calculated using the logarithmic relative RGB coordinate difference are not affected by the illumination intensity, and the shape context features calculated by different illumination intensities are the same, thereby improving the recognition accuracy.
  • the logarithmic relative RGB coordinates under different light intensity conditions can be expressed as:
  • the calculation of the partial shape context feature is performed using the logarithmic relative RGB coordinate difference, which is invariant to illumination intensity: according to the diagonal model, the logarithmic relative RGB coordinate difference between two points of the same image remains the same under different illumination intensities.
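This invariance can be checked numerically under the same assumed log-chromaticity definition (x = log(R/G), y = log(B/G), not printed in the patent): scaling the channels by a diagonal illuminant shifts every pixel's coordinates by the same constant, so pairwise differences are unchanged.

```python
import numpy as np

def log_rel(rgb):
    # Assumed logarithmic relative RGB definition: x = log(R/G), y = log(B/G).
    r, g, b = rgb
    return np.array([np.log(r / g), np.log(b / g)])

gains = np.array([1.8, 0.9, 1.3])    # diagonal model: arbitrary channel gains
p1 = np.array([120.0, 60.0, 30.0])   # two pixels of the same image
p2 = np.array([40.0, 80.0, 200.0])

diff_orig = log_rel(p1) - log_rel(p2)                  # original illuminant
diff_lit = log_rel(p1 * gains) - log_rel(p2 * gains)   # re-lit illuminant
assert np.allclose(diff_orig, diff_lit)  # the coordinate difference cancels
```

The individual coordinates do change under the new illuminant; only their differences are invariant, which is why the feature is built from differences.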
  • for the query image, the cluster center of each region of the query image is used as a reference point, the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions of the query image and that cluster center is used as the coordinates of the pixel point, and a log-distance and angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the query image is obtained.
  • for the database image, the cluster center of each region of the database image is used as a reference point, the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions of the database image and that cluster center is used as the coordinates of the pixel point, and a log-distance and angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the database image is obtained.
  • the log-distance and angle two-dimensional distribution histogram calculated from the logarithmic relative RGB coordinate difference is not affected by illumination intensity; histograms calculated under different illumination intensities are identical, which improves recognition accuracy.
  • the similarity coefficient of the query image and the database image can be calculated using the two-dimensional histogram intersection method. That is, the histogram intersection value of the partial shape context feature of the query image and the database image with each cluster center as a reference point is calculated, and the histogram intersection value is used as the similarity coefficient between the query image and the database image.
  • the similarity coefficient between the query image and the database image is calculated using the following formula:
  • the similarity coefficient of the query image and the database image can be obtained by calculating a histogram distance (for example, Euclidean distance).
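The two-dimensional histogram intersection described above can be sketched as a sum of element-wise minima; the value serves as the similarity coefficient P'(Q, D) between a query-image histogram and a database-image histogram.

```python
import numpy as np

def histogram_intersection(h_q, h_d):
    # Two-dimensional histogram intersection: the sum of element-wise
    # minima of the two histograms. If both histograms are normalized to
    # sum to 1, the value lies in [0, 1] and identical histograms give 1.
    return float(np.minimum(h_q, h_d).sum())
```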
  • calculating the similarity coefficient between the query image and the database image from the partial shape context features that take each cluster center as a reference point reduces the amount of data calculation and the computational complexity.
  • the above two-dimensional histogram intersection method for calculating the similarity coefficient between the query image and the database image only needs to compute the intersection of the histograms of the query image and the database image once, without computing the large cost matrix C and its minimum path distance required by conventional shape context matching.
  • HDi(r, θ) contains only relative color information and no absolute color. Therefore, in the present embodiment, the obtained similarity coefficient P'(Q, D) can be divided by the distance between the corresponding cluster centers of the query image and the database image to obtain the final similarity coefficient:
  • the similarity coefficient contains both the spatial information of the color and the difference of the absolute coordinates (ie, the absolute color).
  • different colors may yield the same relative color even though their absolute colors differ.
  • Considering the absolute color when calculating the similarity coefficient can further improve the accuracy of recognition.
  • the calculated similarity coefficient is divided by the distance between the corresponding cluster centers of the query image and the database image, and the result is used as the final similarity coefficient.
  • the similarity coefficient of the query image and the database image is obtained by calculating a histogram distance (for example, the Euclidean distance), and this coefficient is divided by the distance between the corresponding cluster centers of the query image and the database image to give the final similarity coefficient.
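Folding absolute color back in via the cluster-center distance can be sketched as below. The patent states a plain division by the distance; the `1 +` regularizer here is an added assumption to avoid dividing by zero when the two centers coincide.

```python
import numpy as np

def final_similarity(p_prime, center_q, center_d):
    # Scale the intersection-based coefficient P'(Q, D) by the distance
    # between the corresponding cluster centers, so that images whose
    # absolute colors differ score lower. The `1 +` term is an assumption.
    dist = float(np.linalg.norm(np.asarray(center_q) - np.asarray(center_d)))
    return p_prime / (1.0 + dist)
```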
  • whether the pedestrian image and the portrait library image match is determined based on the similarity coefficient between the pedestrian image captured by the camera and the portrait library image of the traffic management system. If they match, the person in the pedestrian image is considered to be the person in the portrait library image; otherwise, the person in the pedestrian image is considered not to be the person in that portrait library image, and the pedestrian image can be matched against another portrait library image.
  • it can be determined whether the similarity coefficient of the query image and the database image is greater than or equal to a preset coefficient. If the similarity coefficient is greater than or equal to the preset coefficient, the query image is determined to match the database image; otherwise, if the similarity coefficient is smaller than the preset coefficient, it is determined that the query image does not match the database image.
  • alternatively, if the similarity coefficient between the query image and the database image is greater than the similarity coefficients between the query image and all other database images, the query image is determined to match the database image; otherwise, it is determined that the query image does not match the database image.
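The two matching criteria above can be combined in a small decision sketch; the 0.5 threshold is an illustrative value, since the patent does not fix one.

```python
def match_query(similarities, threshold=0.5):
    # Pick the database image with the largest similarity coefficient and
    # accept it only if the coefficient reaches a preset threshold.
    # similarities: dict mapping database-image id -> similarity coefficient.
    if not similarities:
        return None
    best_id = max(similarities, key=similarities.get)
    if similarities[best_id] >= threshold:
        return best_id          # the query matches this database image
    return None                 # the query matches no database image
```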
  • the image recognition method of the first embodiment divides the query image and the database image into regions; calculates the logarithmic relative RGB coordinates of each pixel of each region of the query image and the database image; clusters the pixel points in each region according to those coordinates to obtain the cluster center of each region; calculates, for the query image and the database image respectively, the partial shape context feature with each cluster center as a reference point; calculates the similarity coefficient between the query image and the database image according to the partial shape context feature; and determines whether the query image matches the database image according to the similarity coefficient.
  • the image recognition method of the first embodiment uses the logarithmic relative RGB coordinates for image recognition; the logarithmic relative RGB coordinate distributions obtained under different poses and shooting angles are very similar, so the method is robust to pose and angle, thereby increasing the robustness of image recognition.
  • the image recognition method of the first embodiment utilizes the shape context feature (ie, the partial shape context feature) for image recognition, increases the spatial information of the image, overcomes the defect of the misidentification caused by the lost spatial information, and improves the accuracy of the image recognition.
  • the image recognition method of the first embodiment calculates the similarity coefficient between the query image and the database image according to the partial shape context features of the query image and the database image with each cluster center as a reference point, thereby reducing the amount of data calculation and the computational complexity. Therefore, the image recognition method of the first embodiment can realize image recognition with high speed, high accuracy, and high robustness.
  • FIG. 4 is a structural diagram of an image recognition apparatus according to Embodiment 2 of the present invention.
  • the image recognition apparatus 10 may include a region dividing unit 401, a coordinate calculation unit 402, a clustering unit 403, a feature calculation unit 404, a similarity coefficient calculation unit 405, and a matching unit 406.
  • the area dividing unit 401 is configured to perform area division on the query image and the database image.
  • the query image is an image that needs to be identified or matched
  • the database image is an image in a pre-established image library.
  • The image recognition method compares the query image with the database image to determine whether the query image matches the database image, ie, whether the content in the query image and the content in the database image are the same.
  • the pedestrian image captured by the camera on the road is a query image
  • the portrait library image of the traffic management system is a database image
  • Whether the pedestrian image and the portrait library image match is determined according to the similarity coefficient between the pedestrian image and the portrait library image.
  • If they match, the person in the pedestrian image is considered to be the person in the portrait library image; otherwise, the person in the pedestrian image is considered not to be the person in the portrait library image, and the pedestrian image can be matched against another portrait library image.
  • Database images are typically associated with specific information, such as personally identifiable information. Based on the matching result, related information (for example, personal identification information) of the query image can be obtained. For example, when performing pedestrian recognition, if the pedestrian image matches the portrait library image, the personal identification information corresponding to the portrait library image is taken as the personal identification information of the person in the pedestrian image.
  • the image recognition device can be applied to various fields such as video surveillance, product detection, medical diagnosis, and the like.
  • the present invention can be utilized for pedestrian identification, driver identification, vehicle identification, and the like.
  • The same division method is adopted for the query image and the database image.
  • the query image and the database image are each divided into two upper and lower regions or two left and right regions.
  • When the image recognition method is used for character recognition (for example, pedestrian recognition), the query image and the database image are character images, and the query image and the database image may be divided into upper and lower regions according to the character shapes in the images.
  • the upper area corresponds to the upper body of the character
  • the lower area corresponds to the lower body of the character.
  • the query image is divided into an upper region R1 and a lower region R2
  • the database image is divided into an upper region R1' and a lower region R2'.
  • When the characters in the images are upright, since the proportions of upright characters are roughly similar while their postures and actions differ, dividing the images into upper and lower regions according to the character shape is more robust.
  • The most colorful part of a character's costume is usually the jacket, so the character image is divided into upper and lower regions.
  • the position of the division can be determined according to the empirical value, for example, according to the golden ratio of the upper and lower body of the human body.
  • the boundary between the top and bottom of the character in the character image can be identified, and the division is performed from the boundary.
  • the query image and the database image can be divided into regions in other ways.
  • the pyramid model can be used to divide the query image and the database image into regions.
  • The query image and the database image are not limited to being divided into two regions each; they may each be divided into more than two regions, for example, three regions or four regions.
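The region division described above can be sketched as follows. This is a minimal illustration only: the row-based split and the 0.382 golden-ratio position are assumed empirical choices, not values taken from the patent.

```python
def divide_regions(image, split_ratio=0.382):
    """Split an image (a list of pixel rows) into an upper and a lower region.

    split_ratio is the fraction of rows assigned to the upper region; 0.382
    (a golden-ratio split, as mentioned above) is an illustrative value.
    """
    split_row = max(1, int(len(image) * split_ratio))
    return image[:split_row], image[split_row:]

# A toy 5-row "image": each row is a list of (R, G, B) pixel tuples.
img = [[(10 * r, 20, 30)] * 4 for r in range(5)]
upper, lower = divide_regions(img)
print(len(upper), len(lower))  # 1 4
```

The same function would be applied with identical parameters to both the query image and the database image, so that corresponding regions are comparable.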
  • the coordinate calculation unit 402 is configured to calculate a logarithmic relative RGB coordinate of each pixel of each region of the query image and the database image.
  • For a pixel point i with red component Ri, green component Gi and blue component Bi, the logarithmic relative RGB coordinate of the pixel point is denoted (xi, yi); for example, it may be computed as xi = log(Gi/Ri), yi = log(Bi/Ri).
  • a logarithmic relative RGB coordinate distribution map of the query image and the database image can be obtained.
  • When the image recognition method of the present invention is used for character recognition, if the color difference between the upper-body and lower-body clothing of the character image is large, the logarithmic relative RGB coordinates of the pixels of the upper region of the character image (corresponding to the upper body of the character) are distributed in one area of the coordinate plane, those of the lower region are distributed in another area, and the two areas are clearly separated, so that two cluster centers are usually obtained.
  • Figure 2 is a logarithmic versus RGB coordinate distribution of an image.
  • The image is divided into two regions R1 and R2 (for example, the query image is divided into an upper region R1 and a lower region R2), where 20 is the logarithmic relative RGB coordinate distribution of the pixels of the region R1, and 21 is the logarithmic relative RGB coordinate distribution of the pixels of the region R2.
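The coordinate computation can be sketched as below. The exact formula is an assumption (a common log-chromaticity choice), since the text only indicates that (xi, yi) is derived from logarithms of the components Ri, Gi, Bi; the epsilon guard against zero components is also an illustrative addition.

```python
import math

def log_relative_rgb(pixel):
    """Map an (R, G, B) pixel to logarithmic relative RGB coordinates.

    Assumed definition: x = log(G / R), y = log(B / R).  A small epsilon
    guards against zero color components.
    """
    r, g, b = pixel
    eps = 1e-6
    return (math.log((g + eps) / (r + eps)),
            math.log((b + eps) / (r + eps)))

print(log_relative_rgb((100, 100, 100)))  # (0.0, 0.0) for a grey pixel
```

Applying this to every pixel of a region yields the coordinate distribution map described above (eg, the distributions 20 and 21 in FIG. 2).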
  • The clustering unit 403 is configured to cluster the pixel points in each region of the query image and the database image according to the logarithmic relative RGB coordinates of each pixel point of each region, to obtain the cluster center of each region of the query image and the database image.
  • The pixel points of the upper region R1 and the lower region R2 of the query image are clustered to obtain the cluster center (x1, y1) of the upper region R1 and the cluster center (x2, y2) of the lower region R2 of the query image; the pixel points of the upper region R1' and the lower region R2' of the database image are clustered to obtain the cluster center (x1', y1') of the upper region R1' and the cluster center (x2', y2') of the lower region R2' of the database image.
  • the pixel points of the region R1 are clustered according to the logarithm of each pixel of the region R1 with respect to the RGB coordinates to obtain the cluster center 22 of the region R1; the logarithm of each pixel according to the region R2 The pixel points of the region R2 are clustered with respect to the RGB coordinates to obtain the cluster center 23 of the region R2.
  • A GMM (Gaussian Mixture Model) or a K-Means algorithm may be used to cluster the pixel points in each region of the query image and the database image to obtain the cluster center of each region of the query image and the database image.
  • For example, a K-Means algorithm with a cluster center number of 2 is used.
  • The cluster center (x1, y1) of the upper region R1 of the query image and the cluster center (x2, y2) of the lower region R2 are obtained.
  • The cluster center (x1', y1') of the upper region R1' of the database image and the cluster center (x2', y2') of the lower region R2' are obtained.
  • Other clustering algorithms can also be used to cluster the pixel points in each region of the query image and the database image.
  • a DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is used to cluster pixel points in each region of the query image and the database image.
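A minimal K-Means sketch with two cluster centers, standing in for the GMM/K-Means/DBSCAN library implementations named above; the evenly-spaced initialisation and the fixed iteration count are illustrative choices, not part of the patent.

```python
def kmeans_2d(points, k=2, iters=20):
    """Minimal K-Means on 2-D points, returning the k cluster centers."""
    # Spread initialisation: pick k points evenly spaced through the list.
    step = max(1, len(points) // k)
    centers = points[::step][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                  + (p[1] - centers[i][1]) ** 2)
            groups[j].append(p)
        # Recompute each center as the mean of its group.
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers

# Two well-separated clouds of log-relative RGB coordinates.
pts = [(0.0, 0.1), (0.1, 0.0), (2.0, 2.1), (2.1, 2.0)]
print(kmeans_2d(pts))  # ≈ [(0.05, 0.05), (2.05, 2.05)]
```

Running this on the coordinate points of one region yields that region's cluster center; with k=2 it reproduces the "cluster center number of 2" configuration mentioned above.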
  • the feature calculation unit 404 is configured to separately calculate a partial shape context feature with each cluster center as a reference point for the query image and the database image.
  • the partial shape context feature may be a logarithmic two-dimensional distribution histogram.
  • For the query image, the cluster center of each region is taken as a reference point, and the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the query image is obtained.
  • For the database image, the cluster center of each region of the database image is taken as a reference point, and the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the database image is obtained.
  • For the query image, taking the cluster center (x1, y1) of the upper region R1 as a reference point, the logarithmic angle two-dimensional distribution histogram HQ1(r, θ) composed of the cluster center of the upper region R1 and the pixel points of the lower region R2 is obtained; taking the cluster center (x2, y2) of the lower region R2 as a reference point, the logarithmic angle two-dimensional distribution histogram HQ2(r, θ) composed of the cluster center of the lower region R2 and the pixel points of the upper region R1 is obtained.
  • For the database image, taking the cluster center (x1', y1') of the upper region R1' as a reference point, the logarithmic angle two-dimensional distribution histogram HD1(r, θ) composed of the cluster center of the upper region R1' and the pixel points of the lower region R2' is obtained; taking the cluster center (x2', y2') of the lower region R2' as a reference point, the logarithmic angle two-dimensional distribution histogram HD2(r, θ) composed of the cluster center of the lower region R2' and the pixel points of the upper region R1' is obtained.
  • FIG. 3 is a schematic diagram of calculating a partial shape context feature with each cluster center as a reference point for a query image divided into an upper region R1 and a lower region R2.
  • When the image is divided into three regions, for example, the cluster center of the first region is used as a reference point to obtain the logarithmic angle two-dimensional distribution histograms composed of that cluster center and the pixel points of the second region and of the third region, respectively; similarly, taking the cluster center of the second region or the third region as a reference point, the logarithmic angle two-dimensional distribution histograms composed of that cluster center and the pixel points of each of the other two regions are obtained.
  • In FIG. 3, 30 is a point distribution map obtained by using the cluster center of the region R1 as a reference point (ie, the center), and 31 is a point distribution map obtained by using the cluster center of the region R2 as a reference point (ie, the center); 32 is the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region R1 and the pixel points of the region R2, and 33 is the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region R2 and the pixel points of the region R1.
  • When calculating the partial shape context feature, the logarithmic relative RGB coordinate difference is used.
  • The partial shape context features calculated using the logarithmic relative RGB coordinate difference are not affected by the illumination intensity; the features calculated under different illumination intensities are the same, thereby improving the recognition accuracy.
  • According to the diagonal illumination model, a change of illumination scales each color channel by its own gain, (R, G, B) → (aR, bG, cB), so the logarithmic relative RGB coordinates under a different illumination intensity can be expressed as (x', y') = (x + log(b/a), y + log(c/a)).
  • The calculation of the partial shape context feature is therefore performed using the logarithmic relative RGB coordinate difference, which has illumination intensity invariance: because the offset (log(b/a), log(c/a)) is the same for every pixel of the image, the logarithmic relative RGB coordinate difference between two points of the same image is still the same under different illumination intensities: (xi' - xj', yi' - yj') = (xi - xj, yi - yj).
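This invariance can be checked numerically. The sketch below assumes the log-chromaticity definition x = log(G/R), y = log(B/R) (an assumption, as above) and shows that the coordinate difference between two pixels is unchanged when each channel is scaled by its own illumination gain (a, b, c):

```python
import math

def log_rel(p):
    """Assumed log-relative RGB coordinates: (log(G/R), log(B/R))."""
    r, g, b = p
    return (math.log(g / r), math.log(b / r))

p1, p2 = (50, 100, 25), (80, 40, 160)
# Diagonal model: each channel is scaled by its own illumination gain.
a, b, c = 2.0, 0.5, 3.0
lit = lambda p: (a * p[0], b * p[1], c * p[2])

d_before = tuple(u - v for u, v in zip(log_rel(p1), log_rel(p2)))
d_after = tuple(u - v for u, v in zip(log_rel(lit(p1)), log_rel(lit(p2))))
print(all(abs(x - y) < 1e-12 for x, y in zip(d_before, d_after)))  # True
```

The per-pixel coordinates themselves shift by (log(b/a), log(c/a)), but that constant offset cancels in the difference, which is why the feature is built from coordinate differences.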
  • For the query image, the cluster center of each region is taken as a reference point, and the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions and the cluster center is used as the coordinates of that pixel point, thereby obtaining the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the query image.
  • For the database image, the cluster center of each region is taken as a reference point, and the logarithmic relative RGB coordinate difference between each pixel point of each of the other regions and the cluster center is used as the coordinates of that pixel point, thereby obtaining the logarithmic angle two-dimensional distribution histogram composed of the cluster center of the region and the pixel points of each of the other regions of the database image.
  • The logarithmic angle two-dimensional distribution histogram calculated from the logarithmic relative RGB coordinate difference is not affected by the illumination intensity; the histograms calculated under different illumination intensities are the same, thereby improving the recognition accuracy.
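A sketch of building such a histogram: each pixel's coordinate difference from the reference cluster center is converted to a log-distance and an angle, then binned. The bin counts, the r_max cutoff and the inner radius r_min are illustrative assumptions, not values from the patent.

```python
import math

def shape_context_hist(center, points, r_bins=3, theta_bins=4,
                       r_min=1e-3, r_max=4.0):
    """Log-distance / angle 2-D histogram of points around a reference point."""
    hist = [[0] * theta_bins for _ in range(r_bins)]
    log_lo, log_hi = math.log(r_min), math.log(r_max)
    for x, y in points:
        dx, dy = x - center[0], y - center[1]  # coordinate difference
        r = math.hypot(dx, dy)
        if r < r_min or r > r_max:
            continue  # ignore points too close to or too far from the center
        # Log-distance bin, clamped into [0, r_bins).
        ri = min(r_bins - 1,
                 int(r_bins * (math.log(r) - log_lo) / (log_hi - log_lo)))
        # Angle bin over [-pi, pi), wrapped into [0, theta_bins).
        ti = int((math.atan2(dy, dx) + math.pi) / (2 * math.pi) * theta_bins) \
             % theta_bins
        hist[ri][ti] += 1
    return hist

# Four pixels at unit distance around the reference cluster center.
h = shape_context_hist((0.0, 0.0), [(1, 0), (0, 1), (-1, 0), (0, -1)])
print(sum(sum(row) for row in h))  # 4
```

For a two-region image this would be called once per region, with the region's cluster center as `center` and the other region's pixel coordinates as `points`, yielding the histograms HQ1, HQ2 (and HD1, HD2 for the database image).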
  • the similarity coefficient calculation unit 405 is configured to calculate a similarity coefficient between the query image and the database image according to the partial shape context feature of the query image and the database image with each cluster center as a reference point.
  • The similarity coefficient of the query image and the database image can be calculated using the two-dimensional histogram intersection method. That is, the histogram intersection value of the partial shape context features of the query image and the database image with each cluster center as a reference point is calculated, and the histogram intersection value is used as the similarity coefficient between the query image and the database image.
  • For example, the similarity coefficient between the query image and the database image is calculated using a histogram intersection formula of the form P'(Q, D) = Σi Σ(r, θ) min(HQi(r, θ), HDi(r, θ)), where HQi and HDi are the partial shape context histograms of the query image and the database image with the i-th cluster center as a reference point.
  • the similarity coefficient of the query image and the database image can be obtained by calculating the histogram distance.
  • Calculating the similarity coefficient between the query image and the database image according to the partial shape context features with each cluster center as a reference point reduces the amount of data calculation and the computational complexity.
  • The above two-dimensional histogram intersection method for calculating the similarity coefficient between the query image and the database image only needs to calculate the intersection of the histograms of the query image and the database image once, without calculating the large cost matrix C and its minimum path distance required by the full shape context method.
  • HDi(r, θ) contains only relative color information and no absolute color. Therefore, in the present embodiment, the obtained similarity coefficient P'(Q, D) may be divided by the distance between the corresponding cluster centers of the query image and the database image, and the quotient is used as the final similarity coefficient.
  • the similarity coefficient contains both the spatial information of the color and the difference of the absolute coordinates (ie, the absolute color).
  • Different colors may yield the same relative color even though the absolute colors differ.
  • Considering the absolute color when calculating the similarity coefficient can further improve the accuracy of recognition.
  • The calculated similarity coefficient is divided by the distance between the corresponding cluster centers of the query image and the database image, and the quotient is used as the final similarity coefficient.
  • When the similarity coefficient of the query image and the database image is obtained by calculating a histogram distance (for example, a Euclidean distance), that coefficient is likewise divided by the distance between the corresponding cluster centers of the query image and the database image.
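A sketch of the two-dimensional histogram intersection and of the optional division by the cluster-center distance. Normalising by the database histogram's total mass, and the epsilon floor on the distance, are assumed conventions for illustration:

```python
import math

def histogram_intersection(h_q, h_d):
    """Normalised intersection of two 2-D histograms; 1.0 means identical mass."""
    inter = sum(min(a, b) for row_q, row_d in zip(h_q, h_d)
                for a, b in zip(row_q, row_d))
    total = sum(sum(row) for row in h_d)
    return inter / total if total else 0.0

def similarity(h_q, h_d, center_q, center_d):
    """Divide the intersection value by the cluster-center distance so that
    absolute color (the centers' positions) also influences the score."""
    dist = math.hypot(center_q[0] - center_d[0], center_q[1] - center_d[1])
    return histogram_intersection(h_q, h_d) / max(dist, 1e-6)

h_q = [[2, 1], [0, 3]]
h_d = [[1, 1], [2, 2]]
print(round(histogram_intersection(h_q, h_d), 3))  # 0.667
```

In the two-region case the intersection would be summed over both reference points (HQ1 with HD1 and HQ2 with HD2) before the distance division.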
  • the matching unit 406 determines whether the query image matches the database image according to the similarity coefficient between the query image and the database image.
  • For example, whether the pedestrian image captured by the camera and the portrait library image of the traffic control system match is determined based on their similarity coefficient. If they match, the person in the pedestrian image is considered to be the person in the portrait library image; otherwise, the person in the pedestrian image is considered not to be the person in the portrait library image, and the pedestrian image can be matched against another portrait library image.
  • It can be determined whether the similarity coefficient of the query image and the database image is greater than or equal to a preset coefficient. If the similarity coefficient is greater than or equal to the preset coefficient, it is determined that the query image matches the database image; otherwise, if the similarity coefficient is smaller than the preset coefficient, it is determined that the query image does not match the database image.
  • Alternatively, if the similarity coefficient between the query image and the database image is greater than the similarity coefficient between the query image and every other database image, it is determined that the query image matches the database image; otherwise, it is determined that the query image does not match the database image.
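The two matching rules above can be sketched together. The 0.8 preset coefficient is an illustrative value, not one taken from the patent, and combining the best-match rule with the preset threshold is also an assumed variant:

```python
def matches(similarity, preset=0.8):
    """Rule 1: match when the similarity coefficient reaches a preset coefficient."""
    return similarity >= preset

def best_match(similarities, preset=0.8):
    """Rule 2 (combined with rule 1 for illustration): pick the database image
    with the highest similarity coefficient, provided it also reaches the
    preset coefficient; otherwise report no match (None)."""
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    return best if similarities[best] >= preset else None

print(matches(0.9), best_match([0.3, 0.85, 0.6]))  # True 1
```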
  • The image recognition apparatus of the second embodiment performs region division on the query image and the database image; calculates the logarithmic relative RGB coordinates of each pixel point of each region of the query image and the database image; clusters the pixel points in each region of the query image and the database image according to those coordinates to obtain the cluster center of each region; calculates, for the query image and the database image respectively, a partial shape context feature with each cluster center as a reference point; calculates a similarity coefficient between the query image and the database image according to the partial shape context features; and determines whether the query image matches the database image according to the similarity coefficient.
  • The image recognition device of the second embodiment uses logarithmic relative RGB coordinates for image recognition. Because the logarithmic relative RGB coordinate distributions obtained under different poses and shooting angles are very similar, the device is robust to pose and angle, thereby increasing the robustness of image recognition.
  • The image recognition device of the second embodiment uses a shape context feature (ie, the partial shape context feature) for image recognition, which adds spatial information of the image, overcomes misidentification caused by lost spatial information, and improves the accuracy of image recognition.
  • The image recognition apparatus of the second embodiment calculates the similarity coefficient between the query image and the database image according to the partial shape context features with each cluster center as a reference point, thereby reducing the amount of data calculation and the computational complexity. Therefore, the image recognition apparatus of the second embodiment can realize image recognition with high speed, high accuracy and high robustness.
  • FIG. 5 is a schematic diagram of a computer apparatus according to Embodiment 3 of the present invention.
  • the computer device 1 includes a memory 20, a processor 30, and a computer program 40, such as an image recognition program, stored in the memory 20 and executable on the processor 30.
  • When the processor 30 executes the computer program 40, the steps in the foregoing image recognition method embodiment are implemented, for example, steps 101-106 shown in FIG. 1; alternatively, the functions of the modules/units in the above device embodiment are implemented, such as the units 401-406 in FIG. 4.
  • the computer program 40 can be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30 to complete this invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing a particular function for describing the execution of the computer program 40 in the computer device 1.
  • For example, the computer program 40 may be divided into the area dividing unit 401, the coordinate calculating unit 402, the clustering unit 403, the feature calculating unit 404, the similarity coefficient calculating unit 405, and the matching unit 406 in FIG. 4; for the specific functions of each unit, refer to Embodiment 2.
  • The computer device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. It will be understood by those skilled in the art that FIG. 5 is merely an example of the computer device 1 and does not constitute a limitation of the computer device 1, which may include more or fewer components than those illustrated, combine some components, or have different components; for example, the computer device 1 may also include input and output devices, network access devices, buses, and the like.
  • the processor 30 may be a central processing unit (CPU), or may be other general-purpose processors, a digital signal processor (DSP), an application specific integrated circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
  • The general purpose processor may be a microprocessor, or the processor 30 may be any conventional processor or the like; the processor 30 is the control center of the computer device 1 and connects the various parts of the entire computer device 1 by using various interfaces and lines.
  • The memory 20 can be used to store the computer program 40 and/or the modules/units; the processor 30 implements various functions of the computer device 1 by running or executing the computer program and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20.
  • the memory 20 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be Data (such as audio data, phone book, etc.) created according to the use of the computer device 1 is stored.
  • The memory 20 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart memory card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid state storage device.
  • The modules/units integrated by the computer device 1 can be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, all or part of the processes of the foregoing embodiments of the present invention may also be completed by instructing the related hardware through a computer program.
  • the computer program may be stored in a computer readable storage medium. The steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • the computer program comprises computer program code, which may be in the form of source code, object code form, executable file or some intermediate form.
  • The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
  • the disclosed computer apparatus and method may be implemented in other manners.
  • the computer device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division, and the actual implementation may have another division manner.
  • each functional unit in each embodiment of the present invention may be integrated in the same processing unit, or each unit may exist physically separately, or two or more units may be integrated in the same unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software function modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An image recognition method is provided. The method comprises: performing region division on a query image and a database image; calculating logarithmic relative RGB coordinates of each pixel point in each region of the query image and the database image; clustering the pixel points in each region of the query image and the database image to obtain a cluster center of each region of the query image and the database image; calculating, for the query image and the database image respectively, a partial shape context feature taking each cluster center as a reference point; calculating a similarity coefficient between the query image and the database image according to the partial shape context feature; and determining, according to the similarity coefficient, whether the query image matches the database image. An image recognition device, a computer device and a readable storage medium are also provided. By means of the present invention, image recognition with high robustness and high anti-interference performance can be realized quickly.
PCT/CN2018/112760 2017-11-15 2018-10-30 Procédé et dispositif de reconnaissance d'image, dispositif informatique et support de stockage lisible par ordinateur WO2019095998A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711133055.XA CN107895021B (zh) 2017-11-15 2017-11-15 图像识别方法及装置、计算机装置和计算机可读存储介质
CN201711133055.X 2017-11-15

Publications (1)

Publication Number Publication Date
WO2019095998A1 true WO2019095998A1 (fr) 2019-05-23

Family

ID=61805531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112760 WO2019095998A1 (fr) 2017-11-15 2018-10-30 Procédé et dispositif de reconnaissance d'image, dispositif informatique et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN107895021B (fr)
WO (1) WO2019095998A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708907A (zh) * 2020-06-11 2020-09-25 中国建设银行股份有限公司 一种目标人员的查询方法、装置、设备及存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895021B (zh) * 2017-11-15 2019-12-17 深圳云天励飞技术有限公司 图像识别方法及装置、计算机装置和计算机可读存储介质
CN109446408B (zh) * 2018-09-19 2021-01-26 北京京东尚科信息技术有限公司 检索相似数据的方法、装置、设备及计算机可读存储介质
CN110689046A (zh) * 2019-08-26 2020-01-14 深圳壹账通智能科技有限公司 图像识别方法、装置、计算机装置及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020028021A1 (en) * 1999-03-11 2002-03-07 Jonathan T. Foote Methods and apparatuses for video segmentation, classification, and retrieval using image class statistical models
CN104915673A (zh) * 2014-03-11 2015-09-16 株式会社理光 一种基于视觉词袋模型的目标分类方法和系统
CN107871143A (zh) * 2017-11-15 2018-04-03 深圳云天励飞技术有限公司 图像识别方法及装置、计算机装置和计算机可读存储介质
CN107895021A (zh) * 2017-11-15 2018-04-10 深圳云天励飞技术有限公司 图像识别方法及装置、计算机装置和计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020028021A1 (en) * 1999-03-11 2002-03-07 Jonathan T. Foote Methods and apparatuses for video segmentation, classification, and retrieval using image class statistical models
CN104915673A (zh) * 2014-03-11 2015-09-16 株式会社理光 一种基于视觉词袋模型的目标分类方法和系统
CN107871143A (zh) * 2017-11-15 2018-04-03 深圳云天励飞技术有限公司 图像识别方法及装置、计算机装置和计算机可读存储介质
CN107895021A (zh) * 2017-11-15 2018-04-10 深圳云天励飞技术有限公司 图像识别方法及装置、计算机装置和计算机可读存储介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHEN, JIANPING ET AL.: "Application of Clustering Method in Image Recognition", COMPUTER APPLICATIONS, vol. 23, no. 10, 31 October 2003 (2003-10-31), pages 52, ISSN: 2096-4188 *
XIA, RONGJIN: "Research on the Methods of Image Content Retrieval Based on Shape Contexts", CHINESE MASTER S THESES FULL-TEXT DATABASE (INFORMATION SCIENCE AND TECHNOLOGY, 31 July 2012 (2012-07-31), pages 1138 - 1737 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708907A (zh) * 2020-06-11 2020-09-25 中国建设银行股份有限公司 一种目标人员的查询方法、装置、设备及存储介质
CN111708907B (zh) * 2020-06-11 2023-07-18 中国建设银行股份有限公司 一种目标人员的查询方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN107895021B (zh) 2019-12-17
CN107895021A (zh) 2018-04-10

Similar Documents

Publication Publication Date Title
WO2019095997A1 (fr) Procédé et dispositif de reconnaissance d'image, dispositif informatique et support d'informations lisible par ordinateur
WO2019095998A1 (fr) Procédé et dispositif de reconnaissance d'image, dispositif informatique et support de stockage lisible par ordinateur
Ban et al. Face detection based on skin color likelihood
WO2021012484A1 (fr) Procédé et appareil de suivi de cible sur la base d'un apprentissage profond et support de stockage lisible par ordinateur
WO2022027912A1 (fr) Procédé et appareil de reconnaissance de pose du visage, dispositif terminal et support de stockage
CN102667810B (zh) 数字图像中的面部识别
WO2019242416A1 (fr) Procédé et appareil de traitement d'image vidéo, support d'informations lisible par ordinateur et dispositif électronique
CN109918969B (zh) 人脸检测方法及装置、计算机装置和计算机可读存储介质
CN110569756B (zh) 人脸识别模型构建方法、识别方法、设备和存储介质
US10885660B2 (en) Object detection method, device, system and storage medium
CN111435438A (zh) 适于增强现实、虚拟现实和机器人的图形基准标记识别
WO2019174405A1 (fr) Procédé d'identification de plaque d'immatriculation et système associé
WO2021036309A1 (fr) Procédé et appareil de reconnaissance d'image, appareil informatique, et support de stockage
JP6351243B2 (ja) 画像処理装置、画像処理方法
Buza et al. Skin detection based on image color segmentation with histogram and k-means clustering
WO2020248848A1 (fr) Procédé et dispositif de détermination intelligente de cellule anormale, et support d'informations lisible par ordinateur
CN110163111A (zh) 基于人脸识别的叫号方法、装置、电子设备及存储介质
WO2019041967A1 (fr) Procédé et système de détection de main, procédé et système de détection d'image, procédé de segmentation de main, support de stockage et dispositif
CN110852311A (zh) 一种三维人手关键点定位方法及装置
KR20100098641A (ko) 불변적인 시각적 장면 및 객체 인식
CN110222572A (zh) 跟踪方法、装置、电子设备及存储介质
WO2020087922A1 (fr) Procédé d'identification d'attribut facial, dispositif, dispositif informatique et support d'informations
WO2023124040A1 (fr) Procédé et appareil de reconnaissance faciale
Juang et al. Stereo-camera-based object detection using fuzzy color histograms and a fuzzy classifier with depth and shape estimations
Cai et al. Robust facial expression recognition using RGB-D images and multichannel features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18878740

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 13/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18878740

Country of ref document: EP

Kind code of ref document: A1