WO2018082308A1 - Image processing method and terminal - Google Patents

Image processing method and terminal

Info

Publication number
WO2018082308A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
target
layers
image
group
Prior art date
Application number
PCT/CN2017/087702
Other languages
English (en)
Chinese (zh)
Inventor
张兆丰
牟永强
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司 filed Critical 深圳云天励飞技术有限公司
Publication of WO2018082308A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/164 Detection; Localisation; Normalisation using holistic features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and a terminal.
  • Face detection technology has been widely used in the field of video surveillance.
  • In the face recognition process, face detection is the first step, and its accuracy has a great impact on the performance of face recognition.
  • Face detection needs to be robust, because in practical applications face images are affected by many factors, such as lighting, occlusion, and posture changes. Face detection is also the most frequently invoked step in the face recognition process and needs to be executed efficiently.
  • Face detection technology mainly adopts hand-crafted features, such as the Haar feature, the LBP (Local Binary Pattern) feature, and the HOG (Histogram of Oriented Gradients) feature. The computation time of these features is acceptable in practical applications, but the face detection algorithms of the prior art are complicated, and the face detection efficiency is therefore low.
  • Embodiments of the present invention provide an image processing method and a terminal, so as to quickly detect a face position.
  • a first aspect of the embodiments of the present invention provides an image processing method, including:
  • n is an integer greater than or equal to 1;
  • calculating the number of layers of the feature pyramid of the image to be processed to obtain n layers includes:
  • n is the number of layers of the feature pyramid
  • k up is the upsampling multiple applied to the image to be processed
  • w img , h img respectively representing the width and height of the image to be processed
  • w m , h m respectively representing the width and height of the preset face detection model
  • n octave refers to the number of image layers between every two adjacent octaves (each doubling of scale) in the feature pyramid.
  • constructing the feature pyramid based on the n layers includes:
  • the n layers comprise P real feature layers and Q approximate feature layers, where P is an integer greater than or equal to 1, and Q is an integer greater than or equal to 0;
  • the third target feature and the fourth target feature constitute the feature pyramid.
  • determining the K groups of second target features according to the K groups of first target features includes:
  • calculating pixel comparison features for the i-th group of color features, training the first preset face model based on the calculated pixel comparison features, and extracting the first target pixel comparison features from the trained first preset face model to obtain a fifth target feature, where the i-th group of color features is any one of the K groups of color features;
  • making a decision on the K groups of second target features by using the M specified decision trees to obtain the size and position of the target face frame includes:
  • the X face frames are merged to obtain the size and position of the target face frame.
  • a second aspect of the embodiments of the present invention provides a terminal, including:
  • An obtaining unit configured to acquire an image to be processed
  • a calculating unit configured to calculate the number of layers of the feature pyramid of the image to be processed to obtain n layers, where n is an integer greater than or equal to 1;
  • a constructing unit configured to construct the feature pyramid based on the n layers
  • an extracting unit configured to perform feature extraction on K preset detection windows on the feature pyramid to obtain K groups of first target features, where each preset detection window corresponds to a group of first target features, and K is an integer greater than or equal to 1;
  • a determining unit configured to determine the K groups of second target features according to the K groups of first target features;
  • a decision unit configured to make a decision on the K groups of second target features by using M specified decision trees to obtain the size and position of the target face frame, where M is an integer greater than or equal to 1.
  • the calculating unit is specifically configured to:
  • n is the number of layers of the feature pyramid
  • k up is the upsampling multiple applied to the image to be processed
  • w img , h img respectively representing the width and height of the image to be processed
  • w m , h m respectively representing the width and height of the preset face detection model
  • n octave refers to the number of image layers between every two adjacent octaves (each doubling of scale) in the feature pyramid.
  • the constructing unit includes:
  • a first determining module configured to determine that the n layers include P real feature layers and Q approximate feature layers, where P is an integer greater than or equal to 1, and Q is an integer greater than or equal to 0;
  • a first extraction module configured to perform feature extraction on the P real feature layers to obtain a third target feature
  • a second determining module configured to determine, according to the P real feature layers, a fourth target feature of the Q approximate feature layers;
  • a constructing module configured to form the third target feature and the fourth target feature into the feature pyramid.
  • the determining unit includes:
  • a second extraction module configured to separately extract color features from the K group first target features to obtain the K group color features
  • a first training module configured to calculate pixel comparison features for the i-th group of color features, train the first preset face model based on the calculated pixel comparison features, and extract the first target pixel comparison features from the trained first preset face model to obtain a fifth target feature, where the i-th group of color features is any one of the K groups of color features;
  • a second training module configured to train a second preset face model by using the fifth target feature and the first target feature, and to extract a second pixel comparison feature from the trained second preset face model, obtaining the sixth target feature;
  • a combination module configured to combine the first target feature and the sixth target feature into the second target feature.
  • the decision unit includes:
  • a decision module configured to make a decision, by using the M specified decision trees, on the K groups of second target features on the feature pyramid to obtain X face frames, where X is an integer greater than or equal to 1;
  • a merging module configured to merge the X face frames to obtain the size and position of the target face frame.
  • In the embodiments of the present invention, the image to be processed is acquired; the number of layers of the feature pyramid of the image to be processed is calculated to obtain n layers, where n is an integer greater than or equal to 1; the feature pyramid is constructed based on the n layers; on the feature pyramid, feature extraction is performed on K preset detection windows to obtain K groups of first target features, where each preset detection window corresponds to a group of first target features and K is an integer greater than or equal to 1; the K groups of second target features are determined according to the K groups of first target features; and a decision is made on the K groups of second target features by using M specified decision trees to obtain the size and position of the target face frame, where M is an integer greater than or equal to 1. Thereby, the face position can be detected quickly.
  • FIG. 1 is a schematic flowchart of an embodiment of an image processing method according to an embodiment of the present invention
  • FIG. 2a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention.
  • FIG. 2b is a schematic structural diagram of the constructing unit of the terminal depicted in FIG. 2a according to an embodiment of the present invention;
  • FIG. 2c is a schematic structural diagram of a determining unit of the terminal depicted in FIG. 2a according to an embodiment of the present invention
  • FIG. 2d is a schematic structural diagram of the decision unit of the terminal depicted in FIG. 2a according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention.
  • The appearances of this phrase in various places in the specification do not necessarily refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • The terminal described in the embodiments of the present invention may include a smart phone (such as an Android mobile phone, an iOS mobile phone, or a Windows Phone mobile phone), a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID), a wearable device, and so on. The above terminals are merely examples, not exhaustive; the terminals contemplated include but are not limited to the above.
  • FIG. 1 is a schematic flowchart of an embodiment of an image processing method according to an embodiment of the present invention.
  • the image processing method described in this embodiment includes the following steps:
  • the image to be processed is an image including a human face.
  • the image to be processed includes at least one face.
  • The terminal can acquire the original image.
  • If the original image is a grayscale image, it needs to be converted into an RGB image; that is, the grayscale information of the original image is copied to the R channel, the G channel, and the B channel.
  • If the original image is a color image but is not an RGB image, it can be converted into an RGB image; if the original image is already an RGB image, it is taken directly as the image to be processed, as sketched below.
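  • A minimal sketch of this preparation step, assuming OpenCV/numpy conventions (an HxW array for grayscale, HxWx3 for color); the function name is illustrative.

```python
import numpy as np

def to_rgb(original: np.ndarray) -> np.ndarray:
    """Return the image to be processed as an RGB array."""
    if original.ndim == 2:
        # Grayscale: copy the gray values into the R, G and B channels.
        return np.stack([original, original, original], axis=-1)
    # Color: assumed already RGB here; a non-RGB color space (e.g. BGR
    # or YUV) would need an explicit color-space conversion at this point.
    return original
```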
  • n is an integer greater than or equal to 1.
  • calculating the number of layers of the feature pyramid of the image to be processed to obtain n layers may be implemented as follows:
  • n is the number of layers of the feature pyramid
  • k up is the upsampling multiple applied to the image to be processed
  • w img and h img respectively represent the width and height of the image to be processed
  • w m and h m respectively represent the width and height of the preset face detection model
  • n octave refers to the number of image layers between every two adjacent octaves (each doubling of scale) in the feature pyramid.
  • After the image to be processed is acquired, its size is a known quantity, and the size of the preset face detection model is also a known quantity.
  • the above k up can be specified by the user, or the system defaults.
  • the above n octave can be specified by the user, or the system defaults.
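  • For illustration, a hedged sketch of the layer-count computation follows. The formula itself is not reproduced in this text, so the expression below (n octave layers per doubling of scale, plus a base layer) is an assumption consistent with the variable definitions above, not the patent's literal formula.

```python
import math

def pyramid_layers(w_img, h_img, w_m, h_m, k_up=2, n_octave=8):
    # Largest scale at which the preset face detection model still fits
    # inside the (possibly k_up-times upsampled) image to be processed.
    max_scale = k_up * min(w_img / w_m, h_img / h_m)
    # n_octave layers per octave (per doubling of scale), plus a base layer.
    return int(math.floor(n_octave * math.log2(max_scale))) + 1

# Example: a 1920x1080 image, an 80x80 model, upsampling at most 2x.
print(pyramid_layers(1920, 1080, 80, 80))  # -> 39 under these assumptions
```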
  • the obtained feature may form a feature pyramid.
  • a Laplacian pyramid transform is performed on an image to be processed to obtain a feature pyramid.
  • The number of layers of the feature pyramid in the embodiment of the present invention is not specified by the user but is calculated from the size of the image to be processed and the size of the preset face detection model. Images to be processed of different sizes therefore yield different numbers of feature pyramid layers, so the number of layers determined by the embodiment of the present invention is better matched to the size of the image.
  • At least one preset face detection model may be used in the embodiment of the present invention.
  • all preset face detection models may have the same size.
  • constructing the feature pyramid based on the n layers may include the following steps:
  • the n layers include P real feature layers and Q approximate feature layers, where P is an integer greater than or equal to 1, and Q is an integer greater than or equal to 0;
  • The conventional method generally computes the image pyramid first and then computes the corresponding features for each layer image of the pyramid.
  • In the embodiment of the present invention, features are computed for only a small number of image layers; such a layer is called a real feature layer.
  • The features of the other layer images are obtained by interpolation from the real features; these layers are called approximate feature layers.
  • The real feature layers in the pyramid are specified by the user or by default, and the other layers are approximate feature layers, each obtained by interpolation from the nearest real feature layer.
  • the feature extraction may be performed on the real feature layer in step 32, for example, extracting color features, gradient magnitude features, and direction histogram features.
  • The color features can be RGB, LUV, HSV, or GRAY; the gradient magnitude feature and the direction histogram feature correspond to a special form of the HOG feature, i.e., the number of cells in a block is one.
  • the color feature, the gradient magnitude feature, and the direction histogram feature may be referred to the prior art, and details are not described herein again.
  • the feature of the approximate feature layer can be calculated based on the real feature layer.
  • the approximate feature layer can be obtained by interpolation of the real feature layer.
  • the feature value needs to be multiplied by a coefficient.
  • the calculation method can refer to the following formula: f_Ω(I_s) ≈ f_Ω(I) · s^(−λ_Ω), where:
  • s refers to the scale ratio of the approximate feature layer to the real feature layer
  • λ_Ω is a constant for a given feature Ω
  • the value of λ_Ω can be estimated in the following manner: compute μ_s = (1/N) · Σ_i f_Ω(I_{i,s}) / f_Ω(I_i), where I_{i,s} refers to scaling the image I_i by the scale s
  • f_Ω(I) means computing the feature Ω for the image I and averaging these features
  • N refers to the number of pictures participating in the estimation; N is taken as 50,000, and λ_Ω is found by the least squares method.
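  • A sketch of this power-law approximation and of the least-squares estimate of λ_Ω is given below; the nearest-neighbor resampling and the estimator shape are assumptions (the text above only fixes the ratio form μ_s = s^(−λ_Ω)).

```python
import numpy as np

def approximate_layer(real_features: np.ndarray, s: float, lam: float) -> np.ndarray:
    """Interpolate a real feature layer to scale s and apply s**(-lam)."""
    h, w = real_features.shape[:2]
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    # Nearest-neighbor interpolation keeps the sketch dependency-free.
    ys = np.clip((np.arange(nh) / s).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / s).astype(int), 0, w - 1)
    return real_features[np.ix_(ys, xs)] * s ** (-lam)

def estimate_lambda(scales, mu):
    """Least-squares fit of log(mu_s) = -lambda * log(s) through the origin."""
    logs, logmu = np.log(scales), np.log(mu)
    return -np.sum(logs * logmu) / np.sum(logs ** 2)
```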
  • K is an integer greater than or equal to 1.
  • the preset detection window can be set by the system default or by the user.
  • The preset detection window can include a window size and a window position. Feature extraction is performed on each of the K preset detection windows, and a group of first target features is obtained for each window, so that K groups of first target features are obtained, where K is an integer greater than or equal to 1.
  • For a given detection pass, the size of the preset detection window is fixed, and the window can be moved one step at a time in the x and y directions, as sketched below.
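  • A minimal sketch of enumerating such windows on one pyramid layer follows; the window size is an illustrative assumption, not a value fixed by the text.

```python
def windows(layer_w: int, layer_h: int, win_w: int = 80, win_h: int = 80, step: int = 1):
    """Yield (x, y, w, h) for a fixed-size window slid one step at a time."""
    for y in range(0, layer_h - win_h + 1, step):
        for x in range(0, layer_w - win_w + 1, step):
            yield x, y, win_w, win_h
```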
  • Determining the K groups of second target features according to the K groups of first target features includes:
  • the method for extracting pixel comparison features in the above steps 52 and 53 may refer to the following formula:
  • I represents the image to be processed
  • l i and l j are pixel points at different positions in the image I
  • I(l i ) and I(l j ) respectively refer to the pixel values at the positions l i and l j in the image I
  • comparing the pixel values I(l i ) and I(l j ) yields the comparison feature f c of the two pixels.
  • the image to be processed can also be divided into area bins that do not overlap each other, each of size b × b, and the comparison feature between bins is defined as follows:
  • l i ∈ bin i , l j ∈ bin j ; f cb refers to the pixel comparison feature of two different regions in the image to be processed.
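  • The following sketch illustrates the two comparison features; the binary "greater than" form is an assumption (the formulas themselves are not reproduced in this text), in the style of YEF-type pixel-comparison detectors.

```python
import numpy as np

def pixel_compare(img: np.ndarray, li: tuple, lj: tuple) -> int:
    """f_c: compare the pixel values I(l_i) and I(l_j)."""
    return int(img[li] > img[lj])

def bin_compare(img: np.ndarray, bin_i: tuple, bin_j: tuple, b: int = 4) -> int:
    """f_cb: compare the mean values of two non-overlapping b x b bins."""
    (yi, xi), (yj, xj) = bin_i, bin_j
    mi = img[yi * b:(yi + 1) * b, xi * b:(xi + 1) * b].mean()
    mj = img[yj * b:(yj + 1) * b, xj * b:(xj + 1) * b].mean()
    return int(mi > mj)
```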
  • The color, gradient magnitude, and direction histogram features are computed pixel by pixel over the image to be processed, so when the size of the model is fixed, which features are computed does not depend on the training process. The pixel comparison features that are selected, by contrast, differ from model to model and depend on the model training process. A two-stage training procedure is therefore used in order to better fuse the color, gradient magnitude, and direction histogram features with the pixel comparison features.
  • First, the first preset face model is trained using only pixel comparison features; the size of the first preset face model is n × n pixels, so during training there are (n/b)² × ((n/b)² − 1)/2 comparison features. Training is performed using the AdaBoost method with decision trees of depth 5, 500 in number.
  • After this training, the number of pixel comparison features selected into the first preset face model is greatly reduced; the number of selected pixel comparison features (i.e., the fifth target feature) is kept within 10,000.
  • Second, the second preset face model is trained using the fifth target feature combined with the first target feature (i.e., the color feature, gradient magnitude feature, and direction histogram feature). The AdaBoost method is still used for training, with decision trees of depth 5, 500 in number; the second pixel comparison feature is extracted from the trained second preset face model to obtain the sixth target feature.
  • The first target feature and the sixth target feature are combined into the second target feature, as sketched after this paragraph.
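  • A hedged sketch of this two-stage training, using scikit-learn's AdaBoost with depth-5 trees, 500 per stage, as stated above. The feature matrices X_pixel and X_multi, the labels y, and the use of feature_importances_ as the selection criterion are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_two_stage(X_pixel, X_multi, y, keep=10000):
    def booster():
        return AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=5), n_estimators=500)
    # Stage 1: train on pixel comparison features only, then keep at most
    # `keep` of the features the boosted trees found most useful.
    m1 = booster().fit(X_pixel, y)
    selected = np.argsort(m1.feature_importances_)[::-1][:keep]
    fifth = X_pixel[:, selected]                   # the "fifth target feature"
    # Stage 2: retrain on the fused multi-channel + selected pixel features.
    m2 = booster().fit(np.hstack([X_multi, fifth]), y)
    return m2, selected
```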
  • the present invention combines the use of the fused multi-channel feature and the pixel comparison feature, overcomes the problem that the position of the face frame is inaccurate when only the fused multi-channel feature is used, and further improves the detection rate of the face in the case of backlighting.
  • The embodiment of the present invention may adopt M specified decision trees, where M is an integer greater than or equal to 1. The second target features in a preset detection window are sent to the specified decision trees for a decision: each tree produces a score, and the scores are accumulated. If the accumulated score falls below a certain threshold, the window is eliminated directly. If the score is above the threshold, classification continues on the next decision tree, obtaining and accumulating scores until all the decision trees have been traversed; the position coordinates, width, and height of the window are then converted to the image to be processed, and the face frame, including the position and size of the face frame, is output.
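  • A sketch of this early-rejection scoring follows; the trees are modeled as callables returning a score, and the single rejection threshold is an illustrative assumption (a per-stage threshold would work the same way).

```python
def score_window(features, trees, threshold=-1.0):
    """Accumulate scores over the M specified decision trees."""
    total = 0.0
    for tree in trees:
        total += tree(features)   # score from one decision tree
        if total < threshold:     # eliminate the window directly
            return None
    return total                  # survived all trees: candidate face window
```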
  • making a decision on the K groups of second target features by using the M specified decision trees to obtain the size and position of the target face frame includes:
  • the terminal may merge the face frames with overlapping positions by using a Non-Maximum Suppression (NMS) algorithm to output a final face frame.
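  • A minimal NMS sketch is given below; frames are assumed to be (x, y, w, h, score) tuples and the IoU threshold is illustrative.

```python
def nms(frames, iou_thresh=0.5):
    """Keep the highest-scoring frames, dropping ones that overlap them."""
    def iou(a, b):
        ax2, ay2 = a[0] + a[2], a[1] + a[3]
        bx2, by2 = b[0] + b[2], b[1] + b[3]
        iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
        ih = max(0, min(ay2, by2) - max(a[1], b[1]))
        inter = iw * ih
        return inter / (a[2] * a[3] + b[2] * b[3] - inter)
    kept = []
    for f in sorted(frames, key=lambda fr: fr[4], reverse=True):
        if all(iou(f, k) < iou_thresh for k in kept):
            kept.append(f)
    return kept
```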
  • It can be seen that, in the embodiment of the present invention, the image to be processed is obtained; the number of layers of the feature pyramid of the image to be processed is calculated to obtain n layers, where n is an integer greater than or equal to 1; the feature pyramid is constructed based on the n layers; feature extraction is performed on K preset detection windows to obtain K groups of first target features, where each preset detection window corresponds to a group of first target features and K is an integer greater than or equal to 1; the K groups of second target features are determined according to the K groups of first target features; and the M specified decision trees are used to make a decision on the K groups of second target features, obtaining the size and position of the target face frame, where M is an integer greater than or equal to 1.
  • Referring to FIG. 2a, FIG. 2a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention.
  • the terminal described in this embodiment includes: an obtaining unit 201, a calculating unit 202, a constructing unit 203, an extracting unit 204, a determining unit 205, and a determining unit 206, as follows:
  • An obtaining unit 201 configured to acquire an image to be processed
  • the calculating unit 202 is configured to calculate the number of layers of the feature pyramid of the image to be processed to obtain n layers, where n is an integer greater than or equal to 1;
  • the constructing unit 203 is configured to construct the feature pyramid based on the n layers;
  • the extracting unit 204 is configured to perform feature extraction on K preset detection windows on the feature pyramid to obtain K groups of first target features, where each preset detection window corresponds to a group of first target features, and K is an integer greater than or equal to 1;
  • the determining unit 205 is configured to determine the K groups of second target features according to the K groups of first target features;
  • the decision unit 206 is configured to make a decision on the K groups of second target features by using M specified decision trees to obtain the size and position of the target face frame, where M is an integer greater than or equal to 1.
  • the calculating unit 202 is specifically configured to:
  • n is the number of layers of the feature pyramid
  • k up is the upsampling multiple applied to the image to be processed
  • w img , h img respectively representing the width and height of the image to be processed
  • w m , h m respectively representing the width and height of the preset face detection model
  • n octave refers to the number of image layers between every two adjacent octaves (each doubling of scale) in the feature pyramid.
  • As described in FIG. 2b, the constructing unit 203 of the terminal depicted in FIG. 2a may include: a first determining module 2031, a first extraction module 2032, a second determining module 2033, and a constructing module 2034, as follows:
  • the first determining module 2031 is configured to determine that the n layers include P real feature layers and Q approximate feature layers, where P is an integer greater than or equal to 1, and Q is an integer greater than or equal to 0;
  • a first extraction module 2032 configured to perform feature extraction on the P real feature layers to obtain a third target feature
  • a second determining module 2033 configured to determine, according to the P real feature layers, a fourth target feature of the Q approximate feature layers
  • the constructing module 2034 is configured to form the third target feature and the fourth target feature to form the feature pyramid.
  • As described in FIG. 2c, the determining unit 205 of the terminal depicted in FIG. 2a may include: a second extracting module 2051, a first training module 2052, a second training module 2053, and a combining module 2054, as follows:
  • a second extraction module 2051 configured to separately extract color features from the K group first target features to obtain the K group color features
  • the first training module 2052 is configured to calculate pixel comparison features for the i-th group of color features, train the first preset face model based on the calculated pixel comparison features, and extract the first target pixel comparison features from the trained first preset face model to obtain a fifth target feature, where the i-th group of color features is any one of the K groups of color features;
  • the second training module 2053 is configured to train a second preset face model by using the fifth target feature and the first target feature, and to extract a second pixel comparison feature from the trained second preset face model, obtaining a sixth target feature;
  • the combining module 2054 is configured to combine the first target feature and the sixth target feature into the second target feature.
  • As described in FIG. 2d, the decision unit 206 of the terminal depicted in FIG. 2a may include: a decision module 2061 and a merging module 2062, as follows:
  • the decision module 2061 is configured to make a decision, by using the M specified decision trees, on the K groups of second target features on the feature pyramid to obtain X face frames, where X is an integer greater than or equal to 1;
  • the merging module 2062 is configured to merge the X face frames to obtain the size and position of the target face frame.
  • In the terminal described in the embodiment of the present invention, the image to be processed is acquired, and the number of layers of the feature pyramid of the image to be processed is calculated to obtain n layers, where n is an integer greater than or equal to 1; the feature pyramid is constructed based on the n layers; on the feature pyramid, feature extraction is performed on K preset detection windows to obtain K groups of first target features, where each preset detection window corresponds to a group of first target features and K is an integer greater than or equal to 1; the K groups of second target features are determined according to the K groups of first target features; and a decision is made on the K groups of second target features by using M specified decision trees, obtaining the size and position of the target face frame, where M is an integer greater than or equal to 1.
  • Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention.
  • The terminal described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, the output device 2000, the processor 3000, and the memory 4000 are connected via a bus 5000.
  • the input device 1000 may be a touch panel, a physical button, or a mouse.
  • the output device 2000 described above may specifically be a display screen.
  • the above memory 4000 may be a high speed RAM memory or a non-volatile memory such as a magnetic disk memory.
  • The memory 4000 is used to store a set of program code, and the input device 1000, the output device 2000, and the processor 3000 are used to call the program code stored in the memory 4000 to perform the following operations:
  • the processor 3000 is configured to:
  • n is an integer greater than or equal to 1;
  • the processor 3000 calculates the number of layers of the feature pyramid of the image to be processed, and obtains n layers, including:
  • n is the number of layers of the feature pyramid
  • k up is the upsampling multiple applied to the image to be processed
  • w img , h img respectively representing the width and height of the image to be processed
  • w m , h m respectively representing the width and height of the preset face detection model
  • n octave refers to the number of image layers between every two adjacent octaves (each doubling of scale) in the feature pyramid.
  • the foregoing processor 3000 constructs the feature pyramid based on the n layers, including:
  • the n layers comprise P real feature layers and Q approximate feature layers, where P is an integer greater than or equal to 1, and Q is an integer greater than or equal to 0;
  • the third target feature and the fourth target feature constitute the feature pyramid.
  • the processor 3000 determines, according to the K group first target feature, the K group second target feature, including:
  • calculating pixel comparison features for the i-th group of color features, training the first preset face model based on the calculated pixel comparison features, and extracting the first target pixel comparison features from the trained first preset face model to obtain a fifth target feature, where the i-th group of color features is any one of the K groups of color features;
  • the processor 3000 determines, by using the M specified decision trees, the second target feature of the K group, and obtains the size and location of the target face frame, including:
  • the X face frames are merged to obtain the size and position of the target face frame.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium can store a program, and the program includes some or all of the steps of any one of the image processing methods described in the foregoing method embodiments.
  • embodiments of the present invention can be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The computer program may be stored in or distributed on a suitable medium, supplied together with other hardware or as part of the hardware, or distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • The instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image processing method and a terminal. The method comprises: acquiring an image to be processed; calculating the number of layers of a feature pyramid of the image to be processed to obtain n layers, n being an integer greater than or equal to 1; constructing the feature pyramid based on the n layers; performing feature extraction on K preset detection windows on the feature pyramid to obtain K groups of first target features, each group of preset detection windows corresponding to a group of first target features, K being an integer greater than or equal to 1; determining K groups of second target features according to the K groups of first target features; and making a decision on the K groups of second target features by using M specified decision trees to obtain the size and position of a target face frame, M being an integer greater than or equal to 1. The position of a face can thus be detected quickly.
PCT/CN2017/087702 2016-11-07 2017-06-09 Image processing method and terminal WO2018082308A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610982791.1A CN106650615B (zh) 2016-11-07 2016-11-07 一种图像处理方法及终端
CN201610982791.1 2016-11-07

Publications (1)

Publication Number Publication Date
WO2018082308A1 (fr) 2018-05-11

Family

ID=58806382

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087702 WO2018082308A1 (fr) 2016-11-07 Image processing method and terminal

Country Status (2)

Country Link
CN (1) CN106650615B (fr)
WO (1) WO2018082308A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650615B (zh) * 2016-11-07 2018-03-27 深圳云天励飞技术有限公司 一种图像处理方法及终端
CN108229297B (zh) * 2017-09-30 2020-06-05 深圳市商汤科技有限公司 人脸识别方法和装置、电子设备、计算机存储介质
CN109727188A (zh) * 2017-10-31 2019-05-07 比亚迪股份有限公司 图像处理方法及其装置、安全驾驶方法及其装置
CN108090417A (zh) * 2017-11-27 2018-05-29 上海交通大学 一种基于卷积神经网络的人脸检测方法
CN109918969B (zh) * 2017-12-12 2021-03-05 深圳云天励飞技术有限公司 人脸检测方法及装置、计算机装置和计算机可读存储介质
CN112424787A (zh) * 2018-09-20 2021-02-26 华为技术有限公司 提取图像关键点的方法及装置
AU2018452738B2 (en) * 2018-12-12 2022-07-21 Paypal, Inc. Binning for nonlinear modeling
CN109902576B (zh) * 2019-01-25 2021-05-18 华中科技大学 一种头肩图像分类器的训练方法及应用
CN109871829B (zh) * 2019-03-15 2021-06-04 北京行易道科技有限公司 一种基于深度学习的检测模型训练方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080232698A1 (en) * 2007-03-21 2008-09-25 Ricoh Company, Ltd. Object image detection method and object image detection device
CN102831411A (zh) * 2012-09-07 2012-12-19 云南晟邺科技有限公司 一种快速人脸检测方法
CN103049751A (zh) * 2013-01-24 2013-04-17 苏州大学 一种改进的加权区域匹配高空视频行人识别方法
CN103778430A (zh) * 2014-02-24 2014-05-07 东南大学 一种基于肤色分割和AdaBoost相结合的快速人脸检测方法
CN106650615A (zh) * 2016-11-07 2017-05-10 深圳云天励飞技术有限公司 一种图像处理方法及终端

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567048B (zh) * 2008-04-21 2012-06-06 夏普株式会社 图像辨别装置及图像检索装置
CN105512638B (zh) * 2015-12-24 2018-07-31 王华锋 一种基于融合特征的人脸检测与对齐方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080232698A1 (en) * 2007-03-21 2008-09-25 Ricoh Company, Ltd. Object image detection method and object image detection device
CN102831411A (zh) * 2012-09-07 2012-12-19 云南晟邺科技有限公司 一种快速人脸检测方法
CN103049751A (zh) * 2013-01-24 2013-04-17 苏州大学 一种改进的加权区域匹配高空视频行人识别方法
CN103778430A (zh) * 2014-02-24 2014-05-07 东南大学 一种基于肤色分割和AdaBoost相结合的快速人脸检测方法
CN106650615A (zh) * 2016-11-07 2017-05-10 深圳云天励飞技术有限公司 一种图像处理方法及终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ABRAMSON, Y. ET AL.: "Yet Even Faster (YEF) Real-Time Object Detection", INT. J. INTELLIGENT SYSTEMS TECHNOLOGIES AND APPLICATIONS, vol. 2, no. 2/3, 30 June 2007 (2007-06-30), XP055481201 *

Also Published As

Publication number Publication date
CN106650615A (zh) 2017-05-10
CN106650615B (zh) 2018-03-27

Similar Documents

Publication Publication Date Title
WO2018082308A1 (fr) Image processing method and terminal
US10096122B1 (en) Segmentation of object image data from background image data
US10872262B2 (en) Information processing apparatus and information processing method for detecting position of object
JP5554984B2 (ja) パターン認識方法およびパターン認識装置
US11087169B2 (en) Image processing apparatus that identifies object and method therefor
WO2017190646A1 (fr) Procédé et appareil de traitement d'image de visage et support d'informations
WO2019114036A1 (fr) Procédé et dispositif de détection de visage, dispositif informatique et support d'informations lisible par ordinateur
WO2020199478A1 (fr) Procédé d'entraînement de modèle de génération d'images, procédé, dispositif et appareil de génération d'images, et support de stockage
WO2017088432A1 (fr) Procédé et dispositif de reconnaissance d'image
CN108446694B (zh) 一种目标检测方法及装置
CN109960742B (zh) 局部信息的搜索方法及装置
WO2018090937A1 (fr) Support d'informations, terminal et procédé de traitement d'image
JP6482195B2 (ja) 画像認識装置、画像認識方法及びプログラム
US9626552B2 (en) Calculating facial image similarity
CN110765860A (zh) 摔倒判定方法、装置、计算机设备及存储介质
WO2013086255A1 (fr) Calculs de distances alignées par mouvement permettant des comparaisons d'images
CN109886223B (zh) 人脸识别方法、底库录入方法、装置及电子设备
WO2022174523A1 (fr) Procédé d'extraction d'une caractéristique de démarche d'un piéton, et procédé et système de reconnaissance de démarche
WO2023151237A1 (fr) Procédé et appareil d'estimation de position du visage, dispositif électronique et support de stockage
US11741615B2 (en) Map segmentation method and device, motion estimation method, and device terminal
WO2023159898A1 (fr) Système, procédé et appareil de reconnaissance d'actions, procédé et appareil d'entraînement de modèles, dispositif informatique et support de stockage lisible par ordinateur
CN106407978B (zh) 一种结合似物度的无约束视频中显著物体检测方法
WO2015186347A1 (fr) Système de détection, procédé de détection, et support de stockage de programmes
WO2021084972A1 (fr) Dispositif de suivi d'objet et procédé de suivi d'objet
WO2020001016A1 (fr) Procédé et appareil de génération d'image animée et dispositif électronique et support d'informations lisible par ordinateur

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17867013

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17867013

Country of ref document: EP

Kind code of ref document: A1