CN111160291A - Human eye detection method based on depth information and CNN


Info

Publication number: CN111160291A
Authority: CN (China)
Prior art keywords: depth, image, CNN, face, human eye
Prior art date: 2019-12-31
Legal status: Granted
Application number: CN201911416013.6A
Other languages: Chinese (zh)
Other versions: CN111160291B (en)
Inventors: 朱志林 (Zhu Zhilin), 张伟香 (Zhang Weixiang), 王禹衡 (Wang Yuheng), 方勇 (Fang Yong)
Current Assignee: Shanghai Evis Technology Co., Ltd.
Original Assignee: Shanghai Evis Technology Co., Ltd.
Priority date: 2019-12-31
Filing date: 2019-12-31
Publication date: 2020-05-15
Application filed by Shanghai Evis Technology Co., Ltd.
Priority to CN201911416013.6A
Publication of CN111160291A
Application granted
Publication of CN111160291B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/197: Eye characteristics; Matching; Classification
    • G06V 10/26: Image preprocessing; Segmentation of patterns in the image field
    • G06V 10/751: Pattern matching; Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/161: Human faces; Detection; Localisation; Normalisation
    • G06V 40/171: Human faces; Local features and components; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V 40/193: Eye characteristics; Preprocessing; Feature extraction


Abstract

The invention discloses a human eye detection method based on depth information and a CNN, comprising the following steps: step S1, inputting an image and its corresponding depth map; step S2, preprocessing the depth map according to the detection distance range and removing the background outside that range; step S3, performing depth histogram segmentation on the preprocessed depth map to obtain target candidate regions; step S4, performing matching verification on the candidate regions with a head-shoulder template to determine face candidate regions; step S5, comparing the overlapping areas of the face candidate frames and merging the candidate frames that meet a set threshold; and step S6, performing face frame regression and key point regression on the image region corresponding to the face candidate region in the trained CNN model to obtain the positions of the human eyes. The method improves detection accuracy, reduces computational complexity, raises detection efficiency, ensures real-time and accurate detection, and meets the requirements of naked-eye 3D displays.

Description

Human eye detection method based on depth information and CNN
Technical Field
The invention belongs to the technical field of human eye detection, and particularly relates to a human eye detection method based on depth information and a CNN.
Background
As naked-eye 3D display technology matures, accurately detecting the viewer's eye positions in real time and rendering the best 3D viewing effect for those positions has become an important development direction for naked-eye 3D displays. Current human eye detection algorithms train a CNN model and, on the basis of face region classification, regress the positions of the face frame and the eyes, thereby detecting the eyes accurately.
Meanwhile, with the development of depth cameras and their depth acquisition technology, scene depth information can be extracted to obtain depth features that the color image does not contain. These depth features allow objects in the scene to be segmented accurately and their depth ranges to be obtained, which greatly reduces the computational complexity of extracting face candidate regions with a CNN.
In view of this, a method combining depth information and a CNN is designed to detect human eyes and to meet the real-time and accuracy requirements of naked-eye 3D displays.
Disclosure of Invention
The invention provides a human eye detection method based on depth information and a CNN, which improves detection accuracy, reduces computational complexity, and raises detection efficiency.
In order to solve the above technical problem, according to one aspect of the present invention, the following technical solution is adopted:
A human eye detection method based on depth information and a CNN comprises the following steps:
step S1, inputting an image and its corresponding depth map;
step S2, preprocessing the depth map according to the detection distance range and removing the background outside that range;
step S3, performing depth histogram segmentation on the preprocessed depth map to obtain target candidate regions;
step S4, performing matching verification on the candidate regions with a head-shoulder template to determine face candidate regions;
step S5, comparing the overlapping areas of the face candidate frames and merging the candidate frames that meet a set threshold;
and step S6, performing face frame regression and key point regression on the image region corresponding to the face candidate region in the trained CNN model to obtain the positions of the human eyes.
As an embodiment of the present invention, in step S2, the pixels whose depth values fall within the detection range are extracted from the depth map and converted into a binary mask whose pixels are set to 255 inside the range and 0 elsewhere; the mask is then multiplied point-wise with the depth map to remove the background pixels outside the range.
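A minimal sketch of this preprocessing step, assuming an OpenCV/NumPy pipeline and a depth map in millimeters, is given below; the bounds near_mm and far_mm are illustrative values, not taken from the patent.

    import cv2
    import numpy as np

    def remove_background(depth_mm: np.ndarray, near_mm: int = 500,
                          far_mm: int = 1500) -> np.ndarray:
        """Keep only the pixels whose depth lies inside [near_mm, far_mm]."""
        # Binary mask: 255 inside the detection range, 0 elsewhere.
        mask = cv2.inRange(depth_mm, near_mm, far_mm)
        # Point-wise multiplication zeroes out the out-of-range background.
        return depth_mm * (mask > 0)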
As an embodiment of the present invention, in step S3, the masked depth values are converted to the range 0-255, the depth map is mapped from the xy plane of the screen coordinate system to the xz plane, and the mapped image is projected onto the x and z axes respectively to segment the object ranges; each segmented range corresponds to an x-axis range and a z-axis range, and the template image at the matching scale for the region is obtained from the middle value of the corresponding depth range.
As an embodiment of the present invention, in step S3, the differences in depth value between object positions are fully exploited: the screen coordinate system is first converted from the xy axes to the xz axes and the objects in the scene are projected vertically onto the xz plane; the peaks and valleys of the x-axis vertical projection are then located on the object projection map, each peak position is taken to indicate the presence of an object, and the valleys before and after each peak serve as segmentation thresholds, so that the objects in the scene are segmented along the x axis;
a vertical projection onto the z axis is then performed for each segmented x-axis region, the peaks and valleys of that projection are located, each peak is taken as the depth of an object, and the valleys before and after the peak serve as segmentation thresholds; together with the x-axis segmentation, these form the segmentation range of each object, so that every object in the scene is segmented.
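A sketch of this two-stage projection segmentation follows, assuming the depth map has already been masked and rescaled to 0-255 as an 8-bit image; scipy.signal.find_peaks and the min_height threshold stand in for the patent's unspecified peak and valley search.

    import numpy as np
    from scipy.signal import find_peaks

    def split_at_valleys(projection, min_height):
        """Split a 1-D vertical projection at the valleys around each peak."""
        peaks, _ = find_peaks(projection, height=min_height)
        segments = []
        for p in peaks:
            # The lowest projection value on each side of the peak serves as the valley.
            left = p - int(np.argmin(projection[p::-1]))
            right = p + int(np.argmin(projection[p:]))
            segments.append((left, right))
        return segments

    def segment_objects(depth8, min_height=50.0):
        """x-axis segmentation first, then z-axis segmentation inside each x range."""
        x_proj = (depth8 > 0).sum(axis=0).astype(float)  # occupancy along the x axis
        regions = []
        for x0, x1 in split_at_valleys(x_proj, min_height):
            # The depth histogram inside this x range is the projection onto the z axis.
            z_proj = np.bincount(depth8[:, x0:x1 + 1].ravel(), minlength=256).astype(float)
            z_proj[0] = 0.0                              # ignore the masked background
            for z0, z1 in split_at_valleys(z_proj, min_height):
                regions.append(((x0, x1), (z0, z1)))     # ((x range), (depth range))
        return regions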
As an embodiment of the present invention, in step S3, a method in which the template scale adapts to the depth value is adopted, avoiding the complexity of conventional simultaneous multi-scale detection; according to the depth range obtained for each object, the middle value of the range is taken as the object's depth, and the head-shoulder template image matching that depth value is selected, so that the template is matched at its optimal scale and simultaneous multi-scale template detection is avoided.
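The selection could look like the following sketch; the inverse depth-to-scale mapping and the reference depth ref_depth are assumptions made for illustration, since the patent only states that the head-shoulder template matching the region's middle depth value is chosen.

    import cv2

    def select_template(base_template, z0, z1, ref_depth=128.0):
        """Scale the head-shoulder template to suit an object's depth range."""
        mid_depth = 0.5 * (z0 + z1)               # middle value of the depth range
        scale = ref_depth / max(mid_depth, 1.0)   # nearer objects appear larger
        h, w = base_template.shape[:2]
        return cv2.resize(base_template,
                          (max(int(w * scale), 1), max(int(h * scale), 1)))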
As an embodiment of the present invention, in step S4, the depth map is divided into a plurality of detection parts and the depth regions are detected in parallel; using a template matching method, the similarity between the head-shoulder template image corresponding to the current depth value and the input depth map is computed through a sliding window with a step size of 1, and the resulting values are stored in a result map.
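In OpenCV terms this sliding-window similarity search reduces to a single call, as sketched below; the normalized correlation coefficient is an assumption, since the patent does not name the similarity measure.

    import cv2

    def head_shoulder_scores(depth8, template):
        """Slide the template over the depth map with step 1 and return the result map."""
        # scores[y, x] is the similarity of the window whose top-left corner is (x, y).
        return cv2.matchTemplate(depth8, template, cv2.TM_CCOEFF_NORMED)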
In one embodiment of the present invention, in step S5, candidate frames in the result map are merged by non-maximum suppression; the head is segmented from each merged candidate frame according to the head-shoulder ratio, and the segmented position in the depth map is mapped onto the image to serve as the input image of the CNN model.
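A standard non-maximum suppression routine of the kind the patent invokes is sketched below; the IoU threshold of 0.5 is an illustrative value, the patent speaking only of a set threshold.

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """boxes: (N, 4) array of x1, y1, x2, y2; returns indices of the kept boxes."""
        x1, y1, x2, y2 = boxes.T
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]           # highest-scoring box first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # Intersection of the kept box with all remaining boxes.
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            # Drop the boxes that overlap the kept box beyond the threshold.
            order = order[1:][iou <= iou_thresh]
        return keep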
In one embodiment of the present invention, in step S6, the face region obtained by image segmentation is resized to the same scale as the training images and used as the input image of the CNN model.
As an embodiment of the present invention, in step S6, a model with four convolutional layers and two fully connected layers performs face binary classification and regression of the face frame and key points; the face binary classification uses a softmax loss function, and the face frame and face key point regression use a minimum mean-square-error loss to train the model. The trained model is imported into the network structure, and the face image segmented with the aid of the depth information is input to obtain the refined face frame position and face key points.
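A minimal PyTorch sketch of such a network and its joint loss is given below. The channel widths, the stride-2 downsampling, the 48x48 input (chosen so that four convolutions yield the 3x3x128 feature map mentioned in step S61 of the detailed description), and the five-landmark head are assumptions; the patent fixes only the layer counts, the 3x3 kernels, the PReLU activations, the softmax classification loss, and the mean-square-error regression loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EyeNet(nn.Module):
        def __init__(self, num_points=5):
            super().__init__()
            chans = [3, 32, 64, 64, 128]
            layers = []
            for cin, cout in zip(chans[:-1], chans[1:]):
                # A 3x3 convolution followed by a PReLU layer.
                layers += [nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                           nn.PReLU(cout)]
            self.features = nn.Sequential(*layers)   # the 4 convolutional layers
            self.fc1 = nn.Linear(128 * 3 * 3, 256)   # 48x48 input -> 3x3x128 map
            self.fc2 = nn.Linear(256, 2 + 4 + 2 * num_points)

        def forward(self, x):
            x = self.features(x).flatten(1)
            out = self.fc2(F.relu(self.fc1(x)))
            # face/non-face logits, face frame, landmark coordinates
            return out[:, :2], out[:, 2:6], out[:, 6:]

    def joint_loss(cls, box, pts, y_cls, y_box, y_pts):
        """Softmax (cross-entropy) loss for classification, MSE for both regressions."""
        return (F.cross_entropy(cls, y_cls)
                + F.mse_loss(box, y_box)
                + F.mse_loss(pts, y_pts))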
The beneficial effects of the invention are as follows: the human eye detection method based on depth information and a CNN improves detection accuracy, reduces computational complexity, raises detection efficiency, ensures real-time and accurate detection, and meets the requirements of naked-eye 3D displays.
The human eye detection method provided by the invention first uses depth information to coarsely extract the face region, and then refines the face and eye positions with a CNN. Exploiting the horizontal and depth separation between objects in a scene, the method segments the objects using depth information and then matches a head-shoulder template against each object, so that the face is located quickly; this greatly reduces the complexity of face detection while preserving its accuracy, and shortens the running time of the algorithm. On the segmented face region, a network of 4 convolutional layers and 2 fully connected layers performs face frame edge regression and landmark position regression, and pre-trained model parameters enable accurate localization of the eyes.
Drawings
Fig. 1 is a flowchart of a human eye detection method according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the invention, the preferred embodiments are described below by way of example; it should be understood that the description is intended to further illustrate the features and advantages of the invention, not to limit the scope of the claims.
The description in this section covers several exemplary embodiments only, and the invention is not limited to the scope of these embodiments. Replacing some features of the embodiments with the same or similar prior-art means also falls within the scope of the disclosure and protection of the invention.
The invention discloses a human eye detection method based on depth information and a CNN; FIG. 1 is a flowchart of the method in an embodiment of the invention. Referring to FIG. 1, in an embodiment of the invention, the method includes the following steps:
s1, obtaining an image and a corresponding depth map through a depth camera;
and S2, extracting pixel points of a detection range from the depth value in the depth map, converting the pixel points into a mask with the pixel value set to be 255 and the rest 0, and multiplying the mask with the depth map points to remove pixels outside the range.
And S3, converting the depth value of the depth map after being masked to 0-255, mapping the depth map to an xz-axis plane from an xy axis of a screen coordinate system, performing x-axis projection on the mapped image, finding out the positions of wave crests and wave troughs, and taking the front and rear wave trough values of the wave crest position as a segmentation area of the x axis. And respectively performing z-axis projection on the divided regions, finding out the wave crests and wave troughs of the projection, and taking the wave troughs before and after the wave crests as a region of the z-axis. The divided range corresponds to an x-axis region range and a z-axis region range, and a template image with a region template matching corresponding scale is obtained according to the intermediate value of the corresponding depth range.
And S4, dividing the depth map into a plurality of detection parts, detecting a plurality of depth areas in parallel, detecting the similarity between the head and shoulder template image corresponding to the current depth value and the input depth map by adopting a template matching method and a sliding window with the step length of 1, and storing the obtained value into a result map.
And S5, comparing the overlapped areas of the face candidate frames, and merging the candidate frames meeting the set threshold.
In an embodiment of the present invention, each pixel value of the result graph is traversed, a threshold is set, a candidate frame meeting a threshold condition is used as a candidate frame, a non-maximum suppression method is adopted for the candidate frame, and the candidate frames meeting a certain IOU threshold (intersection ratio threshold) are merged.
And S6, mapping the combined candidate frame to an image, dividing a head region to be used as an input image, and performing face frame regression and key point regression calculation in the trained CNN model to obtain the position of the human eyes.
In an embodiment of the present invention, in step S6, the face frame edge and face key point regression proceeds as follows:
Step S61: the face region is resized to the training scale and fed into 4 convolutional layers with 3x3 kernels, a PReLU layer following each convolutional layer; the trained model weight parameters are imported, and a 3x3x128 feature map is extracted.
Step S62: the obtained feature map is passed through 2 fully connected layers to output a vector of length 2 + 4 + 2 x PointNum, where 2 indicates the face/non-face classification, 4 represents the face frame position, and PointNum is the number of face key points.
Step S63: the eye positions are extracted from the obtained face key points and mapped back onto the source image, realizing eye position detection.
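The splitting of the output vector and the mapping of the eye landmarks back to the source image could look like the sketch below; the crop-relative landmark convention and the assumption that the first two key points are the eyes are illustrative, since the patent does not fix the landmark order.

    import numpy as np

    def eyes_in_source(out, crop_xywh, num_points=5):
        """out: output vector of length 2 + 4 + 2*num_points for one face crop."""
        cls, box, pts = out[:2], out[2:6], out[6:6 + 2 * num_points]
        x0, y0, w, h = crop_xywh                # face crop inside the source image
        pts = pts.reshape(num_points, 2)        # crop-normalized (x, y) pairs
        pts_src = pts * np.array([w, h]) + np.array([x0, y0])
        return pts_src[0], pts_src[1]           # assumed order: left eye, right eye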
In summary, the human eye detection method based on depth information and a CNN provided by the invention ensures real-time and accurate detection and meets the requirements of naked-eye 3D displays.
The method first uses depth information to coarsely extract the face region, and then refines the face and eye positions with a CNN. Exploiting the horizontal and depth separation between objects in a scene, it segments the objects using depth information and then matches a head-shoulder template against each object, so that the face is located quickly; this greatly reduces the complexity of face detection while preserving its accuracy, and shortens the running time of the algorithm. On the segmented face region, the invention applies a network of 4 convolutional layers and 2 fully connected layers to perform face frame edge regression and key point position regression, and pre-trained model parameters enable accurate localization of the eyes.
The technical features of the above embodiments may be combined arbitrarily; for brevity, not every possible combination is described, but any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (9)

1. A human eye detection method based on depth information and a CNN, characterized in that the method comprises the following steps:
step S1, inputting an image and its corresponding depth map;
step S2, preprocessing the depth map according to the detection distance range and removing the background outside that range;
step S3, performing depth histogram segmentation on the preprocessed depth map to obtain target candidate regions;
step S4, performing matching verification on the candidate regions with a head-shoulder template to determine face candidate regions;
step S5, comparing the overlapping areas of the face candidate frames and merging the candidate frames that meet a set threshold;
and step S6, performing face frame regression and key point regression on the image region corresponding to the face candidate region in the trained CNN model to obtain the positions of the human eyes.
2. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S2, the pixels whose depth values fall within the detection range are extracted from the depth map and converted into a mask whose pixels are set to 255 inside the range and 0 elsewhere, and the mask is multiplied point-wise with the depth map to remove the background pixels outside the range.
3. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S3, the masked depth values are converted to the range 0-255, the depth map is mapped from the xy plane of the screen coordinate system to the xz plane, and the mapped image is projected onto the x and z axes respectively to segment the object ranges; each segmented range corresponds to an x-axis range and a z-axis range, and the template image at the matching scale for the region is obtained from the middle value of the corresponding depth range.
4. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S3, the differences in depth value between object positions are fully exploited: the screen coordinate system is first converted from the xy axes to the xz axes and the objects in the scene are projected vertically onto the xz plane; the peaks and valleys of the x-axis vertical projection are then located on the object projection map, each peak position is taken to indicate the presence of an object, and the valleys before and after each peak serve as segmentation thresholds, so that the objects in the scene are segmented along the x axis;
a vertical projection onto the z axis is then performed for each segmented x-axis region, the peaks and valleys of that projection are located, each peak is taken as the depth of an object, and the valleys before and after the peak serve as segmentation thresholds; together with the x-axis segmentation, these form the segmentation range of each object, so that every object in the scene is segmented.
5. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S3, a method in which the template scale adapts to the depth value is adopted; according to the depth range obtained for each object, the middle value of the range is taken as the object's depth, and the head-shoulder template image matching that depth value is selected.
6. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S4, the depth map is divided into a plurality of detection parts and the depth regions are detected in parallel; using a template matching method, the similarity between the head-shoulder template image corresponding to the current depth value and the input depth map is computed through a sliding window with a step size of 1, and the resulting values are stored in a result map.
7. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S5, candidate frames in the result map are merged by non-maximum suppression; the head is segmented from each merged candidate frame according to the head-shoulder ratio, and the segmented position in the depth map is mapped onto the image to serve as the input image of the CNN model.
8. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S6, the face region obtained by image segmentation is resized to the same scale as the training images and used as the input image of the CNN model.
9. The depth information and CNN-based human eye detection method of claim 1, wherein:
in step S6, a model with four convolutional layers and two fully connected layers performs face binary classification and regression of the face frame and key points, wherein the face binary classification uses a softmax loss function, and the face frame and face key point regression use a minimum mean-square-error loss to train the model; the trained model is imported into the network structure, and the face image segmented with the aid of the depth information is input to obtain the refined face frame position and face key points.
CN201911416013.6A (priority and filing date 2019-12-31): Human eye detection method based on depth information and CNN. Active; granted as CN111160291B.

Priority Applications (1)

Application Number: CN201911416013.6A (granted as CN111160291B)
Priority Date / Filing Date: 2019-12-31
Title: Human eye detection method based on depth information and CNN

Applications Claiming Priority (1)

Application Number: CN201911416013.6A (granted as CN111160291B)
Priority Date / Filing Date: 2019-12-31
Title: Human eye detection method based on depth information and CNN

Publications (2)

CN111160291A (application publication): 2020-05-15
CN111160291B (granted publication): 2023-10-31

Family

Family ID: 70560194

Family Applications (1)

Application Number: CN201911416013.6A (Active; granted as CN111160291B)
Priority Date / Filing Date: 2019-12-31
Title: Human eye detection method based on depth information and CNN

Country Status (1)

CN: CN111160291B


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120051588A1 (en) * 2009-12-21 2012-03-01 Microsoft Corporation Depth projector system with integrated vcsel array
CN102737235A (en) * 2012-06-28 2012-10-17 中国科学院自动化研究所 Head posture estimation method based on depth information and color image
JP2015106252A (en) * 2013-11-29 2015-06-08 シャープ株式会社 Face direction detection device and three-dimensional measurement device
CN108256391A (en) * 2016-12-29 2018-07-06 广州映博智能科技有限公司 A kind of pupil region localization method based on projecting integral and edge detection
CN108174182A * 2017-12-30 2018-06-15 上海易维视科技股份有限公司 Three-dimensional tracking naked-eye stereoscopic display viewing-zone adjustment method and display system
CN108615016A (en) * 2018-04-28 2018-10-02 北京华捷艾米科技有限公司 Face critical point detection method and face critical point detection device
CN109034051A (en) * 2018-07-24 2018-12-18 哈尔滨理工大学 Human-eye positioning method
CN109725721A (en) * 2018-12-29 2019-05-07 上海易维视科技股份有限公司 Human-eye positioning method and system for naked eye 3D display system
CN109961006A (en) * 2019-01-30 2019-07-02 东华大学 A kind of low pixel multiple target Face datection and crucial independent positioning method and alignment schemes
CN109993086A (en) * 2019-03-21 2019-07-09 北京华捷艾米科技有限公司 Method for detecting human face, device, system and terminal device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘林涛 (Liu Lintao): "Parallel algorithm and implementation of real-time human eye detection and tracking for naked-eye 3D display" *
李燕晓 (Li Yanxiao): "Research on face detection and eye localization algorithms" *
王伟 (Wang Wei), 张佑生 (Zhang Yousheng), 方芳 (Fang Fang): "A survey of face detection and recognition technology" *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036284A (en) * 2020-08-25 2020-12-04 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN112036284B (en) * 2020-08-25 2024-04-19 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN112346258A (en) * 2020-11-06 2021-02-09 上海易维视科技有限公司 Grating visual area calibration method and system based on square wave fitting
CN112365547A (en) * 2020-11-06 2021-02-12 上海易维视科技有限公司 Camera correction method and system based on multiple depth grating visual points
CN112365547B (en) * 2020-11-06 2023-08-22 上海易维视科技有限公司 Camera correction method and system based on multi-depth grating visual point
CN112257674A (en) * 2020-11-17 2021-01-22 珠海大横琴科技发展有限公司 Visual data processing method and device
CN112329752A (en) * 2021-01-06 2021-02-05 腾讯科技(深圳)有限公司 Training method of human eye image processing model, image processing method and device
CN112329752B (en) * 2021-01-06 2021-04-06 腾讯科技(深圳)有限公司 Training method of human eye image processing model, image processing method and device
CN113257392A (en) * 2021-04-20 2021-08-13 哈尔滨晓芯科技有限公司 Automatic preprocessing method for universal external data of ultrasonic machine
CN113257392B (en) * 2021-04-20 2024-04-16 哈尔滨晓芯科技有限公司 Automatic preprocessing method for universal external data of ultrasonic machine

Also Published As

CN111160291B: published 2023-10-31

Similar Documents

Publication Publication Date Title
CN111160291B (en) Human eye detection method based on depth information and CNN
CN103325112B (en) Moving target method for quick in dynamic scene
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN110555412B (en) End-to-end human body gesture recognition method based on combination of RGB and point cloud
Kamencay et al. Improved Depth Map Estimation from Stereo Images Based on Hybrid Method.
CN108734194B (en) Virtual reality-oriented single-depth-map-based human body joint point identification method
WO2019071976A1 (en) Panoramic image saliency detection method based on regional growth and eye movement model
CN103530599A (en) Method and system for distinguishing real face and picture face
WO2020134818A1 (en) Image processing method and related product
CN106934351A (en) Gesture identification method, device and electronic equipment
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
Roth et al. Deep end-to-end 3d person detection from camera and lidar
Chen et al. Shape prior guided instance disparity estimation for 3d object detection
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN111160292B (en) Human eye detection method
Wang et al. Handling occlusion and large displacement through improved RGB-D scene flow estimation
Li et al. Monocular 3-D Object Detection Based on Depth-Guided Local Convolution for Smart Payment in D2D Systems
CN116703996A (en) Monocular three-dimensional target detection algorithm based on instance-level self-adaptive depth estimation
Gan et al. A dynamic detection method to improve SLAM performance
Fan et al. Human-M3: A Multi-view Multi-modal Dataset for 3D Human Pose Estimation in Outdoor Scenes
Huang et al. 3D object detection incorporating instance segmentation and image restoration
Jing et al. Static Map Building Scheme for Vision and Lidar Fusion
Kerdvibulvech Hybrid model of human hand motion for cybernetics application
He et al. Lane Detection and Tracking through Affine Rectification.
Erabati et al. SL3D-Single Look 3D Object Detection based on RGB-D Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant