CN110099254B - Driver face tracking device and method - Google Patents

Driver face tracking device and method

Info

Publication number
CN110099254B
CN110099254B
Authority
CN
China
Prior art keywords
face
frame
image
deviation
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910425924.9A
Other languages
Chinese (zh)
Other versions
CN110099254A (en)
Inventor
庄千洋
张克华
王佳逸
陈倩倩
朱苗苗
丁璐
黄勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN201910425924.9A
Publication of CN110099254A
Application granted
Publication of CN110099254B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a driver face tracking device and method. The method comprises the following steps: capturing face image information of a driver; judging whether a face template exists; locating the face with a face classifier; predicting the motion state of the face; calculating the deviation between the center of the face region and the center of the image; and having the microcontroller drive the two-axis pan-tilt to follow the face. The invention can follow the target face in real time and improves the quality of the acquired images.

Description

Driver face tracking device and method
Technical Field
The invention belongs to the technical field of face tracking, and particularly relates to a driver face tracking device and method.
Background
With the development of technology, face tracking has found increasingly wide application in fields such as security monitoring, video conferencing, and vision-based fatigue-driving detection. However, the image acquisition range of a camera is limited, and when the detection target moves outside that range, problems such as missed detection occur. In vision-based driving fatigue detection, if the focal length of the camera is too short, the target features occupy too few pixels; if it is too long, the acquired image covers too small a range, so missed detections and misjudgments easily occur.
Therefore, how to provide a driver face tracking device and method is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a driver face tracking device and a method thereof, which can follow the movement of a target face in real time and improve the quality of acquired images.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a driver face tracking method comprises the following steps:
(1) Acquiring image information containing the face of a driver in a driving environment by using a camera;
(2) If the current frame is the first frame of the video stream, perform a full-image search on the acquired image with a face classifier to obtain the face state information (x, y, v_x, v_y, w, h), crop the face image, and store it as a template; if the current frame is not the first frame, match the face template of the previous frame against the current frame near the previous frame's face coordinate position to obtain a coarse position, then run the face classifier within the coarsely positioned face region to obtain a fine position, yielding the face state information and the face template;
wherein x denotes the X-axis coordinate of the center point of the face regression frame, y denotes the Y-axis coordinate of the center point of the face regression frame, v_x denotes the velocity of the face regression frame along the X axis, v_y denotes the velocity of the face regression frame along the Y axis, w denotes the width of the face regression frame, and h denotes the height of the face regression frame;
(3) Extract x, y, v_x, and v_y from the face state information to form the motion state vector (x, y, v_x, v_y), and estimate the face motion position with an iterative algorithm;
(4) Through the iteration of step (3), estimate the motion state vector (x, y, v_x, v_y) of the face image in the next frame, and calculate the deviation (dx, dy) between the center of the face image and the center of the image acquired by the camera, with the formula as follows:
wherein dx denotes the X-axis deviation between the center of the face frame and the center of the acquired image, dy denotes the Y-axis deviation between the center of the face frame and the center of the acquired image, x denotes the X-axis coordinate of the center point of the face frame, y denotes the Y-axis coordinate of the center point of the face frame, x_0 denotes the X-axis coordinate of the center point of the acquired image, y_0 denotes the Y-axis coordinate of the center point of the acquired image, and a1 is an error parameter;
(5) Encode the deviation generated in step (4) and transmit it to the microcontroller through serial communication; the microcontroller decodes the data and controls the rotation of the two-axis pan-tilt via PWM to correct the deviation.
Preferably, in step (1), the resolution of the image acquired by the camera is 640 × 480 pixels, wherein the face region occupies a specified fraction of the image area.
Preferably, in step (2), the method for matching the face template to the face coordinate position in the current frame comprises: converting the face template to grayscale, downsampling it to 8 × 8, and calculating the average pixel value avg of the downsampled image, with the calculation formula:

avg = (1/64) · Σ_{i=1}^{8} Σ_{j=1}^{8} x_ij

wherein x_ij denotes the gray value of the pixel in row i, column j;
Each pixel is then compared with avg, with the formula:

T = 1 if x_ij ≥ avg, and T = 0 otherwise,

wherein T denotes the comparison result for the pixel and x_ij denotes the pixel value in row i, column j;
Comparing the pixels in order, left to right and top to bottom, generates a 64-digit sequence representing the image features.
Preferably, the iterative algorithm in the step (3) estimates the face motion position, and the formula is as follows:
S(k|k-1)=FS(k-1|k-1)
wherein S(k|k-1) denotes the face motion state estimation vector in the k-th frame image, F denotes the state transition matrix, and S(k-1|k-1) denotes the optimal face motion state estimation vector in the (k-1)-th frame image.
Preferably, the deviation data encoding in the step (5) includes a header, a data length, x-axis deviation data, y-axis deviation data, and an ending symbol.
A driver face tracking device, comprising: a base, a first pan-tilt motor, a pan-tilt bracket, a second pan-tilt motor, and a camera device, wherein the first pan-tilt motor is installed in the base, one end of the pan-tilt bracket is in transmission connection with the output end of the first pan-tilt motor and the other end is in transmission connection with the second pan-tilt motor, the camera device is arranged on one side of the pan-tilt bracket, and the second pan-tilt motor is installed in the camera device.
Preferably, the device further comprises a microcontroller, and the first pan-tilt motor, the second pan-tilt motor, and the camera device are electrically connected with the microcontroller.
Preferably, the microcontroller is mounted in the microcontroller housing, and a microcontroller fan is fixedly mounted on the microcontroller.
Preferably, the camera device comprises an outer shell and a camera, and the camera is installed in the outer shell.
Preferably, the device further comprises a first code wheel and a second code wheel, wherein the bottom end of the first code wheel is in threaded connection with the output shaft of the first pan-tilt motor and its top end is embedded in a groove at the bottom of the pan-tilt bracket; one side of the second code wheel is in threaded connection with the output shaft of the second pan-tilt motor and its other side is embedded in a groove at the top of the pan-tilt bracket.
The invention has the beneficial effects that:
according to the invention, the camera is enabled to follow the human face through the human face classifier, template matching, human face motion prediction technology and motion motor PWM control technology, and high-quality human face characteristic data are collected. The coarse positioning of the face is carried out by adopting a matching algorithm, so that a large number of interference features can be eliminated, and the recognition speed and accuracy of the classifier are improved. According to the iterative face prediction algorithm, the motion trend of the face can be predicted, and the camera following device is controlled to be positioned in advance. When the method is applied to visual driving fatigue detection, face data of a driver under different poses can be captured, the recognition accuracy is improved, the target face can be followed in real time, and the quality of acquired images is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a face tracking method of the present invention.
Fig. 2 is a diagram showing a data encoding structure of the present invention.
Fig. 3 is a schematic structural diagram of the face tracking apparatus of the present invention.
Fig. 4 is a schematic diagram of the overall structure of the present invention.
Wherein, in the drawing,
1 - base lower cover; 2 - base body; 31 - first pan-tilt motor; 32 - second pan-tilt motor; 4 - base upper cover; 51 - first code wheel; 52 - second code wheel; 6 - pan-tilt bracket; 7 - camera device left cover; 8 - camera device rear cover; 9 - camera; 10 - camera device right cover; 11 - camera device front cover; 12 - camera USB cable; 13 - pan-tilt motor power/signal cable; 14 - microcontroller housing; 15 - microcontroller; 16 - microcontroller fan.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1-2, the invention provides a method for tracking a face of a driver, comprising the following steps:
s1: acquiring image information containing the face of a driver in a driving environment by using a camera; wherein, the resolution of the image collected by the camera is 640 pixels and 480 pixels, and the face area occupies the area of the image
S2: if the current frame is the first frame of the video stream detection, carrying out full-image search on the acquired image by utilizing a face classifier to acquire face state information (x, y, v) x ,v y W, h), intercepting a face image and storing a template; if the current frame is not the first frame of the video stream detection, matching the face template of the previous frame with the face coordinate position of the previous frame in the current frame, performing coarse positioning, searching in a coarse positioning face region classifier, performing fine positioning, and obtaining face state information and the face template;
wherein x denotes the X-axis coordinate of the center point of the face regression frame, y denotes the Y-axis coordinate of the center point of the face regression frame, v_x denotes the velocity of the face regression frame along the X axis, v_y denotes the velocity of the face regression frame along the Y axis, w denotes the width of the face regression frame, and h denotes the height of the face regression frame.
The method for matching the face template to the face coordinate position in the current frame comprises: converting the face template to grayscale, downsampling it to 8 × 8, and calculating the average pixel value avg of the downsampled image, with the calculation formula:

avg = (1/64) · Σ_{i=1}^{8} Σ_{j=1}^{8} x_ij

wherein x_ij denotes the gray value of the pixel in row i, column j;
Each pixel is then compared with avg, with the formula:

T = 1 if x_ij ≥ avg, and T = 0 otherwise,

wherein T denotes the comparison result for the pixel and x_ij denotes the pixel value in row i, column j;
by performing a side-to-side, top-to-bottom, ranking comparison, 64 digital sequences representing the image features are generated.
S3: extracting x, y and v from face state information x ,v y Forms motion state vectors (x, y, v) x ,v y ) Estimating the motion position of the face through an iterative algorithm;
the iterative algorithm estimates the human face movement position, and the formula is as follows:
S(k|k-1)=FS(k-1|k-1)
wherein S(k|k-1) denotes the face motion state estimation vector in the k-th frame image, F denotes the state transition matrix, and S(k-1|k-1) denotes the optimal face motion state estimation vector in the (k-1)-th frame image.
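The prediction step S(k|k-1) = FS(k-1|k-1) can be sketched with a constant-velocity state transition matrix. The concrete F below is an assumption (the patent only names F a state transition matrix), with the frame interval taken as one time unit:

```python
# Prediction step for the state (x, y, vx, vy) under a constant-velocity model.
import numpy as np

F = np.array([[1.0, 0.0, 1.0, 0.0],   # x_k  = x_{k-1} + vx
              [0.0, 1.0, 0.0, 1.0],   # y_k  = y_{k-1} + vy
              [0.0, 0.0, 1.0, 0.0],   # vx_k = vx_{k-1}
              [0.0, 0.0, 0.0, 1.0]])  # vy_k = vy_{k-1}

def predict(state):
    """Estimate the frame-k state from the optimal frame-(k-1) estimate."""
    return F @ np.asarray(state, dtype=float)

# A face at (320, 240) moving 5 px/frame along X and -2 px/frame along Y
# is predicted at (325, 238) with its velocity unchanged.
predicted = predict((320.0, 240.0, 5.0, -2.0))
```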
S4: through the iteration of the step S3, the motion state vector (x, y, v) of the face image of the next frame is estimated x ,v y ) Calculating the deviation (dx, dy) between the center of the face image and the center of the image acquired by the camera, wherein the formula is as follows:
wherein dx denotes the X-axis deviation between the center of the face frame and the center of the acquired image, dy denotes the Y-axis deviation between the center of the face frame and the center of the acquired image, x denotes the X-axis coordinate of the center point of the face frame, y denotes the Y-axis coordinate of the center point of the face frame, x_0 denotes the X-axis coordinate of the center point of the acquired image, y_0 denotes the Y-axis coordinate of the center point of the acquired image, and a1 is an error parameter with a value of 0.5.
S5: and (3) encoding the deviation generated in the step (S4), transmitting the encoded deviation to a microcontroller through serial port communication, decoding data by the microcontroller, and controlling the rotation of the double-shaft holder through PWN to correct the deviation.
The encoded data includes a header, a data length, x-axis deviation data, y-axis deviation data, and an ending symbol.
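The packet layout just described can be sketched as follows; only the field order (header, data length, x-axis deviation, y-axis deviation, end symbol) comes from the text, while the 0xAA/0x55 marker bytes and the little-endian signed 16-bit fields are assumptions:

```python
# Sketch of the serial deviation packet and its inverse.
import struct

HEADER, END = 0xAA, 0x55

def encode(dx, dy):
    payload = struct.pack('<hh', int(dx), int(dy))   # two signed 16-bit fields
    return bytes([HEADER, len(payload)]) + payload + bytes([END])

def decode(frame):
    """Inverse of encode(); raises on a malformed frame."""
    if frame[0] != HEADER or frame[-1] != END or frame[1] != len(frame) - 3:
        raise ValueError('malformed frame')
    return struct.unpack('<hh', frame[2:-1])

dx, dy = decode(encode(-12, 30))   # round-trips the deviation pair
```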
According to the invention, the camera follows the face by combining a face classifier, template matching, face motion prediction, and PWM motor control, and collects high-quality face feature data. Coarse face positioning with the matching algorithm eliminates a large number of interfering features and improves the speed and accuracy of the classifier. The iterative face prediction algorithm predicts the motion trend of the face, so the camera follower can be positioned in advance. Applied to vision-based driving fatigue detection, the invention captures face data of the driver in different poses, improves recognition accuracy, follows the target face in real time, and improves the quality of the acquired images.
Example 2
Referring to figs. 3-4, the invention provides a driver face tracking device, comprising: a base, a first pan-tilt motor 31, a pan-tilt bracket 6, a second pan-tilt motor 32, and a camera device. The first pan-tilt motor 31 is installed in the base; one end of the pan-tilt bracket 6 is in transmission connection with the output end of the first pan-tilt motor 31 and the other end with the second pan-tilt motor 32; the camera device is arranged on one side of the pan-tilt bracket 6, and the second pan-tilt motor 32 is installed in the camera device. Driven by the first pan-tilt motor 31, the pan-tilt bracket 6 rotates the camera device in the horizontal direction; driven by the second pan-tilt motor 32, the camera device rotates in the vertical direction, so the camera device can follow the target face in real time and improve the quality of the acquired images.
The base comprises a base body 2, a base lower cover 1, and a base upper cover 4; the bottom of the base body 2 is connected to the base lower cover 1 by bolts, and its top is fixedly connected to the base upper cover 4 by a buckle. This facilitates assembly and disassembly of the base and makes it easy to install, maintain, and remove the fittings inside the base.
The invention further comprises a microcontroller 15; the first pan-tilt motor 31, the second pan-tilt motor 32, and the camera device are electrically connected with the microcontroller 15, which enables their automatic control.
In another embodiment, the microcontroller 15 is mounted within the microcontroller housing 14, and a microcontroller fan 16 is fixedly mounted on the microcontroller 15. The fan 16 increases the heat dissipation of the microcontroller 15 and prevents damage from overheating.
In another embodiment, the device further comprises a first code wheel 51 and a second code wheel 52. The bottom end of the first code wheel 51 is in threaded connection with the output shaft of the first pan-tilt motor 31, and its top end is embedded in a groove at the bottom of the pan-tilt bracket 6; the first code wheel 51 is electrically connected with the microcontroller 15 and accurately measures the horizontal angular displacement of the first pan-tilt motor 31. One side of the second code wheel 52 is in threaded connection with the output shaft of the second pan-tilt motor 32, and its other side is embedded in a groove at the top of the pan-tilt bracket 6; the second code wheel 52 is electrically connected with the microcontroller 15 and accurately measures the vertical angular displacement of the second pan-tilt motor 32.
In another embodiment, the camera device comprises an outer housing and a camera 9, the camera 9 being mounted in the outer housing.
The outer housing comprises a camera device left cover 7, a camera device rear cover 8, a camera device right cover 10, and a camera device front cover 11. The right cover 10 is connected to the left cover 7 by a hidden buckle, the front cover 11 is connected to the left cover 7 by a slide groove, and the rear cover 8 is fixed to the left cover 7 by a slide groove. The outer housing can be easily disassembled, which facilitates installation and maintenance of the camera 9.
The invention has a simple structure, occupies little space, and is convenient to use. Driven by the first pan-tilt motor 31, the pan-tilt bracket 6 rotates the camera device in the horizontal direction; driven by the second pan-tilt motor 32, the camera device rotates in the vertical direction, so the camera device can follow the target face in real time and improve the quality of the acquired images. Applied to vision-based driving fatigue detection, the invention captures face data of the driver in different poses and improves recognition accuracy.
In this specification, the embodiments are described in a progressive manner, each focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and relevant details can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. A method for tracking the face of a driver, comprising the steps of:
(1) Acquiring image information containing the face of a driver in a driving environment by using a camera;
(2) If the current frame is the first frame of the video stream, performing a full-image search on the acquired image with a face classifier to obtain the face state information (x, y, v_x, v_y, w, h), cropping the face image and storing it as a template; if the current frame is not the first frame, matching the face template of the previous frame against the current frame near the previous frame's face coordinate position to obtain a coarse position, then running the face classifier within the coarsely positioned face region to obtain a fine position, and acquiring the face state information and the face template;
wherein x denotes the X-axis coordinate of the center point of the face regression frame, y denotes the Y-axis coordinate of the center point of the face regression frame, v_x denotes the velocity of the face regression frame along the X axis, v_y denotes the velocity of the face regression frame along the Y axis, w denotes the width of the face regression frame, and h denotes the height of the face regression frame;
(3) Extracting x, y, v_x, and v_y from the face state information to form the motion state vector (x, y, v_x, v_y), and estimating the face motion position with an iterative algorithm;
(4) Through the iteration of step (3), estimating the motion state vector (x, y, v_x, v_y) of the face image in the next frame, and calculating the deviation (dx, dy) between the center of the face image and the center of the image acquired by the camera, with the formula as follows:
wherein dx denotes the X-axis deviation between the center of the face frame and the center of the acquired image, dy denotes the Y-axis deviation between the center of the face frame and the center of the acquired image, x denotes the X-axis coordinate of the center point of the face frame, y denotes the Y-axis coordinate of the center point of the face frame, x_0 denotes the X-axis coordinate of the center point of the acquired image, y_0 denotes the Y-axis coordinate of the center point of the acquired image, and a1 is an error parameter;
(5) Encoding the deviation generated in step (4) and transmitting it to the microcontroller through serial port communication, the microcontroller decoding the data and controlling the rotation of the two-axis pan-tilt via PWM to correct the deviation;
the matching method of the face template and the face coordinate position in the step (2) in the current frame comprises the following steps: carrying out gray level conversion processing on the face template, then carrying out downsampling to 8 x 8 size, and calculating a downsampled pixel average value avg, wherein the calculation formula is as follows:
wherein x is ij Respectively representing the gray values of the pixels in the ith row and the jth column;
Each pixel is then compared with avg, with the formula:

T = 1 if x_ij ≥ avg, and T = 0 otherwise,

wherein T denotes the comparison result for the pixel and x_ij denotes the pixel value in row i, column j;
Comparing the pixels in order, left to right and top to bottom, generates a 64-digit sequence representing the image features.
2. The method according to claim 1, wherein in step (1) the resolution of the image acquired by the camera is 640 × 480 pixels, wherein the face region occupies a specified fraction of the image area.
3. The method of claim 1, wherein the iterative algorithm in step (3) estimates the face motion position by the formula:
S(k|k-1)=FS(k-1|k-1)
wherein S(k|k-1) denotes the face motion state estimation vector in the k-th frame image, F denotes the state transition matrix, and S(k-1|k-1) denotes the optimal face motion state estimation vector in the (k-1)-th frame image.
4. A method of face tracking for a driver according to claim 1 or 3, wherein the deviation data of step (5) is encoded, the encoded data comprising a header, a data length, x-axis deviation data, y-axis deviation data, and an ending symbol.
CN201910425924.9A 2019-05-21 2019-05-21 Driver face tracking device and method Active CN110099254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910425924.9A CN110099254B (en) 2019-05-21 2019-05-21 Driver face tracking device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910425924.9A CN110099254B (en) 2019-05-21 2019-05-21 Driver face tracking device and method

Publications (2)

Publication Number Publication Date
CN110099254A CN110099254A (en) 2019-08-06
CN110099254B (en) 2023-08-25

Family

ID=67448801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425924.9A Active CN110099254B (en) 2019-05-21 2019-05-21 Driver face tracking device and method

Country Status (1)

Country Link
CN (1) CN110099254B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112653844A (en) * 2020-12-28 2021-04-13 珠海亿智电子科技有限公司 Camera holder steering self-adaptive tracking adjustment method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003015816A (en) * 2001-06-29 2003-01-17 Honda Motor Co Ltd Face/visual line recognizing device using stereo camera
CN101216885A (en) * 2008-01-04 2008-07-09 中山大学 Video-based pedestrian face detection and tracking algorithm
CN102592146A (en) * 2011-12-28 2012-07-18 浙江大学 Face detection and camera tripod control method applied to video monitoring
CN104036237A (en) * 2014-05-28 2014-09-10 南京大学 Detection method of rotating human face based on online prediction
CN105635657A (en) * 2014-11-03 2016-06-01 航天信息股份有限公司 Omnidirectional camera gimbal interaction method and device based on face detection
CN105913028A (en) * 2016-04-13 2016-08-31 华南师范大学 Face tracking method and face tracking device based on face++ platform
CN106989251A (en) * 2017-05-11 2017-07-28 蔡子昊 High-performance intelligent camera gimbal
CN107466379A (en) * 2016-04-15 2017-12-12 深圳市大疆灵眸科技有限公司 Assembly, handheld gimbal structure, and photographing apparatus
CN108268825A (en) * 2016-12-31 2018-07-10 广州映博智能科技有限公司 Three-dimensional face tracking and expression recognition system based on mobile gimbal
CN108492315A (en) * 2018-02-09 2018-09-04 湖南华诺星空电子技术有限公司 Dynamic human face tracking method
CN108916587A (en) * 2018-07-05 2018-11-30 国网福建省电力有限公司 Transformer oil stain fluorescence imaging system based on the three-axis stabilized gimbal principle
CN208311848U (en) * 2018-04-23 2019-01-01 绍兴广电工程有限公司 Camera gimbal applied in a building security system
CN109391775A (en) * 2018-10-22 2019-02-26 哈尔滨工业大学(深圳) Intelligent camera gimbal control method and system based on face recognition
CN109702755A (en) * 2019-01-08 2019-05-03 中国矿业大学 Mobile photographing robot with gimbal and chassis capable of 360-degree rotation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on face detection and tracking algorithms based on active vision; Dong Enzeng; Yan Shengxu; Tong Jigang; Journal of System Simulation (Issue 05); full text *

Also Published As

Publication number Publication date
CN110099254A (en) 2019-08-06

Similar Documents

Publication Publication Date Title
CN107659774B (en) Video imaging system and video processing method based on multi-scale camera array
US10043245B2 (en) Image processing apparatus, imaging apparatus, control method, and information processing system that execute a re-anti-shake process to remove negative influence of an anti-shake process
CN109005334B (en) Imaging method, device, terminal and storage medium
US20170295373A1 (en) Encoding image data at a head mounted display device based on pose information
Chen et al. Videoinr: Learning video implicit neural representation for continuous space-time super-resolution
CN101340518B (en) Image stabilization method for a video camera
CN100530239C Video stabilization method based on feature matching and tracking
CN105100580B (en) Monitoring system and control method for monitoring system
US20110157396A1 (en) Image processing apparatus, image processing method, and storage medium
CN110248048B (en) Video jitter detection method and device
CN110764537B (en) Automatic tripod head locking system and method based on motion estimation and visual tracking
CN110520694A Visual odometry and implementation method thereof
JP2014176007A (en) Image pickup device and control method therefor
CN110099254B (en) Driver face tracking device and method
CN111062987A (en) Virtual matrix type three-dimensional measurement and information acquisition device based on multiple acquisition regions
CN110892444A (en) Method for removing object to be processed in image and device for executing method
CN104902182A (en) Method and device for realizing continuous auto-focus
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
CN111862169B Target tracking method and device, gimbal camera, and storage medium
US7936385B2 (en) Image pickup apparatus and imaging method for automatic monitoring of an image
CN110378183B (en) Image analysis device, image analysis method, and recording medium
CN110351508A Anti-shake processing method and apparatus based on RECORD mode, and electronic device
CN109218587A Image acquisition method and system based on a binocular camera
CN111144327B (en) Method for improving recognition efficiency of face recognition camera of self-service equipment
US20230148125A1 (en) Image processing apparatus and method, and image capturing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhuang Qianyang

Inventor after: Zhang Kehua

Inventor after: Wang Jiayi

Inventor after: Chen Qianqian

Inventor after: Zhu Miaomiao

Inventor after: Ding Lu

Inventor after: Huang Xun

Inventor before: Zhuang Qianyang

Inventor before: Wang Jiayi

Inventor before: Chen Qianqian

Inventor before: Zhu Miaomiao

Inventor before: Ding Lu

Inventor before: Huang Xun

GR01 Patent grant