CN106650665B - Face tracking method and device - Google Patents

Info

Publication number
CN106650665B
CN106650665B (application CN201611216574.8A)
Authority
CN
China
Prior art keywords
infrared image
visible light
face
light image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611216574.8A
Other languages
Chinese (zh)
Other versions
CN106650665A (en)
Inventor
卢宏刚
Current Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd and Beijing Megvii Technology Co Ltd
Priority to CN201611216574.8A
Publication of CN106650665A
Application granted
Publication of CN106650665B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths


Abstract

The invention discloses a face tracking method and a face tracking device. The face tracking method comprises the following steps: acquiring an infrared image and a visible light image; detecting a face region of a target object in the infrared image and acquiring the coordinates of the center point of the face region in the infrared image; acquiring, according to those coordinates, the coordinates of the corresponding point in the visible light image; acquiring the horizontal included angle and the vertical included angle of that corresponding point relative to the center point of the visible light image; and controlling a pan-tilt head to rotate based on the horizontal and vertical included angles so as to track the face of the target object. The face tracking method can automatically track and align with the face of the target object, improving alignment efficiency and user experience. The device has similar advantages.

Description

Face tracking method and device
Technical Field
The invention relates to the technical field of image recognition, and in particular to a face tracking method and a face tracking device capable of automatically tracking the direction of a face.
Background
With the development of the economy and of information technology, monitoring demands in fields such as access control, security and counter-terrorism place higher requirements on the intelligence of monitoring devices, while continuing advances in computer video/image technology, control processing and communications provide the conditions for such intelligence. Introducing artificial intelligence into the monitoring field is a clear trend; in particular, face recognition monitoring systems have great market demand in identity recognition, security and public-order applications, and are predicted to form an industry chain at the billion-RMB level. Designing a high-quality, well-engineered face recognition system is therefore a strong driving force for realizing this industry early.
Current monitoring devices capable of face recognition generally require the recognized subject to actively walk up to the camera and face it directly. The user experience is poor: on the one hand, not all recognized subjects are willing to align themselves with the camera; on the other hand, active alignment is slow.
Therefore, it is desirable to provide a face tracking method and apparatus to at least partially solve the above-mentioned problems.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description section. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In order to at least partially solve the above problems, the present invention provides a face tracking method, comprising: acquiring an infrared image and a visible light image, which are respectively acquired by an infrared image sensor and a visible light image sensor; detecting a face region of a target object in the infrared image and acquiring the coordinates of the center point of the face region in the infrared image; acquiring, according to those coordinates, the coordinates of the corresponding point of the face-region center in the visible light image; acquiring the horizontal included angle and the vertical included angle of that corresponding point relative to the center point of the visible light image, wherein the horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical included angle is the angle between that same line and the vertical direction of the visible light image; and controlling a pan-tilt head to rotate based on the horizontal and vertical included angles so as to track the face of the target object.
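As a concrete illustration of the angle step, the two included angles follow from simple pixel geometry. The sketch below is a minimal, hypothetical implementation; the function name, the image-size convention, and the use of degrees are assumptions, not specified by the patent.

```python
import math

def included_angles(point, image_size):
    """Angles (degrees) of the line joining `point` to the image center,
    measured against the image's horizontal and vertical axes.

    `point` is the corresponding point in the visible light image;
    `image_size` is (width, height) in pixels.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx, dy = point[0] - cx, point[1] - cy
    if dx == 0 and dy == 0:
        return 0.0, 0.0  # point already at the image center
    # Angle between the connecting line and the horizontal direction
    horizontal = math.degrees(math.atan2(abs(dy), abs(dx)))
    # The vertical included angle is complementary to the horizontal one
    vertical = 90.0 - horizontal
    return horizontal, vertical
```

The signed offsets `dx`/`dy` (dropped here by `abs`) would additionally tell the pan-tilt controller which direction to rotate.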
Illustratively, detecting the face region of the target object in the infrared image and acquiring the coordinates of its center point comprises: acquiring the coordinates, in the infrared image, of the thermal radiation points sensed by the infrared image sensor; marking the pixel values at those coordinates as 1 and taking the region whose pixel value is 1 as the target object region in the infrared image; extracting a preset area of the target object region as the face region in the infrared image; and taking the coordinate values of the center point of that face region as the coordinates of the center point of the face region in the infrared image.
For example, obtaining the coordinate values of the center point of the face region in the infrared image comprises: intercepting the circumscribed rectangle of the face region in the infrared image and taking the coordinates of the center point of that circumscribed rectangle as the coordinates of the center point of the face region in the infrared image.
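The detection steps above (binarizing the thermal radiation points, taking the hot region as the target object, extracting a preset area as the face, and taking the center of its circumscribed rectangle) can be sketched as follows. The radiation threshold and the fraction of the body treated as the face are hypothetical parameters, since the patent does not fix either; the interpretation of the "preset area" as the top portion of the hot region is likewise an assumption.

```python
import numpy as np

def face_center_in_ir(ir_frame, threshold, face_fraction=0.3):
    """Return the center of the face region's circumscribed rectangle in
    an infrared frame, or None if no target object is detected."""
    # Mark thermal radiation points as 1 (the target object region)
    mask = (ir_frame >= threshold).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None  # no hot region, hence no target object
    # Treat the top portion of the hot region as the face (assumed rule)
    top, bottom = ys.min(), ys.max()
    face_bottom = top + max(1, int((bottom - top + 1) * face_fraction))
    face_xs = xs[ys < face_bottom]
    face_ys = ys[ys < face_bottom]
    # Circumscribed rectangle of the face region; its center is the result
    x0, x1 = face_xs.min(), face_xs.max()
    y0, y1 = face_ys.min(), face_ys.max()
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
```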
Illustratively, obtaining, according to the coordinates of the center point of the face region in the infrared image, the coordinates of its corresponding point in the visible light image comprises: transforming the coordinates of the center point of the face region in the infrared image into the corresponding coordinates in the visible light image based on a coordinate transformation relation.
Illustratively, the method further comprises acquiring the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object, wherein acquiring the coordinate transformation relation comprises: acquiring the visible light image including the first reference object and the infrared image including the first reference object by the visible light image sensor and the infrared image sensor, respectively, wherein the first reference object is placed within the viewing angle range of both sensors and at least one reference point is marked on the first reference object; acquiring the coordinates of the at least one reference point in the visible light image including the first reference object and in the infrared image including the first reference object, respectively; and obtaining the coordinate transformation relation from the coordinates of the at least one reference point in the two images.
According to another aspect of the present invention, there is also provided a face tracking apparatus, comprising: an image acquisition module for acquiring an infrared image and a visible light image; an infrared image face center point coordinate acquisition module for detecting a face region of a target object in the infrared image and acquiring the coordinates of the center point of the face region in the infrared image; a visible light image corresponding coordinate acquisition module for acquiring, according to those coordinates, the coordinates of the corresponding point in the visible light image; an angle detection module for acquiring the horizontal included angle and the vertical included angle of the corresponding point relative to the center point of the visible light image, wherein the horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical included angle is the angle between that same line and the vertical direction of the visible light image; and a pan-tilt rotation control module for controlling the pan-tilt head to rotate based on the horizontal and vertical included angles so as to track the face of the target object.
Illustratively, the face tracking apparatus further includes: the infrared image sensor is used for acquiring the infrared image; and a visible light image sensor for acquiring the visible light image.
Illustratively, the infrared image human face center point coordinate acquisition module comprises: the coordinate acquisition sub-module is used for acquiring the coordinates of the heat radiation point sensed by the infrared image sensor in the infrared image; the pixel marking sub-module is used for marking a pixel value corresponding to the coordinate of the thermal radiation point in the infrared image as 1, and taking an area with the pixel value of 1 as a target object area in the infrared image; the human face region extraction submodule is used for extracting a preset region in the target object region as a human face region in the infrared image; and the infrared image face central point coordinate calculation submodule is used for acquiring the coordinate value of the central point of the face area in the infrared image and taking the coordinate value as the coordinate of the central point of the face area in the infrared image.
Illustratively, the infrared image face center point coordinate calculation submodule includes an intercepting unit, and the intercepting unit is configured to intercept a circumscribed rectangle of a face region in the infrared image, and use coordinates of a center point of the circumscribed rectangle as coordinates of the center point of the face region in the infrared image.
Illustratively, the visible light image corresponding coordinate acquiring module transforms the coordinates of the center point of the face area in the infrared image into corresponding coordinates of the corresponding point in the visible light image based on a coordinate transformation relationship.
Illustratively, the face tracking apparatus further includes: a coordinate transformation relation acquisition module for acquiring the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object, wherein the coordinate transformation relation acquisition module includes:
a reference object image acquisition sub-module, configured to acquire the visible light image including the first reference object and the infrared image including the first reference object through the visible light image sensor and the infrared image sensor, respectively, where the first reference object is placed within a viewing angle range of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object; a reference point coordinate acquisition submodule for acquiring coordinates of the at least one reference point in the visible light image including the first reference object and the infrared image including the first reference object, respectively; and a coordinate transformation relation obtaining sub-module, configured to obtain the coordinate transformation relation through the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object.
Illustratively, the image acquisition module, the infrared image face center point coordinate acquisition module, the visible light image corresponding coordinate acquisition module, the angle detection module and the pan-tilt rotation control module may be implemented by the same processor.
According to the face tracking method and device, an infrared image is formed by sensing the thermal radiation of the human body; the coordinates of the center point of the face region are obtained from the infrared image and converted into the coordinates of the corresponding point in the visible light image; the horizontal and vertical included angles of that corresponding point relative to the center point of the visible light image are then obtained, and the pan-tilt head is controlled to rotate based on those angles so as to track the face of the target object. In other words, the face tracking method and device can automatically track and align with the face of the recognized target object without requiring the subject to walk up to the camera, shortening alignment time and improving the user experience.
Drawings
The following drawings of embodiments of the invention are included as part of the present invention for an understanding of the invention. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic block diagram of an exemplary electronic device for implementing a face tracking method and apparatus in accordance with embodiments of the present invention;
FIG. 2 is a schematic flow chart of a face tracking method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for acquiring the coordinates of the center point of a face region in an infrared image according to an embodiment of the present invention;
FIG. 4 is a block diagram of a schematic structure of a face tracking apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of an infrared image human face center coordinate acquisition module according to an embodiment of the present invention;
FIG. 6 is a schematic structural block diagram of a coordinate transformation relation acquisition module according to an embodiment of the present invention; and
FIG. 7 is a schematic structural block diagram of a face tracking apparatus according to an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF INVENTION
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in detail so as not to obscure the embodiments of the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, the size and relative sizes of components, elements, and the like may be exaggerated for clarity. Like reference numerals refer to like elements throughout.
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. The described embodiments are merely a subset of the embodiments of the invention, not all of them, and the invention is not limited to the example embodiments described herein. All other embodiments that a person skilled in the art can derive from the embodiments described herein without inventive step shall fall within the scope of protection of the invention.
First, an exemplary electronic device 100 for implementing a face tracking method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more storage devices 104, input/output devices 106, a communication interface 108, one or more Image Signal Processors (ISPs) 110, a visible light image sensor 114, and an infrared image sensor 116, interconnected via a bus system 112 and/or another connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in FIG. 1 are exemplary, not limiting; the electronic device may have other components and structures, or omit some of those shown (for example, the communication interface 108 may be included or omitted as required).
The processor 102 generally represents any type or form of processing unit capable of processing data or interpreting and executing instructions. In general, the processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions. In particular embodiments, processor 102 may receive instructions from a software application or module. These instructions may cause processor 102 to perform the functions of one or more of the example embodiments described and/or illustrated herein.
The storage device 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory; non-volatile memory may include, for example, Read Only Memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality of the embodiments of the invention described below. Various applications and data, such as data used and/or generated by those applications, may also be stored in the computer-readable storage medium.
The input/output device 106 may be a device used by a user to input instructions and output various information to the outside, for example, the input device may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The output devices may include one or more of a display, speakers, and the like.
Communication interface 108 broadly represents any type or form of adapter or communication device capable of facilitating communication between example electronic device 100 and one or more additional devices. For example, the communication interface 108 may facilitate communication between the electronic device 100 and front-end or accessory electronic devices as well as back-end servers or clouds. Examples of communication interface 108 include, but are not limited to, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, and any other suitable interface. In an embodiment, the communication interface 108 provides direct connection to a remote server/remote head end device through direct connection to a network such as the internet. In particular embodiments, communication interface 108 provides direct connection to a remote server/remote head end device through direct connection to a dedicated network, such as a video surveillance network, a skynet system network, or the like. Communication interface 108 may also indirectly provide such connection through any other suitable connection.
The Image Signal Processor (ISP) 110 controls the visible light image sensor 114 and the infrared image sensor 116 to acquire images, and adjusts parameters such as digital gain, analog gain, backlight compensation, wide dynamic range, AE (automatic exposure) parameters, and image-quality parameters such as brightness and contrast. It generates the visible light exposure image and the infrared image from the output signals of the visible light image sensor 114 and the infrared image sensor 116, and transmits them to the processor 102 through a connection unit such as the bus system 112 for subsequent processing. The ISP 110 may also accept commands or instructions from the processor 102 to adjust the two sensors. One or more ISPs 110 may be provided as needed, matching the numbers of visible light image sensors 114 and infrared image sensors 116; the ISP 110 may be integrated with the processor 102 (for example as a System On Chip, SOC) or provided separately and connected through a connection unit such as the bus system 112.
The visible light image sensor 114 may capture images (e.g., photographs, videos) with the visible-light exposure desired by the user and store the captured images in the storage device 104 for use by other components. Examples of the visible light image sensor 114 include an RGB sensor, an RGBW sensor, or a black-and-white sensor (which captures only black-and-white images); each forms a color or black-and-white image/video by sensing visible light, and is therefore referred to herein as a visible light image sensor. The images/videos captured by such sensors are referred to as visible light images/videos or visible light exposure images/videos.
The infrared image sensor 116 is used to collect infrared images. Because infrared imaging is based on thermal radiation, infrared images can be collected regardless of ambient light intensity.
Exemplary electronic devices for implementing the face tracking method and apparatus according to the embodiments of the present invention may be implemented as, for example, a smart phone, a tablet computer, an image acquisition end of an access control system, an image acquisition end of a road monitoring system, an image acquisition end of a security system, and a backend control processing end or a server end or a cloud end in various monitoring, security and protection fields, etc.
Fig. 2 is a schematic flow chart of a face tracking method according to an embodiment of the present invention. A face tracking method according to an embodiment of the present invention will be described below with reference to fig. 2.
As shown in fig. 2, the face tracking method disclosed in the embodiment of the present invention includes:
first, in step S201, an infrared image and a visible light image are acquired, and the infrared image and the visible light image are respectively acquired by an infrared image sensor and a visible light image sensor.
An infrared image is formed from thermally radiated infrared light. Unlike visible-light imaging, infrared imaging is not affected by ambient light intensity, so a clear infrared image can be obtained in dim light. In the embodiment of the invention, the image signal processor may control the infrared image sensor to acquire an infrared image of the target object: the infrared image sensor senses the infrared rays produced by thermal radiation within its viewing angle range and converts the infrared signal into an electrical signal, which the image signal processor processes into an infrared image. In one embodiment, the infrared image includes the target object.
Similarly, the image signal processor may control the visible light image sensor to acquire a visible light image of the target object: the visible light image sensor senses the light within its viewing angle range and converts the optical signal into an electrical signal, which the image signal processor processes into a visible light image. In one embodiment, the visible light image includes the target object.
In one embodiment, the infrared image and the visible light image are acquired by the infrared image sensor and the visible light image sensor, respectively, at the same time.
Next, in step S202, a face region of the target object in the infrared image is detected and coordinates of a center point of the face region in the infrared image are acquired.
That is, the acquired infrared image is processed, and when a target object is detected in it, the coordinates of the center point of the face region of the target object in the infrared image are acquired. How these coordinates are acquired is described in detail later.
Next, in step S203, according to the coordinates of the center point of the face region in the infrared image, the corresponding coordinates of the center point of the face region in the infrared image at the corresponding point in the visible light image are obtained.
Because the visible light image sensor and the infrared image sensor are not located at the same position, the coordinates of the face-region center in the infrared image are not identical to the coordinates of the face-region center in the visible light image. In this step, therefore, a coordinate transformation is applied to the coordinates of the face-region center in the infrared image to obtain the coordinates of its corresponding point in the visible light image. In one embodiment, that corresponding point can be regarded as the center point of the face region in the visible light image. The transformation converts the coordinates in the infrared image into the corresponding coordinates in the visible light image based on a coordinate transformation relation. For example, in an embodiment of the present invention, the transformation amount and/or transformation function given by the coordinate transformation relation is superimposed on the coordinates of the face-region center in the infrared image to obtain the corresponding coordinates in the visible light image.
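A minimal sketch of superimposing the transformation amount and/or transformation function on the infrared coordinates follows. The function name and the per-axis data structure for the relation are hypothetical; the patent does not prescribe a representation.

```python
def apply_transform_relation(ir_point, relation):
    """Map a point in infrared-image coordinates to visible-light-image
    coordinates.  Each per-axis entry of `relation` is either a constant
    transformation amount to superimpose, or a callable transformation
    function applied to that coordinate."""
    return tuple(r(c) if callable(r) else c + r
                 for c, r in zip(ir_point, relation))
```

For a pure horizontal offset (the case worked out later in the text), the relation reduces to `(dPix, 0)`.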
The coordinate transformation relation may give the transformation amount in only one direction, or may give the transformation amount in a plurality of directions. In one embodiment, the coordinate transformation relation may only give a transformation function in one direction, or may give a transformation function in a plurality of directions. In an embodiment of the present invention, the coordinate transformation relation may be acquired based on an infrared image including a first reference object and a visible light image including the first reference object.
Illustratively, in an example of the present invention, the coordinate transformation relation may be obtained as follows:
First, the visible light image including the first reference object and the infrared image including the first reference object are acquired by the visible light image sensor and the infrared image sensor, respectively, with the first reference object placed within the viewing angle range of both sensors and at least one reference point marked on it. In an example of the present invention, the first reference object is a piece of white paper with four black dots marked at its four corners as reference points, and a heat source is applied behind each dot so that it appears in the infrared image. Preferably, the infrared image sensor and the visible light image sensor have the same resolution and the same horizontal and vertical viewing angles, and are mounted level on the circuit board as close together as possible to reduce the error of the coordinate transformation. When the reference points themselves emit no thermal radiation, a heat source is applied behind each reference point to heat it; when a reference point itself radiates heat, no heating is required.
The visible light image including the first reference object and the infrared image including the first reference object can be acquired by the visible light image sensor and the infrared image sensor, respectively. For example, a visible light image including the white paper and an infrared image including the white paper are acquired by the visible light image sensor and the infrared image sensor, respectively. In an example of the present invention, the visible light image sensor and the infrared image sensor may be controlled by the image signal processor to capture images, so as to respectively acquire a visible light image including the white paper and an infrared image including the white paper.
Then, the at least one reference point and its corresponding coordinates are acquired from the visible light image including the first reference object and from the infrared image including the first reference object, respectively. For example, the coordinates of the four black dots at the four corners are acquired from the infrared image including the white paper and from the visible light image including the white paper. The black dot coordinates can be obtained using an algorithm together with the coordinate system of each image.
Finally, the coordinate transformation relation is obtained from the coordinates of the at least one reference point in the visible light image including the first reference object and its coordinates in the infrared image including the first reference object. For example, after obtaining the black dots and their coordinates in the two images, the coordinate transformation relation may be derived from those coordinates. In an example of the present invention, the visible light image sensor and the infrared image sensor are disposed on the same horizontal plane; therefore, the vertical coordinates in the coordinate systems of the two sensors are identical and only the horizontal coordinates differ, so a coordinate transformation formula in the horizontal direction can be obtained by comparing the horizontal coordinate difference (in pixels) dPix of corresponding black dots in the two images, and this formula serves as the coordinate transformation relation. Illustratively, the coordinate transformation formula in the horizontal direction (the X direction) is, for example, X1 = X2 + dPix, where X1 is the horizontal coordinate value in the visible light image and X2 is the horizontal coordinate value in the infrared image. It will be appreciated that dPix may be positive or negative depending on the relative positions of the visible light image sensor and the infrared image sensor.
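As a rough illustration (not part of the patent text), the calibration just described could be sketched in Python as follows; the function names and the `(x, y)` point format are assumptions:

```python
# Hypothetical sketch: derive the horizontal transformation amount dPix
# from matched reference points, then apply X1 = X2 + dPix.

def horizontal_offset(ir_points, vis_points):
    """Average horizontal pixel difference dPix between matched reference
    points in the infrared image and in the visible light image."""
    assert len(ir_points) == len(vis_points)
    diffs = [vx - ix for (ix, _iy), (vx, _vy) in zip(ir_points, vis_points)]
    return sum(diffs) / len(diffs)

def ir_to_visible(ir_coord, d_pix):
    """Transform an infrared-image coordinate into the visible light image.
    Vertical coordinates are assumed equal because the two sensors sit on
    the same horizontal plane."""
    x, y = ir_coord
    return (x + d_pix, y)
```

With the four black-dot reference points, `horizontal_offset` averages the per-point differences, which also absorbs small marking errors.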
It is to be understood that the above-described arrangement of the visible light image sensor and the infrared image sensor and the method for acquiring the coordinate transformation relationship are merely an example, and those skilled in the art may acquire the coordinate transformation relationship by other methods based on a similar principle. In addition, the white paper is taken as the first reference object, and four black dots are marked at four corners of the white paper as the reference points, which is also exemplary, and a person skilled in the art can use other objects as the reference objects and select one or more reference points at any position on the reference objects.
In step S204, a horizontal included angle and a vertical included angle of a corresponding point of the center point of the face region in the infrared image in the visible light image with respect to the center point of the visible light image are obtained, the horizontal included angle is an included angle of a connecting line of the corresponding point in the visible light image and the center point of the visible light image with respect to a horizontal direction of the visible light image, and the vertical included angle is an included angle of a connecting line of the corresponding point in the visible light image and the center point of the visible light image with respect to a vertical direction of the visible light image.
That is, based on the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face region in the infrared image, and on the coordinates of the center point of the visible light image, the horizontal included angle and the vertical included angle of those corresponding coordinates relative to the center point of the visible light image are acquired. The horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image; the vertical included angle is the angle between that line and the vertical direction of the visible light image.
In an example of the present invention, the horizontal included angle and the vertical included angle of the corresponding point in the visible light image of the target object relative to the center point of the visible light image are calculated by a suitable algorithm based on the corresponding coordinates in the visible light image and the coordinates of the center point of the visible light image.
The coordinates of the center point of the visible light image may be obtained based on the coordinate system or resolution of the visible light image. For example, in an example of the present invention, the coordinates are in units of pixels. If the resolution of the visible light image sensor is fixed at W×H, then when W and H are even the coordinates of the center point of the visible light image are (W/2, H/2), and when W and H are odd they are ((W-1)/2, (H-1)/2) or ((W+1)/2, (H+1)/2). Of course, in other embodiments, the center point may be obtained by other suitable methods according to different settings.
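The patent does not fix a specific algorithm for the two included angles; one plausible sketch, assuming a simple pinhole camera model and known horizontal and vertical viewing angles, is:

```python
import math

def angles_to_center(point, width, height, h_fov_deg, v_fov_deg):
    """Horizontal and vertical angles (degrees) of `point` relative to the
    image center, under a pinhole model with the given viewing angles."""
    cx, cy = width / 2, height / 2
    # focal lengths in pixels derived from the horizontal/vertical FOV
    fx = (width / 2) / math.tan(math.radians(h_fov_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(v_fov_deg) / 2)
    h_angle = math.degrees(math.atan((point[0] - cx) / fx))
    v_angle = math.degrees(math.atan((point[1] - cy) / fy))
    return h_angle, v_angle
```

A point at the image center yields (0, 0); a point at the horizontal edge yields half the horizontal viewing angle.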
In step S205, the pan/tilt head is controlled to rotate based on the horizontal angle and the vertical angle, so as to track the face of the target object.
The pan/tilt head is a device that supports the infrared image sensor/acquisition unit and the visible light image sensor/acquisition unit and adjusts their image acquisition areas, and it can adopt various suitable structures. After the horizontal included angle and the vertical included angle of the corresponding point in the visible light image relative to the center point of the visible light image are determined, the pan/tilt head can be controlled to rotate based on these two angles, so as to adjust the image acquisition area of the visible light image sensor and track the face of the target object. For example, the rotation of the pan/tilt head is controlled based on the horizontal included angle and the vertical included angle so that the visible light image sensor is aimed at the face region of the target object and the face region is located approximately at the center of the newly acquired visible light image, thereby realizing tracking alignment of the face region of the target object.
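The control step above can be sketched minimally as follows; `PanTilt` is an illustrative stand-in for a real gimbal driver, not an API defined by the patent:

```python
# Hypothetical sketch: rotating the pan/tilt head by the measured angles
# moves the face region toward the center of the newly acquired image.

class PanTilt:
    def __init__(self):
        self.pan = 0.0   # accumulated horizontal rotation, degrees
        self.tilt = 0.0  # accumulated vertical rotation, degrees

    def rotate(self, h_angle, v_angle):
        """Rotate by the given horizontal and vertical included angles."""
        self.pan += h_angle
        self.tilt += v_angle

def track_step(pan_tilt, h_angle, v_angle):
    # Rotating by the measured offsets re-centers the face region.
    pan_tilt.rotate(h_angle, v_angle)
```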
In one embodiment, the relative positions of the visible light image sensor and the infrared image sensor are fixed as in the step of acquiring the transformation relation (for example, horizontally arranged and as close together as possible), so that when the pan/tilt head rotates, it adjusts the acquisition regions of the infrared image sensor and the visible light image sensor while keeping their relative positions unchanged.
For example, the face tracking method according to the embodiment of the present invention may be deployed at a face image acquisition end; for example, in the security or financial application field, it may be deployed at the image acquisition end of an access control system or an image acquisition system.
Alternatively, the face tracking method according to the embodiment of the present invention may also be deployed in a distributed manner at the server end (or cloud end) and the image acquisition end. For example, in the field of financial applications, target object tracking and image acquisition may be performed at the image acquisition end (e.g., a camera), while pan/tilt control is performed at the server end (or cloud end).
According to the face tracking method of the embodiment of the present invention, the thermal radiation of a human body is sensed to form an infrared image, the coordinates of the center point of the face region are obtained from the infrared image, those coordinates are transformed into the corresponding coordinates of the corresponding point in a visible light image, the horizontal included angle and the vertical included angle of the corresponding point relative to the center point of the visible light image are then obtained, and the pan/tilt head is controlled to rotate based on these two angles so as to track the face of the target object. That is, the face tracking method can automatically track and align the face of the identified target object without requiring the target object to actively walk to the front of the camera for alignment, thereby reducing the alignment time and improving the user experience.
Fig. 3 is a schematic flowchart of a coordinate acquisition method of a center point of a face area in an infrared image according to an embodiment of the present invention. A method for acquiring coordinates of a center point of a face area in an infrared image according to an embodiment of the present invention is described below with reference to fig. 3.
As shown in fig. 3, the method for detecting a face area of a target object in an infrared image and acquiring coordinates of a center point of the face area in the infrared image disclosed in the embodiment of the present invention includes:
first, in step S301, coordinates of a heat radiation point sensed by the infrared image sensor in the infrared image are acquired.
The infrared image is formed by sensing heat-radiated infrared rays, so that when the target object is a human body, the infrared image is formed by sensing the infrared rays radiated by the human body. Therefore, the human body contour can be obtained from the coordinates of the heat radiation points in the infrared image. In the embodiment of the present invention, the position of each heat radiation point in the coordinate system of the infrared image can be determined by an algorithm to obtain its coordinates in the infrared image.
In step S302, the pixel values corresponding to the coordinates are marked as 1. That is, the pixel value corresponding to the coordinates of each heat radiation point in the infrared image is marked as 1, and the region with pixel value 1 is taken as the target object region (human body contour) in the infrared image. Further, an existing graph segmentation technique (for example, the Graph Cut algorithm) can be applied to extract the region with pixel value 1, so as to obtain an infrared image containing only the target object region.
Illustratively, in one example, when the pixel values corresponding to the coordinates of all the heat radiation points in the infrared image are marked as 1, the pixel values of other areas are marked as 0. Or, in another embodiment, the default of the pixel values of all the pixels may be 0, and after the coordinate values of the thermal radiation points in the infrared image are obtained, the pixel values corresponding to the coordinates of all the thermal radiation points in the infrared image are marked as 1, so that the target object region may also be obtained from the infrared image.
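A minimal sketch of steps S301–S302, assuming the raw infrared frame is a NumPy array and that heat radiation points are pixels above an intensity threshold (the threshold value is an assumed tuning parameter, not specified by the patent):

```python
import numpy as np

def target_object_mask(ir_frame, threshold):
    """Mark pixels at heat radiation points as 1 and all others as 0,
    yielding the target object region (human body contour)."""
    mask = np.zeros_like(ir_frame, dtype=np.uint8)
    mask[ir_frame >= threshold] = 1  # heat radiation points -> 1
    return mask
```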
In step S303, a predetermined region of the target object region is extracted as a face region in the infrared image.
That is, a predetermined region is set according to the relative position relationship between the face region and the target object region in the general infrared image, and after the target object region is acquired in step S302, the predetermined region of the target object region is extracted as the face region in the infrared image. In an example of the present invention, the predetermined region is a region formed by the first 50 rows of pixels with pixel values of 1 from top to bottom in the target object region, that is, a region with the first 50 rows of pixels of 1 in the target object region is extracted as a face region. Of course, in other embodiments, other numbers of rows of pixel regions may be selected as the face region, or other methods may be used to extract the face region from the target object region.
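Step S303, which keeps the first 50 rows, from top to bottom, of the target object region that contain pixels marked 1, might be sketched as below; the row-selection logic is one interpretation of the example above:

```python
import numpy as np

def face_region_rows(mask, n_rows=50):
    """Keep only the first `n_rows` rows (top to bottom) of the 0/1 target
    object mask that contain at least one pixel marked 1."""
    rows_with_body = np.flatnonzero(mask.any(axis=1))  # rows containing 1s
    selected = rows_with_body[:n_rows]
    face = np.zeros_like(mask)
    face[selected, :] = mask[selected, :]
    return face
```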
In step S304, coordinate values of a center point of a face region in the infrared image are acquired as coordinates of the center point of the face region in the infrared image.
Namely, the central point position of the face area in the infrared image is obtained. For example, in an example of the invention, a circumscribed rectangle of the face region in the infrared image may be intercepted by a suitable algorithm, and coordinates of a center point of the circumscribed rectangle may be used as coordinates of the center point of the face region in the infrared image. Of course, in other embodiments, other circumscribed or inscribed graphs of the face region may be intercepted to obtain the coordinates of the central point, or the coordinates of the central point of the face region may be obtained by other suitable methods.
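The circumscribed-rectangle example of step S304 could look like this sketch, assuming a 0/1 face-region mask:

```python
import numpy as np

def face_center(face_mask):
    """Center (x, y) of the circumscribed (bounding) rectangle of the
    nonzero pixels in the face-region mask."""
    ys, xs = np.nonzero(face_mask)
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)
```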
For example, the coordinate acquisition method of the center point of the face area in the infrared image according to the embodiment of the present invention may be implemented in a device, an apparatus, or a system having a visible light image sensor, an infrared image sensor, and a processor.
The method for acquiring the coordinates of the center point of the face region in the infrared image according to the embodiment of the present invention can be deployed at a face image acquisition end; for example, in the security or financial application field, it can be deployed at the image acquisition end of an access control system or an image recognition system. Alternatively, it can be deployed at an image processing end or control end, such as a server end or cloud end; that is, the image is collected at the front end and the coordinates of the center point of the face region are obtained at the back end.
According to the coordinate acquisition method of the center point of the face region in the infrared image of the embodiment of the present invention, the thermal radiation of a human body is sensed to form the infrared image, the target object region is extracted from the infrared image, the face region is extracted from the target object region, and the coordinates of the center point of the face region are then obtained, so that the rotation angle of the pan/tilt head can be obtained based on those coordinates and the pan/tilt head can be controlled to rotate to track the face of the target object.
Fig. 4 is a schematic structural block diagram of a face tracking device according to an embodiment of the present invention. A face tracking apparatus according to an embodiment of the present invention is described below with reference to fig. 4.
As shown in fig. 4, the face tracking apparatus 400 according to the embodiment of the present invention includes an image obtaining module 410, an infrared image face center point coordinate obtaining module 420, a visible light image corresponding coordinate obtaining module 430, an angle detecting module 440, and a pan/tilt head rotation control module 450. The various modules may perform the various steps/functions of the face tracking method described above in connection with fig. 2-3, respectively. Only the main functions of the units of the face tracking apparatus 400 are described below, and details that have been described above are omitted.
The image acquisition module 410 is used for acquiring infrared images and visible light images. The image acquisition module 410 acquires an infrared image of the target object acquired by the infrared image sensor and a visible light image of the target object acquired by the visible light image sensor. The image acquisition module 410 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform step S201 in the face tracking method according to the embodiment of the present invention.
The infrared image face center point coordinate acquisition module 420 detects a face area of a target object from the infrared image acquired by the image acquisition module 410 and acquires coordinates of a center point of the face area in the infrared image. The infrared image face center coordinate acquiring module 420 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform step S202 in the face tracking method according to the embodiment of the present invention, and steps S301 to S304 in the infrared image face center coordinate acquiring method according to the embodiment of the present invention.
The visible light image corresponding coordinate obtaining module 430 is configured to obtain, according to the coordinates of the center point of the face region in the infrared image acquired by the infrared image face center point coordinate acquisition module 420, the corresponding coordinates, in the visible light image, of the corresponding point of that center point. In an example of the present invention, the visible light image corresponding coordinate obtaining module 430 transforms the coordinates of the center point of the face area in the infrared image into the corresponding coordinates of the corresponding point in the visible light image based on a coordinate transformation relation. The visible light image corresponding coordinate obtaining module 430 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform step S203 in the face tracking method according to the embodiment of the present invention.
The angle detection module 440 is configured to acquire the horizontal included angle and the vertical included angle of the corresponding point, in the visible light image, of the center point of the face region in the infrared image relative to the center point of the visible light image. The horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image; the vertical included angle is the angle between that line and the vertical direction of the visible light image. The angle detection module 440 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform step S204 in the face tracking method according to the embodiment of the present invention.
The pan/tilt rotation control module 450 controls the pan/tilt head to rotate based on the horizontal included angle and the vertical included angle to track the face of the target object. The pan/tilt rotation control module 450 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing the program instructions stored in the storage device 104, and may execute step S205 in the face tracking method according to the embodiment of the present invention.
Further, in an example of the present invention, the face tracking apparatus as shown in fig. 4 may further include a coordinate transformation relation obtaining module 460, where the coordinate transformation relation obtaining module 460 is configured to obtain the coordinate transformation relation based on the infrared image including the first reference object and the visible light image including the first reference object. The coordinate transformation relation obtaining module 460 may be implemented by the processor 102 and/or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform the aforementioned coordinate transformation relation acquisition method, so that the visible light image corresponding coordinate obtaining module 430 can transform the coordinates of the center point of the face area in the infrared image into the corresponding coordinates of the corresponding point in the visible light image based on the coordinate transformation relation.
Illustratively, as shown in fig. 5, in an embodiment of the present invention, the infrared image face center point coordinate acquisition module 420 includes a coordinate acquisition sub-module 4201, a pixel labeling sub-module 4202, a face region extraction sub-module 4203, and an infrared image face center point coordinate calculation sub-module 4204.
The coordinate obtaining sub-module 4201 is used for obtaining the coordinates of the heat radiating point sensed by the infrared image sensor in the infrared image. The coordinate acquisition sub-module 4201 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104, and may perform step S301 in the coordinate acquisition method of the center point of the face area in the infrared image according to the embodiment of the present invention.
The pixel labeling sub-module 4202 is configured to label a pixel value corresponding to the coordinate of the thermal radiation point in the infrared image to obtain a target object region. Illustratively, the pixel labeling sub-module 4202 labels a pixel value corresponding to the coordinate of the thermal radiation point in the infrared image as 1 and takes a region with a pixel value of 1 as a target object region in the infrared image. The pixel labeling sub-module 4202 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104, and may perform step S302 in the coordinate acquisition method of the center point of the face area in the infrared image according to the embodiment of the present invention.
The face region extraction sub-module 4203 is configured to extract a predetermined region in the target object region as a face region in the infrared image. The face region extraction sub-module 4203 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage 104, and may perform step S303 in the coordinate acquisition method of the center point of the face region in the infrared image according to the embodiment of the present invention.
The infrared image face central point coordinate calculation submodule 4204 is configured to obtain coordinate values of a central point of a face region in the infrared image, and use the coordinate values as coordinates of the central point of the face region. In an example of the present invention, the infrared image face center point coordinate calculation sub-module 4204 includes an intercepting unit, and the intercepting unit is configured to intercept a circumscribed rectangle of a face region in the infrared image, and use coordinates of a center point of the circumscribed rectangle as coordinates of a center point of the face region in the infrared image. The infrared face center point coordinate calculation sub-module 4204 may be implemented by the processor 102/image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104, and may perform step S304 in the coordinate acquisition method of the center point of the face area in the infrared image according to the embodiment of the present invention.
Further, in an embodiment of the present invention, as shown in fig. 6, the coordinate transformation relation obtaining module 460 includes a reference object image obtaining sub-module 4601, a reference point coordinate obtaining sub-module 4602, and a coordinate transformation relation obtaining sub-module 4603.
The reference object image acquiring sub-module 4601 is configured to acquire a visible light image including the first reference object and an infrared image including the first reference object through the visible light image sensor and the infrared image sensor, respectively, where the first reference object is placed within a viewing angle range of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object. The reference object image acquisition sub-module 4601 can be implemented by the processor 102/image signal processor 110 in the electronic apparatus shown in fig. 1 executing program instructions stored in the storage device 104.
The reference point coordinate acquiring sub-module 4602 is configured to acquire coordinates of the at least one reference point in the visible light image including the first reference object and the infrared image including the first reference object. The reference point coordinate acquisition sub-module 4602 can be implemented by the processor 102/image signal processor 110 in the electronic apparatus shown in fig. 1 executing program instructions stored in the storage device 104.
The coordinate transformation relation obtaining sub-module 4603 is configured to obtain the coordinate transformation relation by the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object. The coordinate transformation relation acquisition sub-module 4603 can be implemented by the processor 102/image signal processor 110 in the electronic apparatus shown in fig. 1 executing program instructions stored in the storage device 104.
According to the face tracking device of the embodiment of the present invention, the thermal radiation of a human body is sensed to form an infrared image, the coordinates of the center point of the face region are obtained from the infrared image, those coordinates are transformed into the corresponding coordinates of the corresponding point in the visible light image, the horizontal included angle and the vertical included angle of the corresponding point relative to the center point of the visible light image are then obtained, and the pan/tilt head is controlled to rotate based on these two angles so as to track the face of the target object. The face tracking device can automatically track and align the face of the identified target object without requiring the target object to actively walk to the front of the camera for alignment, thereby reducing the alignment time and improving the user experience.
Fig. 7 is a schematic structural block diagram of a face tracking apparatus according to an embodiment of the present invention. A face tracking apparatus according to an embodiment of the present invention is described below with reference to fig. 7.
As shown in fig. 7, the face tracking device 700 according to an embodiment of the present invention includes an infrared image sensor 710, a visible light image sensor 720, a pan/tilt head 730, a processor 740, and a storage 750.
The infrared image sensor 710 is used to capture an infrared image, for example, an infrared image of a target object. The visible light image sensor 720 is used to capture a visible light image, for example, a visible light image of a target object. The visible light image sensor 720 may be an RGB sensor or a black-and-white sensor. The infrared image sensor 710 and the visible light image sensor 720 can capture both still pictures and videos. Of course, the process of collecting data by the infrared image sensor 710 and the visible light image sensor 720 can be implemented by the processor 102 or the image signal processor 110 in the electronic device shown in fig. 1 executing program instructions stored in the storage device 104 to obtain picture or video data.
Further, in the embodiment of the present invention, the infrared image sensor 710 and the visible light image sensor 720 preferably have the same resolution, horizontal viewing angle and vertical viewing angle, and are horizontally disposed on the circuit board as close as possible to reduce the error of the coordinate transformation.
The pan/tilt head 730 is used for supporting the infrared image sensor 710 and the visible light image sensor 720, and adjusts their image acquisition regions through rotation in the horizontal and vertical directions, so as to realize automatic tracking alignment of the face of the target object.
The storage means 750 stores program codes for implementing the respective steps in the face tracking method according to an embodiment of the present invention.
The processor 740 is configured to run the program codes stored in the storage device 750 to execute the corresponding steps of the face tracking method according to the embodiment of the present invention, and to implement the image acquisition module 410, the infrared image face center point coordinate acquisition module 420, the visible light image corresponding coordinate acquisition module 430, the angle detection module 440, the pan/tilt rotation control module 450, and the coordinate transformation relation acquisition module 460 in the face tracking device according to the embodiment of the present invention; as well as the coordinate acquisition sub-module 4201, the pixel labeling sub-module 4202, the face region extraction sub-module 4203, and the infrared image face center point coordinate calculation sub-module 4204 in the infrared image face center point coordinate acquisition module 420, and the reference object image acquisition sub-module 4601, the reference point coordinate acquisition sub-module 4602, and the coordinate transformation relation acquisition sub-module 4603 in the coordinate transformation relation acquisition module 460.
In one embodiment, the following steps are performed when the program code is executed by the processor 740:
acquiring an infrared image and a visible light image, wherein the infrared image and the visible light image are respectively acquired by an infrared image sensor and a visible light image sensor;
detecting a face area of a target object in the infrared image and acquiring coordinates of a central point of the face area in the infrared image;
acquiring corresponding coordinates of a corresponding point of the center point of the face area in the infrared image in the visible light image according to the coordinates of the center point of the face area in the infrared image;
acquiring a horizontal included angle and a vertical included angle of the corresponding point of the center point of the face area in the infrared image in the visible light image relative to the center point of the visible light image, wherein the horizontal included angle is an included angle of a connecting line of the corresponding point and the center point of the visible light image relative to the horizontal direction of the visible light image, and the vertical included angle is an included angle of a connecting line of the corresponding point and the center point of the visible light image relative to the vertical direction of the visible light image; and
controlling the pan-tilt head to rotate based on the horizontal included angle and the vertical included angle so as to track the face of the target object.
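The steps above can be sketched in code. The following is a minimal, hypothetical sketch that is not part of the patent: it assumes a pinhole-style mapping in which a pixel's offset from the image center is proportional to the tangent of the viewing angle, and that the horizontal and vertical viewing angles of the visible light sensor are known. The function name and parameters are illustrative.

```python
import math

def pixel_to_angles(point, image_size, fov_deg):
    """Map a pixel's offset from the image center to a horizontal and a
    vertical angle, assuming a pinhole model in which the normalized pixel
    offset equals the tangent of the angle scaled by tan(fov/2).

    point:      (x, y) pixel coordinates of the corresponding point
    image_size: (width, height) of the visible light image
    fov_deg:    (horizontal_fov, vertical_fov) of the sensor, in degrees
    """
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0
    # Half-extent of the image plane in "tangent units" for each axis.
    tan_half_h = math.tan(math.radians(fov_deg[0] / 2.0))
    tan_half_v = math.tan(math.radians(fov_deg[1] / 2.0))
    horiz = math.degrees(math.atan((point[0] - cx) / cx * tan_half_h))
    vert = math.degrees(math.atan((point[1] - cy) / cy * tan_half_v))
    return horiz, vert

# Hypothetical usage: rotate the pan-tilt head by the computed angles so
# that the face center moves toward the center of the visible light image.
h, v = pixel_to_angles((480, 300), (640, 480), (60.0, 45.0))
```

In such a scheme, a point already at the image center yields zero angles and no rotation command, which is the steady state of the tracking loop.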
In one embodiment, processor 740 may be implemented as a Central Processing Unit (CPU) and/or an Image Signal Processor (ISP).
Illustratively, the face tracking device 700 according to embodiments of the present invention may be embodied as a device, apparatus or system having an infrared image sensor, a visible light image sensor, a pan-tilt, a memory, a communication interface/unit and a processor.
The face tracking device 700 according to the embodiment of the present invention may be deployed in a face detection and recognition system, for example, in access control systems or road monitoring systems in the field of security applications, or in identity authentication systems for banks, exchanges, and the like in the field of financial applications.
In addition, according to the embodiment of the present invention, a storage medium is further provided, on which program instructions are stored, and when the program instructions are executed by a computer or a processor, the program instructions are configured to execute corresponding steps of the face tracking method and the coordinate acquisition method of the center point of the face region in the infrared image according to the embodiment of the present invention, and are configured to implement corresponding modules/sub-modules of the face tracking device, the infrared image face center point coordinate acquisition module, and the coordinate transformation relation acquisition module according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer readable storage medium can be any combination of one or more computer readable storage media, for example, one computer readable storage medium contains computer readable program code for acquiring a horizontal angle and a vertical angle of a corresponding point of a center point of a face region in the infrared image in the visible light image with respect to a center point of the visible light image, and another computer readable storage medium contains computer readable program code for controlling rotation of a pan/tilt head based on the horizontal angle and the vertical angle.
In one embodiment, the computer program instructions, when executed by a computer, may implement the functional modules of the face tracking device according to the embodiment of the present invention and/or may execute the face tracking method according to the embodiment of the present invention.
In one embodiment, the computer program instructions, when executed by a computer, perform the steps of: acquiring an infrared image and a visible light image, wherein the infrared image and the visible light image are respectively acquired by an infrared image sensor and a visible light image sensor; detecting a face area of a target object in the infrared image and acquiring coordinates of a center point of the face area in the infrared image; acquiring the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face area in the infrared image, according to the coordinates of the center point of the face area in the infrared image; acquiring a horizontal included angle and a vertical included angle of the corresponding point relative to the center point of the visible light image, wherein the horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical included angle is the angle between that connecting line and the vertical direction of the visible light image; and controlling the pan-tilt head to rotate based on the horizontal included angle and the vertical included angle so as to track the face of the target object.
The modules in the face tracking apparatus according to the embodiment of the present invention may be implemented by the processor of the face tracking device according to the embodiment of the present invention running computer program instructions stored in the memory, or may be implemented by the computer instructions stored in the computer readable storage medium of the computer program product according to the embodiment of the present invention when the computer instructions are run by the computer.
According to the face tracking method and device, the face tracking equipment and the storage medium provided by the embodiment of the invention, the infrared image is formed by sensing the heat radiation of a human body, the coordinates of the center point of the face region are then obtained from the infrared image, the coordinates of the center point of the face region in the infrared image are converted into the corresponding coordinates of the corresponding point in the visible light image, the horizontal included angle and the vertical included angle of the corresponding point relative to the center point of the visible light image are then obtained, and the rotation of the pan-tilt head is controlled based on the horizontal included angle and the vertical included angle, so that the face of the target object is tracked. That is, the face tracking method and device can automatically track and align to the face of the identified target object without requiring the target object to actively walk to the front of the camera for alignment, thereby reducing the alignment time and improving the user experience.
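The coordinate transformation relation mentioned above, which the patent obtains from reference points marked on a first reference object, is not given a specific mathematical form in the text. An affine model fitted by least squares over matched reference points is one plausible realization, sketched below as an illustration rather than as the patented method; the function names and the example point pairs are assumptions.

```python
import numpy as np

def fit_affine(ir_points, vis_points):
    """Fit an affine transform mapping infrared-image coordinates to
    visible-light-image coordinates from matched reference points.
    Requires at least 3 non-collinear point pairs."""
    ir = np.asarray(ir_points, dtype=float)
    vis = np.asarray(vis_points, dtype=float)
    # Augment each infrared point to [x, y, 1] so the least-squares solve
    # recovers a 3x2 matrix encoding rotation/scale/shear plus translation.
    A = np.hstack([ir, np.ones((len(ir), 1))])
    M, *_ = np.linalg.lstsq(A, vis, rcond=None)
    return M  # shape (3, 2)

def ir_to_visible(point, M):
    """Map one infrared-image point through the fitted transform."""
    x, y = point
    return tuple(np.array([x, y, 1.0]) @ M)

# Hypothetical calibration: three reference points observed in both images
# (here the visible image is a shifted, 2x-scaled copy of the infrared one).
ir_pts = [(0, 0), (100, 0), (0, 100)]
vis_pts = [(10, 20), (210, 20), (10, 220)]
M = fit_affine(ir_pts, vis_pts)
```

With such a model, the face center point detected in the infrared image can be mapped into the visible light image with a single matrix product, after a one-time calibration against the reference object.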
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules in the face tracking apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.
The above description is only of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A face tracking method, comprising:
acquiring an infrared image and a visible light image, wherein the infrared image and the visible light image are respectively collected by an infrared image sensor and a visible light image sensor, and the infrared image sensor and the visible light image sensor are positioned on the same circuit board and have the same resolution, horizontal viewing angle, and vertical viewing angle;
detecting a human body region of a target object in the infrared image, and acquiring a human face region and coordinates of a central point of the human face region from the human body region, wherein the human face region of the target object in the infrared image is a preset region of the target object in the infrared image;
acquiring the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face area in the infrared image, according to the coordinates of the center point of the face area in the infrared image;
acquiring a horizontal included angle and a vertical included angle of the corresponding point, in the visible light image, of the center point of the face area in the infrared image relative to the center point of the visible light image, wherein the horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical included angle is the angle between that connecting line and the vertical direction of the visible light image; and
controlling the pan-tilt head to rotate based on the horizontal included angle and the vertical included angle so as to track the face of the target object.
2. The method according to claim 1, wherein the detecting a human body region of the target object in the infrared image, and acquiring the human face region and the coordinates of the center point of the human face region from the human body region, comprises:
acquiring coordinates of the heat radiation points sensed by the infrared image sensor in the infrared image;
marking the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1, and taking the region with the pixel value of 1 as the target object region in the infrared image;
extracting a preset region in the target object region as the human face region in the infrared image; and
acquiring a coordinate value of the center point of the human face region in the infrared image, and taking the coordinate value as the coordinates of the center point of the human face region in the infrared image.
3. The method according to claim 2, wherein the acquiring a coordinate value of the center point of the face region in the infrared image comprises:
extracting a circumscribed rectangle of the face region in the infrared image, and taking the coordinates of the center point of the circumscribed rectangle as the coordinates of the center point of the face region in the infrared image.
4. The face tracking method according to claim 1, wherein the acquiring the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face region in the infrared image according to the coordinates of the center point of the face region in the infrared image comprises:
transforming the coordinates of the center point of the face region in the infrared image into the corresponding coordinates of the corresponding point in the visible light image based on a coordinate transformation relation.
5. The method of claim 4, further comprising: acquiring the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object,
wherein the step of obtaining the coordinate transformation relationship comprises:
acquiring the visible light image including the first reference object and the infrared image including the first reference object through the visible light image sensor and the infrared image sensor, respectively, wherein the first reference object is arranged within the viewing angle range of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object;
acquiring coordinates of the at least one reference point in the visible light image including the first reference object and the infrared image including the first reference object, respectively; and
acquiring the coordinate transformation relation from the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object.
6. A face tracking device, comprising:
an image acquisition module, configured to acquire an infrared image and a visible light image, wherein the infrared image and the visible light image are respectively acquired by an infrared image sensor and a visible light image sensor, and the infrared image sensor and the visible light image sensor are positioned on the same circuit board and have the same resolution, horizontal viewing angle, and vertical viewing angle;
an infrared image face center point coordinate acquisition module, configured to detect a human body region of a target object in the infrared image and acquire the human face region and the coordinates of the center point of the human face region from the human body region, wherein the human face region of the target object in the infrared image is a preset region of the target object in the infrared image;
a visible light image corresponding coordinate acquisition module, configured to acquire the corresponding coordinates, in the visible light image, of the corresponding point of the center point of the face region in the infrared image according to the coordinates of the center point of the face region in the infrared image;
an angle detection module, configured to acquire a horizontal included angle and a vertical included angle of the corresponding point, in the visible light image, of the center point of the face region in the infrared image relative to the center point of the visible light image, wherein the horizontal included angle is the angle between the line connecting the corresponding point and the center point of the visible light image and the horizontal direction of the visible light image, and the vertical included angle is the angle between that connecting line and the vertical direction of the visible light image; and
a pan-tilt rotation control module, configured to control the pan-tilt head to rotate based on the horizontal included angle and the vertical included angle so as to track the face of the target object.
7. The face tracking device of claim 6, further comprising:
the infrared image sensor is used for acquiring the infrared image; and
a visible light image sensor to acquire the visible light image.
8. The face tracking device according to claim 7, wherein the infrared image face center point coordinate acquisition module comprises:
a coordinate acquisition sub-module, configured to acquire the coordinates of the heat radiation points sensed by the infrared image sensor in the infrared image;
a pixel labeling sub-module, configured to mark the pixel values corresponding to the coordinates of the heat radiation points in the infrared image as 1, and to take the region with the pixel value of 1 as the target object region in the infrared image;
a face region extraction sub-module, configured to extract a preset region in the target object region as the face region in the infrared image; and
an infrared image face center point coordinate calculation sub-module, configured to acquire a coordinate value of the center point of the face region in the infrared image, and to take the coordinate value as the coordinates of the center point of the face region in the infrared image.
9. The face tracking device according to claim 8, wherein the infrared image face center point coordinate calculation sub-module comprises an extraction unit, and the extraction unit is configured to extract a circumscribed rectangle of the face region in the infrared image and to take the coordinates of the center point of the circumscribed rectangle as the coordinates of the center point of the face region in the infrared image.
10. The face tracking device according to claim 7, wherein the visible light image corresponding coordinate acquisition module transforms the coordinates of the center point of the face area in the infrared image into the corresponding coordinates of the corresponding point in the visible light image based on a coordinate transformation relationship.
11. The face tracking device of claim 10, further comprising: a coordinate transformation relation acquisition module for acquiring the coordinate transformation relation based on an infrared image including a first reference object and a visible light image including the first reference object,
wherein the coordinate transformation relation obtaining module includes:
a reference object image acquisition sub-module, configured to acquire the visible light image including the first reference object and the infrared image including the first reference object through the visible light image sensor and the infrared image sensor, respectively, wherein the first reference object is placed within the viewing angle range of the visible light image sensor and the infrared image sensor, and at least one reference point is marked on the first reference object;
a reference point coordinate acquisition sub-module, configured to acquire the coordinates of the at least one reference point in the visible light image including the first reference object and in the infrared image including the first reference object, respectively; and
a coordinate transformation relation acquisition sub-module, configured to acquire the coordinate transformation relation from the coordinates of the at least one reference point in the visible light image including the first reference object and the coordinates of the at least one reference point in the infrared image including the first reference object.
12. The face tracking device according to any one of claims 6 to 11, wherein the image acquisition module, the infrared image face center point coordinate acquisition module, the visible light image corresponding coordinate acquisition module, the angle detection module and the pan-tilt rotation control module are implemented by the same processor.
CN201611216574.8A 2016-12-26 2016-12-26 Face tracking method and device Active CN106650665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611216574.8A CN106650665B (en) 2016-12-26 2016-12-26 Face tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611216574.8A CN106650665B (en) 2016-12-26 2016-12-26 Face tracking method and device

Publications (2)

Publication Number Publication Date
CN106650665A CN106650665A (en) 2017-05-10
CN106650665B true CN106650665B (en) 2021-02-12

Family

ID=58828324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611216574.8A Active CN106650665B (en) 2016-12-26 2016-12-26 Face tracking method and device

Country Status (1)

Country Link
CN (1) CN106650665B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875479A (en) * 2017-08-15 2018-11-23 北京旷视科技有限公司 The acquisition methods and device of facial image
CN109960401B (en) * 2017-12-26 2020-10-23 广景视睿科技(深圳)有限公司 Dynamic projection method, device and system based on face tracking
CN108596110A (en) * 2018-04-26 2018-09-28 北京京东金融科技控股有限公司 Image-recognizing method and device, electronic equipment, storage medium
CN111854107A (en) * 2019-04-24 2020-10-30 杭州萤石软件有限公司 Intelligent control method and device of air purifier and air purifier
CN110244775A (en) * 2019-04-29 2019-09-17 广州市景沃电子有限公司 Automatic tracking method and device based on mobile device clamping holder
CN110411570B (en) * 2019-06-28 2020-08-28 武汉高德智感科技有限公司 Infrared human body temperature screening method based on human body detection and human body tracking technology
CN111242984B (en) * 2020-02-13 2021-01-19 珠海安联锐视科技股份有限公司 Target tracking method based on moving head camera
CN111539295A (en) * 2020-04-17 2020-08-14 中移(杭州)信息技术有限公司 Face recognition method, device, system, network equipment and storage medium
CN111881712B (en) * 2020-05-12 2021-03-19 郑海英 Medical instrument real-time tracking platform based on signal processing
CN111815676A (en) * 2020-07-06 2020-10-23 浙江大华技术股份有限公司 Face tracking method, device, equipment and storage device
CN111998959B (en) * 2020-07-20 2022-07-22 北京百度网讯科技有限公司 Temperature calibration method and device based on real-time temperature measurement system and storage medium
CN112380965B (en) * 2020-11-11 2024-04-09 浙江大华技术股份有限公司 Face recognition method and multi-camera
CN113063500B (en) * 2021-03-30 2022-04-22 新疆爱华盈通信息技术有限公司 Face temperature measurement method, face temperature measurement instrument and storage medium
CN113112668A (en) * 2021-04-15 2021-07-13 新疆爱华盈通信息技术有限公司 Face recognition-based holder tracking method, holder and entrance guard recognition machine
CN113516089B (en) * 2021-07-27 2024-04-12 中国平安人寿保险股份有限公司 Face image recognition method, device, equipment and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520838A (en) * 2008-02-27 2009-09-02 中国科学院自动化研究所 Automatic-tracking and automatic-zooming method for acquiring iris images
CN102098499B (en) * 2011-03-24 2013-01-30 杭州华三通信技术有限公司 Pan/ tilt/ zoom (PTZ) camera control method, device and system thereof
CN103024281A (en) * 2013-01-11 2013-04-03 重庆大学 Infrared and visible video integration system
WO2014116882A1 (en) * 2013-01-23 2014-07-31 Leap Motion, Inc. Systems and methods of tracking object movements in three-dimensional space
CN103413395B (en) * 2013-08-15 2016-04-13 北京声迅电子股份有限公司 Flue gas intelligent detecting prewarning method and device
CN103905733B (en) * 2014-04-02 2018-01-23 哈尔滨工业大学深圳研究生院 A kind of method and system of monocular cam to real time face tracking

Also Published As

Publication number Publication date
CN106650665A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106650665B (en) Face tracking method and device
CN106446873B (en) Face detection method and device
US10719954B2 (en) Method and electronic device for extracting a center position of an infrared spot
US10719958B2 (en) Methods and systems for fusion display of thermal infrared and visible image
JP6257840B2 (en) System and method for liveness analysis
US8059870B2 (en) Time-of-flight sensor-assisted iris capture system and method
US9058644B2 (en) Local image enhancement for text recognition
KR20140013407A (en) Apparatus and method for tracking object
US9269009B1 (en) Using a front-facing camera to improve OCR with a rear-facing camera
EP3168810A1 (en) Image generating method and apparatus
US20130147977A1 (en) Iris deblurring method based on global and local iris image statistics
CN108141568B (en) OSD information generation camera, synthesis terminal device and sharing system
CN109413411B (en) Black screen identification method and device of monitoring line and server
WO2019061659A1 (en) Method and device for removing eyeglasses from facial image, and storage medium
CN111582118A (en) Face recognition method and device
CN112102207A (en) Method and device for determining temperature, electronic equipment and readable storage medium
CN109785439B (en) Face sketch image generation method and related products
CN107316318A (en) Aerial target automatic testing method based on multiple subarea domain Background fitting
JP6772059B2 (en) Electronic control devices, electronic control systems and electronic control methods
EP3175609B1 (en) Processing data representing an image
CN108875472B (en) Image acquisition device and face identity verification method based on image acquisition device
JP6030890B2 (en) Image processing unit, image processing method, and stand type scanner
KR20180125278A (en) Apparatus and method for detecting pedestrian
US11205064B1 (en) Measuring quality of depth images in real time
CN113011222B (en) Living body detection system, living body detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

GR01 Patent grant