WO2019033575A1 - Electronic device, face tracking method and system, and storage medium - Google Patents

Electronic device, face tracking method and system, and storage medium

Info

Publication number
WO2019033575A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
frames
adjacent
similarity
image
Application number
PCT/CN2017/108760
Other languages
French (fr)
Chinese (zh)
Inventor
戴磊
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2019033575A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • The present application relates to the field of image processing technologies, and in particular to an electronic device, a face tracking method and system, and a storage medium.
  • At present, when the same person's face is tracked, the usual approach is position-based: a face in the current frame is judged to belong to the same person when its center coordinates are closest to those of a face in the previous frame. Because only the x and y coordinates are used and no depth information is taken into account, a brief occlusion during tracking, or a few frames in which a face goes undetected, may cause a nearby face to be mistaken for the continuation of a distant face (or vice versa), producing face tracking errors.
  • The purpose of the present application is to provide an electronic device, a face tracking method and system, and a storage medium that can track faces accurately when a brief occlusion occurs or when a small number of face detections are missed.
  • The present application provides an electronic device including a memory and a processor connected to the memory. The memory stores a face tracking system operable on the processor, and when the face tracking system is executed by the processor, the following steps are implemented:
  • A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
  • A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • The present application further provides a face tracking method, which includes:
  • A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
  • A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • The present application also provides a face tracking system, which includes:
  • A determining module, configured to acquire a captured time series of face images, take two adjacent frames of face images from the time series, and determine the face regions in the two adjacent frames of face images;
  • A calculation module, configured to calculate the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • A tracking module, configured to perform face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • The present application also provides a computer-readable storage medium on which a face tracking system is stored; when the face tracking system is executed by a processor, the following steps are implemented:
  • A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
  • A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • The beneficial effects of the present application are as follows: in face tracking, face size is used, in addition to the x and y coordinates, as a further basis for judging whether two faces belong to the same target. Because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames; therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor. Using this algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a small number of face detections are missed, enabling accurate face tracking.
  • FIG. 1 is a schematic diagram of an optional application environment for the embodiments of the present application.
  • FIG. 2 is a schematic flowchart of an embodiment of the face tracking method of the present application.
  • Descriptions involving "first", "second", and the like in the present application are used for description purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature.
  • In addition, the technical solutions of the various embodiments may be combined with one another, but only on the basis that they can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, that combination should be considered non-existent and not within the scope of protection claimed by this application.
  • FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face tracking method of the present application.
  • The application environment includes an electronic device 1 and a camera device 2.
  • The electronic device 1 can exchange data with the camera device 2 through a suitable technology such as a network or near-field communication.
  • The camera device 2 may be a camera containing a vacuum-tube (TUBE) sensor, a CCD (charge-coupled device) sensor, or a CMOS (complementary metal-oxide-semiconductor) sensor, without limitation.
  • The camera device 2 includes one or more cameras installed in a specific place (for example, an office or a monitored area); it captures video in real time of targets entering that place and transmits the captured video to the electronic device 1 in real time over the network.
  • The electronic device 1 is an apparatus capable of automatically performing numerical calculation and/or information processing in accordance with pre-set or pre-stored instructions.
  • The electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer consisting of a group of loosely coupled computers.
  • The electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to one another through a system bus; the memory 11 stores a face tracking system executable on the processor 12. It should be noted that FIG. 1 only shows the electronic device 1 with components 11-13, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
  • The memory 11 may include internal memory and at least one type of readable storage medium; the internal memory provides a cache for the operation of the electronic device 1.
  • The readable storage medium can also be used to store the real-time captured face images received by the electronic device 1 and a library of face image samples.
  • The readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, or an optical disk.
  • In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1; in other embodiments, the readable storage medium may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1.
  • In this embodiment, the memory 11 can also be used to store the operating system installed on the electronic device 1 and various types of application software, such as the program code of the face tracking system in an embodiment of the present application.
  • In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip.
  • The processor 12 is generally used to control the overall operation of the electronic device 1, for example performing control and processing related to data exchange or communication with the camera device 2.
  • In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the face tracking system.
  • The network interface 13 may comprise a wireless network interface or a wired network interface and is typically used to establish a communication connection between the electronic device 1 and other electronic devices.
  • In this embodiment, the network interface 13 is mainly used to connect the electronic device 1 with one or more camera devices 2 to establish a data transmission channel and a communication connection.
  • The face tracking system is stored in the memory 11 and includes at least one computer-readable instruction stored in the memory 11; the at least one computer-readable instruction can be executed by the processor 12 to implement the methods of the various embodiments of the present application, and can be divided into different logic modules according to the functions implemented by its parts.
  • In this embodiment, it includes a determining module, a calculation module, and a tracking module.
  • A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
  • For each frame of the dynamic video captured in real time, images containing a face are selected based on facial features, and the selected images are used as the time series of face images.
  • Specifically, the methods for selecting images containing a face based on facial features include: traditional methods (e.g., rules based on facial contours, organ distribution, symmetry, or motion), geometric-feature-based methods (e.g., using machine learning to find facial features), correlation-matching-based methods (e.g., template matching or iso-intensity-line methods), appearance-based methods (e.g., using statistical analysis and machine learning techniques to find the distinguishing characteristics of face and non-face images), and methods based on statistical theory (e.g., neural networks or support vector machines).
  • The face region can be large or small: for a face captured from a distance, the face region is small; for a face captured at close range, the face region is large.
  • The face region is the minimum area containing a face, preferably a rectangular area containing the face; of course, it may also be an area of another shape containing the face, such as a circular area, without limitation.
  • A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • In this embodiment, the similarity of the faces in two adjacent frames of face images is calculated by a formula (reproduced as an image in the published description), where S_i,j is the similarity and w_x, w_y, w_w, w_h ∈ [0, 1] are the weights of, respectively, the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames; the weights may be equal or different.
  • Preferably, the weights w_x, w_y, w_w, w_h of the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames are all 0.25.
  • The depth information of the image (i.e., the change in the z direction) is reflected by the size of the face region in the real-time image, so the similarity of faces is calculated based on the x, y, and z directions of the face image.
  • In addition to the x and y coordinates, face size is thus used as a further basis for judging whether two faces belong to the same target: because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames. Therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor.
  • This algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a few face detections are missed, enabling accurate face tracking.
  • The face tracking judging step includes: when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, judging that the faces in the two adjacent frames of face images belong to the same person.
  • The preset threshold is preferably 0.85.
  • When the similarity of the faces in the two adjacent frames is greater than or equal to the preset threshold, the faces in the two adjacent frames of face images are judged to be the same person's face.
  • When each of the two adjacent frames of face images contains exactly one identified face region, whether the two faces belong to the same person is judged directly from the single calculated similarity value.
  • When more than one face region is identified in the images (for example, two or three face regions in some images), the similarity formula is used to compute the similarity between every face region in one image and every face region in the other image, and the matching faces in the two images are then determined from these similarity values; that is, faces whose similarity is greater than or equal to the preset threshold are the same person's face.
  • FIG. 2 is a schematic flowchart of an embodiment of the face tracking method of the present application; the face tracking method includes the following steps:
  • S1, a face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
  • For each frame of the dynamic video captured in real time, images containing a face are selected based on facial features, and the selected images are used as the time series of face images.
  • Specifically, the methods for selecting images containing a face based on facial features include: traditional methods (e.g., rules based on facial contours, organ distribution, symmetry, or motion), geometric-feature-based methods (e.g., using machine learning to find facial features), correlation-matching-based methods (e.g., template matching or iso-intensity-line methods), appearance-based methods (e.g., using statistical analysis and machine learning techniques to find the distinguishing characteristics of face and non-face images), and methods based on statistical theory (e.g., neural networks or support vector machines).
  • The face region can be large or small: for a face captured from a distance, the face region is small; for a face captured at close range, the face region is large.
  • The face region is the minimum area containing a face, preferably a rectangular area containing the face; of course, it may also be an area of another shape containing the face, such as a circular area, without limitation.
  • S2, a similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
  • S3, a face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  • In this embodiment, the similarity of the faces in two adjacent frames of face images is calculated by the same formula (reproduced as an image in the published description), where S_i,j is the similarity and w_x, w_y, w_w, w_h ∈ [0, 1] are the weights of, respectively, the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames; the weights may be equal or different.
  • Preferably, the weights w_x, w_y, w_w, w_h are all 0.25.
  • In addition to the x and y coordinates, face size is used as a further basis for judging whether two faces belong to the same target: because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames. Therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor.
  • This algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a few face detections are missed, enabling accurate face tracking.
  • The face tracking judging step includes: when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, judging that the faces in the two adjacent frames of face images belong to the same person.
  • The preset threshold is preferably 0.85.
  • When the similarity of the faces in the two adjacent frames is greater than or equal to the preset threshold, the faces in the two adjacent frames of face images are judged to be the same person's face.
  • When each of the two adjacent frames of face images contains exactly one identified face region, whether the two faces belong to the same person is judged directly from the single calculated similarity value.
  • When more than one face region is identified in the images (for example, two or three face regions in some images), the similarity formula is used to compute the similarity between every face region in one image and every face region in the other image, and the matching faces in the two images are then determined from these similarity values; that is, faces whose similarity is greater than or equal to the preset threshold are the same person's face.
  • The present application also provides a computer-readable storage medium on which a face tracking system is stored; when the face tracking system is executed by a processor, the steps of the face tracking method described above are implemented.
  • From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the part of the technical solution of the present application that is essential, or that contributes to the prior art, can be embodied in the form of a software product stored on a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An electronic device, a face tracking method and system, and a storage medium. The method comprises: a face position determining step: obtaining a captured time series of face images, selecting two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images (S1); a similarity calculating step: calculating the similarity of the faces in the two adjacent frames of face images based on the X and Y coordinates of the center points of the face regions and the heights H and widths W of the face regions (S2); and a face tracking determining step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images (S3). The method can accurately track a face in the case of a transient occlusion of the face or when face detection is missing for a small number of faces.

Description

Electronic device, face tracking method and system, and storage medium
Priority claim
Under the Paris Convention, this application claims priority to the Chinese patent application filed on August 17, 2017 with application number CN 201710709124.0 and entitled "Electronic device, face tracking method and storage medium"; the entire content of that Chinese patent application is incorporated herein by reference.
Technical field
The present application relates to the field of image processing technologies, and in particular to an electronic device, a face tracking method and system, and a storage medium.
Background
At present, when the same person's face is tracked, the usual approach is position-based: a face in the current frame is judged to belong to the same person when its center coordinates are closest to those of a face in the previous frame. Because only the x and y coordinates are used and no depth information is taken into account, a brief occlusion during tracking, or a few frames in which a face goes undetected, may cause a nearby face to be mistaken for the continuation of a distant face (or vice versa), producing face tracking errors.
Summary of the invention
The purpose of the present application is to provide an electronic device, a face tracking method and system, and a storage medium that can track faces accurately when a brief occlusion occurs or when a small number of face detections are missed.
To achieve the above object, the present application provides an electronic device including a memory and a processor connected to the memory. The memory stores a face tracking system operable on the processor, and when the face tracking system is executed by the processor, the following steps are implemented:
A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
To achieve the above object, the present application further provides a face tracking method, which includes:
A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
To achieve the above object, the present application provides a face tracking system, which includes:
A determining module, configured to acquire a captured time series of face images, take two adjacent frames of face images from the time series, and determine the face regions in the two adjacent frames of face images;
A calculation module, configured to calculate the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
A tracking module, configured to perform face tracking based on the similarity of the faces in the two adjacent frames of face images.
The present application also provides a computer-readable storage medium on which a face tracking system is stored; when the face tracking system is executed by a processor, the following steps are implemented:
A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
The beneficial effects of the present application are as follows: in face tracking, face size is used, in addition to the x and y coordinates, as a further basis for judging whether two faces belong to the same target. Because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames; therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor. Using this algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a small number of face detections are missed, enabling accurate face tracking.
Brief description of the drawings
FIG. 1 is a schematic diagram of an optional application environment for the embodiments of the present application.
FIG. 2 is a schematic flowchart of an embodiment of the face tracking method of the present application.
Detailed description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
It should be noted that descriptions involving "first", "second", and the like in the present application are used for description purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, but only on the basis that they can be implemented by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be implemented, that combination should be considered non-existent and not within the scope of protection claimed by this application.
Referring to FIG. 1, it is a schematic diagram of the application environment of a preferred embodiment of the face tracking method of the present application. The application environment includes an electronic device 1 and a camera device 2. The electronic device 1 can exchange data with the camera device 2 through a suitable technology such as a network or near-field communication.
The camera device 2 may be a camera containing a vacuum-tube (TUBE) sensor, a CCD (charge-coupled device) sensor, or a CMOS (complementary metal-oxide-semiconductor) sensor, without limitation. The camera device 2 includes one or more cameras installed in a specific place (for example, an office or a monitored area); it captures video in real time of targets entering that place and transmits the captured video to the electronic device 1 in real time over the network.
The electronic device 1 is an apparatus capable of automatically performing numerical calculation and/or information processing in accordance with pre-set or pre-stored instructions. The electronic device 1 may be a computer, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of hosts or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer consisting of a group of loosely coupled computers.
In this embodiment, the electronic device 1 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicably connected to one another through a system bus; the memory 11 stores a face tracking system executable on the processor 12. It should be noted that FIG. 1 only shows the electronic device 1 with components 11-13, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
The memory 11 may include internal memory and at least one type of readable storage medium; the internal memory provides a cache for the operation of the electronic device 1. The readable storage medium can also be used to store the real-time captured face images received by the electronic device 1 and a library of face image samples. The readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1; in other embodiments, it may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card equipped on the electronic device 1. In this embodiment, the memory 11 can also be used to store the operating system installed on the electronic device 1 and various types of application software, such as the program code of the face tracking system in an embodiment of the present application.
In some embodiments, the processor 12 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is generally used to control the overall operation of the electronic device 1, for example performing control and processing related to data exchange or communication with the camera device 2. In this embodiment, the processor 12 is configured to run the program code or process the data stored in the memory 11, for example to run the face tracking system.
The network interface 13 may comprise a wireless network interface or a wired network interface and is typically used to establish a communication connection between the electronic device 1 and other electronic devices. In this embodiment, the network interface 13 is mainly used to connect the electronic device 1 with one or more camera devices 2 to establish a data transmission channel and a communication connection.
The face tracking system is stored in the memory 11 and includes at least one computer-readable instruction stored in the memory 11; the at least one computer-readable instruction can be executed by the processor 12 to implement the methods of the various embodiments of the present application, and can be divided into different logic modules according to the functions implemented by its parts. In this embodiment, it includes a determining module, a calculation module, and a tracking module.
In one embodiment, when the above face tracking system is executed by the processor 12, the following steps are implemented:
A face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
For each frame of the dynamic video captured in real time, images containing a face are selected based on facial features, and the selected images are used as the time series of face images.
Specifically, the methods for selecting images containing a face based on facial features include: traditional methods (e.g., rules based on facial contours, organ distribution, symmetry, or motion), geometric-feature-based methods (e.g., using machine learning to find facial features), correlation-matching-based methods (e.g., template matching or iso-intensity-line methods), appearance-based methods (e.g., using statistical analysis and machine learning techniques to find the distinguishing characteristics of face and non-face images), and methods based on statistical theory (e.g., neural networks or support vector machines).
In this embodiment, for the time series of face images, two adjacent frames are taken, and the face regions are determined in these two adjacent frames. The face region can be large or small: for a face captured from a distance, the face region is small; for a face captured at close range, the face region is large. The face region is the minimum area containing a face, preferably a rectangular area containing the face; of course, it may also be an area of another shape containing the face, such as a circular area, without limitation.
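The application does not prescribe a particular detector for obtaining these face regions; any of the method families listed above can be used. As a minimal illustrative sketch (an assumption for illustration, not the patent's own method), the face regions of one frame could be obtained with OpenCV's bundled Haar-cascade detector, each region being returned as the rectangle (x, y, w, h) that the similarity step then consumes:

    # Illustrative sketch only: the application does not mandate a specific detector.
    # OpenCV's bundled Haar cascade is used here as one concrete, commonly available choice.
    import cv2

    def detect_face_regions(frame):
        """Return the face regions of one frame as (x, y, w, h) rectangles,
        where (x, y) is the top-left corner of the minimal bounding box."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [tuple(int(v) for v in box) for box in detections]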
A similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
A face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
In this embodiment, the similarity of the faces in two adjacent frames of face images is calculated by the following formula:
[formula image PCTCN2017108760-appb-000001 in the original publication]
where S_i,j is the similarity, and w_x, w_y, w_w, w_h ∈ [0, 1] are the weights of, respectively, the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames; the weights may be equal or different, where:
[image PCTCN2017108760-appb-000002] is the x-direction distance between the center points of face i and face j;
[image PCTCN2017108760-appb-000003] is the y-direction distance between the center points of face i and face j;
[image PCTCN2017108760-appb-000004] is the width difference between face i and face j;
[image PCTCN2017108760-appb-000005] is the height difference between face i and face j.
Preferably, the weights w_x, w_y, w_w, w_h of the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames are all 0.25.
For the same person's face, the closer the face is to the camera, the larger the proportion of the image occupied by the captured face region, i.e., the larger the product of the region's height H and width W; the farther the face is from the camera, the smaller that proportion, i.e., the smaller the product of H and W. Therefore, in this embodiment, the depth information of the image (i.e., the change in the z direction) is reflected by the size of the face region in the real-time image, and the similarity of faces is calculated based on the x, y, and z directions of the face image.
Compared with the prior art, this embodiment adds face size, in addition to the x and y coordinates, as a basis for judging whether two faces belong to the same target. Because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames; therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor. Using this algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a few face detections are missed, enabling accurate face tracking.
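Because the exact similarity formula appears only as an image in the published text, the sketch below is a guessed instantiation rather than the patent's own expression: it assumes each of the four terms is an absolute difference normalized by the larger of the two face sizes, subtracted from 1 with the preferred weights w_x = w_y = w_w = w_h = 0.25. The function name and the normalization are assumptions introduced here for illustration:

    def face_similarity(box_i, box_j, weights=(0.25, 0.25, 0.25, 0.25)):
        """Similarity S_i,j of two face regions (x, y, w, h) from adjacent frames.

        Assumed form: 1 minus a weighted sum of the x-direction distance,
        y-direction distance, width difference, and height difference, each
        normalized by the larger of the two face sizes. The published formula
        itself appears only as an image, so this normalization is a guess."""
        xi, yi, wi, hi = box_i
        xj, yj, wj, hj = box_j
        cx_i, cy_i = xi + wi / 2.0, yi + hi / 2.0      # center point of face i
        cx_j, cy_j = xj + wj / 2.0, yj + hj / 2.0      # center point of face j
        scale_w, scale_h = float(max(wi, wj)), float(max(hi, hj))
        d_x = abs(cx_i - cx_j) / scale_w               # x-direction distance
        d_y = abs(cy_i - cy_j) / scale_h               # y-direction distance
        d_w = abs(wi - wj) / scale_w                   # width difference
        d_h = abs(hi - hj) / scale_h                   # height difference
        w_x, w_y, w_w, w_h = weights
        return max(0.0, 1.0 - (w_x * d_x + w_y * d_y + w_w * d_w + w_h * d_h))

Because the width and height terms enter the score, a near face and a far face with similar image coordinates still receive a low similarity, which is how the z-direction (depth) information described above is captured.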
In a preferred embodiment, on the basis of the embodiment of FIG. 1 above, the face tracking judging step includes: when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, judging that the faces in the two adjacent frames of face images belong to the same person.
In this embodiment, when the similarity of the faces in the two adjacent frames of face images is greater than or equal to the preset threshold (preferably 0.85), the faces in the two adjacent frames of face images are judged to be the same person's face.
When each of the two adjacent frames of face images contains exactly one identified face region, whether the two faces belong to the same person is judged directly from the single calculated similarity value. In other embodiments, when more than one face region is identified in the images (for example, two or three face regions in some images), the similarity formula is used to compute the similarity between every face region in one image and every face region in the other image, and the matching faces in the two images are then determined from these similarity values; that is, faces whose similarity is greater than or equal to the preset threshold are the same person's face.
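Concretely, the multi-face case amounts to scoring every cross-frame pair and keeping only matches at or above the threshold. The greedy one-to-one assignment below is an assumption for illustration (the text only states the "most similar face at or above the threshold" rule); it reuses the face_similarity sketch above:

    def match_faces(prev_boxes, curr_boxes, threshold=0.85):
        """Match each face region of frame T-1 to its most similar face region of frame T.

        Returns {index in prev_boxes: index in curr_boxes}. A face whose best
        similarity is below the threshold is left unmatched (e.g. it left the
        scene or its detection was missed in frame T)."""
        matches, used = {}, set()
        for i, prev in enumerate(prev_boxes):
            candidates = [(face_similarity(prev, curr), j)
                          for j, curr in enumerate(curr_boxes) if j not in used]
            if not candidates:
                continue
            best_sim, best_j = max(candidates)
            if best_sim >= threshold:
                matches[i] = best_j
                used.add(best_j)       # enforce one-to-one matching
        return matches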
As shown in FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of the face tracking method of the present application; the face tracking method includes the following steps:
S1, a face position determining step: acquiring a captured time series of face images, taking two adjacent frames of face images from the time series, and determining the face regions in the two adjacent frames of face images;
For each frame of the dynamic video captured in real time, images containing a face are selected based on facial features, and the selected images are used as the time series of face images.
Specifically, the methods for selecting images containing a face based on facial features include: traditional methods (e.g., rules based on facial contours, organ distribution, symmetry, or motion), geometric-feature-based methods (e.g., using machine learning to find facial features), correlation-matching-based methods (e.g., template matching or iso-intensity-line methods), appearance-based methods (e.g., using statistical analysis and machine learning techniques to find the distinguishing characteristics of face and non-face images), and methods based on statistical theory (e.g., neural networks or support vector machines).
In this embodiment, for the time series of face images, two adjacent frames are taken, and the face regions are determined in these two adjacent frames. The face region can be large or small: for a face captured from a distance, the face region is small; for a face captured at close range, the face region is large. The face region is the minimum area containing a face, preferably a rectangular area containing the face; of course, it may also be an area of another shape containing the face, such as a circular area, without limitation.
S2, a similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinates of the center points of the face regions and the height H and width W of the face regions;
S3, a face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
In this embodiment, the similarity of the faces in two adjacent frames of face images is calculated by the following formula:
[formula image PCTCN2017108760-appb-000006 in the original publication]
where S_i,j is the similarity, and w_x, w_y, w_w, w_h ∈ [0, 1] are the weights of, respectively, the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames; the weights may be equal or different, where:
[image PCTCN2017108760-appb-000007] is the x-direction distance between the center points of face i and face j;
[image PCTCN2017108760-appb-000008] is the y-direction distance between the center points of face i and face j;
[image PCTCN2017108760-appb-000009] is the width difference between face i and face j;
[image PCTCN2017108760-appb-000010] is the height difference between face i and face j.
Preferably, the weights w_x, w_y, w_w, w_h of the x-direction distance, the y-direction distance, the width difference, and the height difference between face i and face j in the two adjacent frames are all 0.25.
For the same person's face, the closer the face is to the camera, the larger the proportion of the image occupied by the captured face region, i.e., the larger the product of the region's height H and width W; the farther the face is from the camera, the smaller that proportion, i.e., the smaller the product of H and W. Therefore, in this embodiment, the depth information of the image (i.e., the change in the z direction) is reflected by the size of the face region in the real-time image, and the similarity of faces is calculated based on the x, y, and z directions of the face image.
Compared with the prior art, this embodiment adds face size, in addition to the x and y coordinates, as a basis for judging whether two faces belong to the same target. Because a person generally does not move at high speed in the x, y, or z direction, the size of the same person's face does not change greatly between frames; therefore, for each face in frame T-1, the face with the highest similarity among all faces in frame T is taken as its tracking successor. Using this algorithm reduces the possibility of erroneous tracking when a brief occlusion occurs or when a few face detections are missed, enabling accurate face tracking.
In a preferred embodiment, on the basis of the embodiment of FIG. 2 above, the face tracking judging step includes: when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, judging that the faces in the two adjacent frames of face images belong to the same person.
In this embodiment, when the similarity of the faces in the two adjacent frames of face images is greater than or equal to the preset threshold (preferably 0.85), the faces in the two adjacent frames of face images are judged to be the same person's face.
When each of the two adjacent frames of face images contains exactly one identified face region, whether the two faces belong to the same person is judged directly from the single calculated similarity value. In other embodiments, when more than one face region is identified in the images (for example, two or three face regions in some images), the similarity formula is used to compute the similarity between every face region in one image and every face region in the other image, and the matching faces in the two images are then determined from these similarity values; that is, faces whose similarity is greater than or equal to the preset threshold are the same person's face.
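Applied over the whole time series, the adjacent-frame test above can carry a face identity from frame to frame. The sketch below reuses the detect_face_regions and face-matching helpers sketched earlier; the assignment of persistent track IDs is an illustrative addition rather than something the application itself specifies:

    def track_faces(frames, threshold=0.85):
        """Detect and match faces over a time series of frames, keeping a
        persistent track id per person. Track-id bookkeeping is illustrative;
        the application only defines the adjacent-frame similarity test."""
        next_id = 0
        prev_boxes, prev_ids = [], []
        per_frame_tracks = []                       # one list of (track_id, box) per frame
        for frame in frames:
            boxes = detect_face_regions(frame)
            matches = match_faces(prev_boxes, boxes, threshold)
            ids = [None] * len(boxes)
            for prev_idx, curr_idx in matches.items():
                ids[curr_idx] = prev_ids[prev_idx]  # same person: keep the old id
            for j, tid in enumerate(ids):
                if tid is None:                     # new or re-appearing face
                    ids[j] = next_id
                    next_id += 1
            per_frame_tracks.append(list(zip(ids, boxes)))
            prev_boxes, prev_ids = boxes, ids
        return per_frame_tracks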
本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有人脸追踪系统,所述人脸追踪系统被处理器执行时实现上述的人脸追踪的方法的步骤。The present application also provides a computer readable storage medium storing a face tracking system on a computer readable storage medium, the steps of the method for implementing face tracking described above when the face tracking system is executed by a processor.
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。The serial numbers of the embodiments of the present application are merely for the description, and do not represent the advantages and disadvantages of the embodiments.
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。Through the description of the above embodiments, those skilled in the art can clearly understand that the foregoing embodiment method can be implemented by means of software plus a necessary general hardware platform, and of course, can also be through hardware, but in many cases, the former is better. Implementation. Based on such understanding, the technical solution of the present application, which is essential or contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, disk, The optical disc includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in various embodiments of the present application.
The above are only preferred embodiments of the present application and are not intended to limit the patent scope of the present application; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. An electronic device, characterized in that the electronic device comprises a memory and a processor connected to the memory, the memory storing a face tracking system operable on the processor, and the face tracking system, when executed by the processor, implements the following steps:
    a face position determining step: acquiring a time series of captured face images, taking two adjacent frames of face images from the time series, and determining face regions in the two adjacent frames of face images;
    a similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinate values of the center points of the face regions in the two adjacent frames of face images, and the height H and width W values of the face regions;
    a face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  2. The electronic device according to claim 1, characterized in that the similarity calculation step comprises:
    Figure PCTCN2017108760-appb-100001
    where S_{i,j} is the similarity, and w_x, w_y, w_w, w_h are respectively the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames, with w_x, w_y, w_w, w_h ∈ [0,1], wherein:
    Figure PCTCN2017108760-appb-100002
    is the x-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100003
    is the y-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100004
    is the width difference between face i and face j;
    Figure PCTCN2017108760-appb-100005
    is the height difference between face i and face j.
  3. The electronic device according to claim 2, characterized in that the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames are each 0.25.
  4. The electronic device according to any one of claims 1 to 3, characterized in that the face tracking judging step comprises:
    when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, determining that the faces in the two adjacent frames of face images belong to the same person.
  5. The electronic device according to claim 4, characterized in that the preset threshold is 0.85.
  6. A face tracking method, characterized in that the face tracking method comprises:
    a face position determining step: acquiring a time series of captured face images, taking two adjacent frames of face images from the time series, and determining face regions in the two adjacent frames of face images;
    a similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinate values of the center points of the face regions in the two adjacent frames of face images, and the height H and width W values of the face regions;
    a face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  7. The face tracking method according to claim 6, characterized in that the similarity calculation step comprises:
    Figure PCTCN2017108760-appb-100006
    where S_{i,j} is the similarity, and w_x, w_y, w_w, w_h are respectively the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames, with w_x, w_y, w_w, w_h ∈ [0,1], wherein:
    Figure PCTCN2017108760-appb-100007
    is the x-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100008
    is the y-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100009
    is the width difference between face i and face j;
    Figure PCTCN2017108760-appb-100010
    is the height difference between face i and face j.
  8. The face tracking method according to claim 7, characterized in that the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames are each 0.25.
  9. The face tracking method according to any one of claims 6 to 8, characterized in that the face tracking judging step comprises:
    when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, determining that the faces in the two adjacent frames of face images belong to the same person.
  10. The face tracking method according to claim 9, characterized in that the preset threshold is 0.85.
  11. A face tracking system, characterized in that the face tracking system comprises:
    a determining module, configured to acquire a time series of captured face images, take two adjacent frames of face images from the time series, and determine face regions in the two adjacent frames of face images;
    a calculation module, configured to calculate the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinate values of the center points of the face regions in the two adjacent frames of face images, and the height H and width W values of the face regions;
    a tracking module, configured to perform face tracking based on the similarity of the faces in the two adjacent frames of face images.
  12. The face tracking system according to claim 11, characterized in that the calculation module is specifically configured to:
    Figure PCTCN2017108760-appb-100011
    where S_{i,j} is the similarity, and w_x, w_y, w_w, w_h are respectively the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames, with w_x, w_y, w_w, w_h ∈ [0,1], wherein:
    Figure PCTCN2017108760-appb-100012
    is the x-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100013
    is the y-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100014
    is the width difference between face i and face j;
    Figure PCTCN2017108760-appb-100015
    is the height difference between face i and face j.
  13. The face tracking system according to claim 12, characterized in that the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames are each 0.25.
  14. The face tracking system according to any one of claims 11 to 13, characterized in that the tracking module is specifically configured to:
    determine, when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, that the faces in the two adjacent frames of face images belong to the same person.
  15. The face tracking system according to claim 14, characterized in that the preset threshold is 0.85.
  16. A computer readable storage medium, characterized in that a face tracking system is stored on the computer readable storage medium, and the face tracking system, when executed by a processor, implements the following steps:
    a face position determining step: acquiring a time series of captured face images, taking two adjacent frames of face images from the time series, and determining face regions in the two adjacent frames of face images;
    a similarity calculation step: calculating the similarity of the faces in the two adjacent frames of face images according to the X and Y coordinate values of the center points of the face regions in the two adjacent frames of face images, and the height H and width W values of the face regions;
    a face tracking judging step: performing face tracking based on the similarity of the faces in the two adjacent frames of face images.
  17. The computer readable storage medium according to claim 16, characterized in that the similarity calculation step comprises:
    Figure PCTCN2017108760-appb-100016
    where S_{i,j} is the similarity, and w_x, w_y, w_w, w_h are respectively the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames, with w_x, w_y, w_w, w_h ∈ [0,1], wherein:
    Figure PCTCN2017108760-appb-100017
    is the x-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100018
    is the y-direction distance between the center points of face i and face j;
    Figure PCTCN2017108760-appb-100019
    is the width difference between face i and face j;
    Figure PCTCN2017108760-appb-100020
    is the height difference between face i and face j.
  18. The computer readable storage medium according to claim 17, characterized in that the weights of the x-direction distance, the y-direction distance, the width difference and the height difference between face i and face j in the two adjacent frames are each 0.25.
  19. The computer readable storage medium according to any one of claims 16 to 18, characterized in that the face tracking judging step comprises:
    when the similarity of the faces in the two adjacent frames of face images is greater than or equal to a preset threshold, determining that the faces in the two adjacent frames of face images belong to the same person.
  20. The computer readable storage medium according to claim 19, characterized in that the preset threshold is 0.85.
PCT/CN2017/108760 2017-08-17 2017-10-31 Electronic device, face tracking method and system, and storage medium WO2019033575A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710709124.0 2017-08-17
CN201710709124.0A CN107633208B (en) 2017-08-17 2017-08-17 Electronic device, the method for face tracking and storage medium

Publications (1)

Publication Number Publication Date
WO2019033575A1 true WO2019033575A1 (en) 2019-02-21

Family

ID=61099694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108760 WO2019033575A1 (en) 2017-08-17 2017-10-31 Electronic device, face tracking method and system, and storage medium

Country Status (2)

Country Link
CN (1) CN107633208B (en)
WO (1) WO2019033575A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866773A (en) * 2020-08-21 2021-05-28 海信视像科技股份有限公司 Display device and camera tracking method in multi-person scene
CN113766260A (en) * 2021-08-24 2021-12-07 武汉瓯越网视有限公司 Face automatic exposure optimization method, storage medium, electronic device and system
CN114268737A (en) * 2021-12-06 2022-04-01 张岩 Automatic trigger method for shooting, certificate identification method, equipment and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161206A (en) * 2018-11-07 2020-05-15 杭州海康威视数字技术股份有限公司 Image capturing method, monitoring camera and monitoring system
TWI714318B (en) * 2019-10-25 2020-12-21 緯創資通股份有限公司 Face recognition method and face recognition apparatus
CN111579083B (en) * 2020-05-13 2022-06-07 芋头科技(杭州)有限公司 Body temperature measuring method and device based on infrared image face detection
CN112884961B (en) * 2021-01-21 2022-11-29 吉林省吉科软信息技术有限公司 Face recognition gate system for epidemic situation prevention and control
CN116152872A (en) * 2021-11-18 2023-05-23 北京眼神智能科技有限公司 Face tracking method, device, storage medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017933A1 (en) * 2002-04-12 2004-01-29 Canon Kabushiki Kaisha Face detection and tracking in a video sequence
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN103679125A (en) * 2012-09-24 2014-03-26 致伸科技股份有限公司 Human face tracking method
CN105069408A (en) * 2015-07-24 2015-11-18 上海依图网络科技有限公司 Video portrait tracking method based on human face identification in complex scenario
CN106778482A (en) * 2016-11-15 2017-05-31 东软集团股份有限公司 Face tracking methods and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7369687B2 (en) * 2002-11-21 2008-05-06 Advanced Telecommunications Research Institute International Method for extracting face position, program for causing computer to execute the method for extracting face position and apparatus for extracting face position
CN103268616B (en) * 2013-04-18 2015-11-25 北京工业大学 The moveable robot movement human body tracing method of multi-feature multi-sensor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017933A1 (en) * 2002-04-12 2004-01-29 Canon Kabushiki Kaisha Face detection and tracking in a video sequence
CN1794264A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and system of real time detecting and continuous tracing human face in video frequency sequence
CN103679125A (en) * 2012-09-24 2014-03-26 致伸科技股份有限公司 Human face tracking method
CN105069408A (en) * 2015-07-24 2015-11-18 上海依图网络科技有限公司 Video portrait tracking method based on human face identification in complex scenario
CN106778482A (en) * 2016-11-15 2017-05-31 东软集团股份有限公司 Face tracking methods and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866773A (en) * 2020-08-21 2021-05-28 海信视像科技股份有限公司 Display device and camera tracking method in multi-person scene
CN112866773B (en) * 2020-08-21 2023-09-26 海信视像科技股份有限公司 Display equipment and camera tracking method in multi-person scene
CN113766260A (en) * 2021-08-24 2021-12-07 武汉瓯越网视有限公司 Face automatic exposure optimization method, storage medium, electronic device and system
CN114268737A (en) * 2021-12-06 2022-04-01 张岩 Automatic trigger method for shooting, certificate identification method, equipment and storage medium

Also Published As

Publication number Publication date
CN107633208B (en) 2018-12-18
CN107633208A (en) 2018-01-26

Similar Documents

Publication Publication Date Title
WO2019033575A1 (en) Electronic device, face tracking method and system, and storage medium
WO2019033574A1 (en) Electronic device, dynamic video face recognition method and system, and storage medium
US10796179B2 (en) Living face verification method and device
WO2019100608A1 (en) Video capturing device, face recognition method, system, and computer-readable storage medium
WO2021217934A1 (en) Method and apparatus for monitoring number of livestock, and computer device and storage medium
WO2018232837A1 (en) Tracking photography method and tracking apparatus for moving target
WO2020184207A1 (en) Object tracking device and object tracking method
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
WO2021174789A1 (en) Feature extraction-based image recognition method and image recognition device
WO2021036373A1 (en) Target tracking method and device, and computer readable storage medium
WO2020233397A1 (en) Method and apparatus for detecting target in video, and computing device and storage medium
WO2016070300A1 (en) System and method for detecting genuine user
TWI798815B (en) Target re-identification method, device, and computer readable storage medium
CN110956131B (en) Single-target tracking method, device and system
US20200275017A1 (en) Tracking system and method thereof
CN109447022B (en) Lens type identification method and device
WO2020164284A1 (en) Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN106506982B (en) method and device for obtaining photometric parameters and terminal equipment
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN110245643B (en) Target tracking shooting method and device and electronic equipment
US20200211202A1 (en) Fall detection method, fall detection apparatus and electronic device
CN109753886B (en) Face image evaluation method, device and equipment
WO2022048578A1 (en) Image content detection method and apparatus, electronic device, and readable storage medium
CN112418153B (en) Image processing method, device, electronic equipment and computer storage medium
CN112215036B (en) Cross-mirror tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17921496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 24.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17921496

Country of ref document: EP

Kind code of ref document: A1