CN111709391A - Human face and human body matching method, device and equipment - Google Patents

Human face and human body matching method, device and equipment

Info

Publication number
CN111709391A
CN111709391A (application CN202010597640.0A)
Authority
CN
China
Prior art keywords
target
human
human body
face
tracking frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010597640.0A
Other languages
Chinese (zh)
Other versions
CN111709391B (en)
Inventor
蔡可杰
陆冠宇
何凯
陈青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN202010597640.0A priority Critical patent/CN111709391B/en
Publication of CN111709391A publication Critical patent/CN111709391A/en
Application granted granted Critical
Publication of CN111709391B publication Critical patent/CN111709391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human face and human body matching method, which comprises the following steps: acquiring a face target and a human body target in a picture, together with a face tracking frame and a human body tracking frame; acquiring matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij; and obtaining the overall matching degree F_ij between the human body target and the face target according to the matching parameters. The method provided by the invention can solve the problem of matching face targets with human body targets when a face or human body target is occluded in a high-density scene.

Description

Human face and human body matching method, device and equipment
Technical Field
The invention relates to the field of face recognition, in particular to a face and human body matching method, device and equipment.
Background
Video structuring is of great significance in the security field. As one form of video structuring, a face recognition system can automatically analyze face- and body-related attributes in images through face recognition algorithms, greatly reducing the labor of manually reviewing video footage after an incident. A face recognition system divides its analysis into face attribute analysis and human body attribute analysis. When a face recognition algorithm is applied, the face targets and human body targets in a scene must be detected and tracked separately. When attribute analysis is finally performed, the face and its corresponding human body are analyzed as a whole, and complete structured information for one person is output. The face and the human body therefore need to be matched and treated as one target. In high-density scenes, however, faces and human bodies partially occlude and overlap one another; face-body matching then frequently goes wrong, the attribute analysis results are wrong in turn, and the overall performance of the algorithm suffers.
Disclosure of Invention
In view of the above disadvantages of the prior art, an object of the present invention is to provide a human face and human body matching method, device and equipment that address the above drawbacks of the prior art.
In order to achieve the above objects and other related objects, the present invention provides a human face and body matching method, including:
acquiring a human face target and a human body target in a picture, and a human face tracking frame and a human body tracking frame;
acquiring matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
obtaining the overall matching degree F_ij between the human body target and the face target according to the matching parameters:
F_ij = αC_ij + βD_ij + γS_ij
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
Optionally, obtaining the coincidence degree C_ij of the face target and the human body target comprises:
acquiring the position of the face tracking frame and the position of the human body tracking frame;
acquiring the coincidence degree C_ij of the face target and the human body target according to the sizes and positions of the face tracking frame and the human body tracking frame:
C_ij = |(x_i - x_j) · (y_i - y_j)| / (w_i·h_i + w_j·h_j)
where x_i and y_i are the abscissa and ordinate of the human body tracking frame in the coordinate system, x_j and y_j are the abscissa and ordinate of the face tracking frame, w_i and h_i are the width and height of the human body tracking frame, and w_j and h_j are the width and height of the face tracking frame.
Optionally, obtaining the spatial correlation D_ij of the face target and the human body target comprises:
gridding the human body tracking frame to obtain a grid region consisting of a plurality of sub-grids;
assigning a weight to each sub-grid in the grid region;
obtaining a face spatial distribution probability density matrix W based on the weighted grid region;
obtaining the distribution vector G_ij of the face tracking frame within the human body tracking frame according to the position of the center of the face tracking frame in the grid region;
obtaining the spatial correlation D_ij of the face target and the human body target from the face spatial distribution probability density matrix and the distribution vector of the face tracking frame within the human body tracking frame:
D_ij = G_ij × W^T
where W^T is the transpose of W.
Optionally, the sub-grid containing the center of the face tracking frame is filled with 1 and the remaining sub-grids with 0, and the distribution vector G_ij of the face tracking frame within the human body tracking frame is obtained from the filled grid region.
Optionally, obtaining the temporal correlation S_ij of the face target and the human body target comprises:
acquiring the motion vector of the face target and the motion vector of the human body target;
acquiring the similarity between the motion vector of the face target and the motion vector of the human body target;
taking the similarity as the temporal correlation S_ij of the face target and the human body target.
Optionally, a difference method is adopted to obtain the motion vector of the face target and the motion vector of the human body target.
Optionally, the average motion vector of the same face target and/or human body target over 2 consecutive frames is used as its current motion vector.
Optionally, the Euclidean distance is adopted to obtain the similarity between the motion vector of the face target and the motion vector of the human body target.
To achieve the above and other related objects, the present invention provides a human face and body matching device, comprising:
the target detection module is used for acquiring a human face target and a human body target in the picture, and a human face tracking frame and a human body tracking frame;
a parameter obtaining module, used for obtaining matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
a matching module, used for obtaining the overall matching degree F_ij between the human body target and the face target according to the matching parameters:
F_ij = αC_ij + βD_ij + γS_ij
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
To achieve the above and other related objects, the present invention provides an apparatus comprising: a processor and a memory;
the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to cause the apparatus to perform the method.
As described above, the human face and body matching method, device and equipment of the invention have the following beneficial effects:
the invention discloses a human face and human body matching method, which comprises the steps of obtaining a human face target, a human body target, a human face tracking frame and a human body tracking frame in a picture; acquiring matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the contact ratio C of the face target and the human body targetijSpatial correlation DijAnd a time dependence Sij(ii) a Obtaining the total matching degree F between the human body target and the human face target according to the matching parametersij. The method provided by the invention can solve the problem of matching the human face target and the human body target under the condition that the human face or the human body target is shielded in a high-density scene.
Drawings
FIG. 1 is a flowchart of a human face and body matching method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating frame 596 of a high-density monitoring scene according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating frame 597 of a high-density monitoring scene according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a human face and human body matching device according to an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
As shown in fig. 1, the present embodiment provides a human face and human body matching method, including:
S11, acquiring a face target and a human body target in the picture, together with a face tracking frame and a human body tracking frame;
S12, acquiring matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
S13, obtaining the overall matching degree F_ij between the human body target and the face target according to the matching parameters:
F_ij = αC_ij + βD_ij + γS_ij
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
Given the distribution of face and human body spatial positions, the human body tracking frame and the face tracking frame of the same person generally overlap each other in the image coordinate system, so the coincidence degree of the face tracking frame and the human body tracking frame can serve as one factor for judging whether a face and a human body match. If the coincidence degree is 0, the face target is ruled out as a possible match, further operations on that face target and on the current human body target are stopped, and the complexity of the matching computation is reduced. Accordingly, in one embodiment, obtaining the coincidence degree C_ij of the face target and the human body target comprises:
acquiring the position of the face tracking frame and the position of the human body tracking frame;
acquiring the coincidence degree C_ij of the face target and the human body target according to the sizes and positions of the face tracking frame and the human body tracking frame:
C_ij = |(x_i - x_j) · (y_i - y_j)| / (w_i·h_i + w_j·h_j)   (Formula 1)
where x_i and y_i are the abscissa and ordinate of the human body tracking frame in the coordinate system, x_j and y_j are the abscissa and ordinate of the face tracking frame, w_i and h_i are the width and height of the human body tracking frame, and w_j and h_j are the width and height of the face tracking frame.
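As a minimal illustration of Formula 1, the coincidence degree can be computed directly from two tracking frames. The Python sketch below is not part of the patent; the (x, y, w, h) tuple representation with (x, y) as the upper-left corner follows the notation above, and the function name is an assumption.

def coincidence_degree(body, face):
    """Coincidence degree C_ij of Formula 1.

    body, face: tracking frames as (x, y, w, h) tuples, with (x, y) the
    upper-left corner in image coordinates (illustrative representation).
    """
    xi, yi, wi, hi = body
    xj, yj, wj, hj = face
    # C_ij = |(x_i - x_j) * (y_i - y_j)| / (w_i*h_i + w_j*h_j)
    return abs((xi - xj) * (yi - yj)) / (wi * hi + wj * hj)

A result of 0 lets a caller discard the pair before the costlier spatial and temporal indices are computed, matching the pruning described above.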
In one embodiment, the correlation of the face target and the human body target in spatial distribution is calculated from the prior knowledge that the spatial distribution of the face relative to the human body is regular. The human body tracking frame returned by the human body detection algorithm is divided into an n × n grid region, and each sub-grid is given a different weight. Specifically, a 3 × 3 grid region may be selected; the resulting scoring matrix is shown below. The score of a face target lying in a given sub-region is the score of that sub-region. It will be appreciated that the final score D_ij is highest when the face tracking frame lies in the middle of the upper third of the human body tracking frame and lowest when it lies in the lower third. The template thus screens out the face target with the highest score on the spatial distribution index with respect to the human body target. In the present invention, D_ij denotes the spatial correlation between the face target and the human body target, and its calculation formula is obtained from the self-defined template, the face spatial distribution probability density matrix being:
    [ w_00  w_01  w_02 ]
W = [ w_10  w_11  w_12 ]
    [ w_20  w_21  w_22 ]
for computational convenience, the probability density matrix is represented in a vector manner:
W = [w_00, w_01, w_02, w_10, w_11, w_12, w_20, w_21, w_22]
face tracking frame PiIf the center of (x, y, w, h) is in a certain sub-grid of the 3 × 3 grid of the human body tracking frame, the sub-grid is filled with 1, and the rest sub-areas are filled with 0, so that the distribution vector G of the tracking frame of the human body target i in the tracking frame of the human face target j is obtainedij. The final result, WTIs a transposed matrix of W to obtain DijThe calculation formula of (a) is as follows:
D_ij = G_ij × W^T   (Formula 2)
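As a sketch of Formula 2 under the 3 × 3 template, the Python snippet below uses the embodiment's Table 1 weights; the helper names are assumptions, not an API from the patent.

import numpy as np

# Face spatial distribution probability density matrix of the embodiment
# (Table 1), flattened row by row into the vector form W.
W = np.array([0.10, 0.40, 0.10,
              0.05, 0.20, 0.05,
              0.03, 0.04, 0.03])

def distribution_vector(body, face_center, n=3):
    """G_ij: one-hot vector over the n x n sub-grids of the human body
    tracking frame, marking the sub-grid containing the center of the
    face tracking frame (all zeros if the center lies outside the frame)."""
    x, y, w, h = body
    cx, cy = face_center
    g = np.zeros(n * n)
    if x <= cx < x + w and y <= cy < y + h:
        col = int((cx - x) * n / w)
        row = int((cy - y) * n / h)
        g[row * n + col] = 1.0
    return g

def spatial_correlation(body, face_center):
    """D_ij = G_ij x W^T (Formula 2)."""
    return float(distribution_vector(body, face_center) @ W)

Because G_ij is one-hot, D_ij reduces to the weight of the sub-grid the face center falls in: 0.4 for the middle of the upper third of the body frame, and 0 whenever the face center lies outside it.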
It will be appreciated that the human body and the face are different parts of one overall target, so the motion of the two has uniform characteristics. Whether a human body target and a face target share a uniform motion vector can therefore be judged from the distance between their motion vectors. The motion vector of a moving target, i.e. the motion vector of the face target or the human body target, is calculated by a difference method; specifically, the average motion vector of the same target over 2 consecutive frames can be used as its current motion vector. Suppose the position of the human body tracking frame in the image coordinate system is denoted P_i(x_i, y_i, w_i, h_i), where x_i and y_i are the abscissa and ordinate of the upper-left corner of the frame and w_i and h_i are the width and height of the frame; similarly, the position of the face tracking frame in the image coordinate system is denoted P_j(x_j, y_j, w_j, h_j), where x_j and y_j are the abscissa and ordinate of the upper-left corner and w_j and h_j are the width and height. The mean motion vector V_i of the human body target i between 2 consecutive frame images is calculated by the kinematic formula shown in Formula 3, where the superscript n denotes the current frame number and Δt the frame interval between 2 consecutive frames. The motion vector V_j of the face target j is calculated in the same way.
V_i = ((x_i^n - x_i^(n-1)) / Δt, (y_i^n - y_i^(n-1)) / Δt)   (Formula 3)
The temporal correlation between the human body target and the face target is judged by calculating the distance between the motion vector of the human body target and that of the face target. The invention measures the similarity S_ij between the two vectors by the Euclidean distance, and this similarity is the temporal correlation between the face target and the human body target, calculated as follows:
S_ij = |V_i - V_j|   (Formula 4)
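The temporal index can be sketched the same way in Python. The division by Δt follows the reconstruction of Formula 3 above and is an assumption; note also that the embodiment's S_ij values suggest the raw distance of Formula 4 is mapped onto a similarity score by some normalization the text does not spell out, so the sketch stops at the plain distance.

import numpy as np

def motion_vector(prev_pos, curr_pos, dt):
    """Difference-method motion vector over 2 consecutive frames
    (Formula 3 as reconstructed): displacement of the tracking frame's
    upper-left corner divided by the frame interval dt."""
    (x0, y0), (x1, y1) = prev_pos[:2], curr_pos[:2]
    return np.array([(x1 - x0) / dt, (y1 - y0) / dt])

def motion_distance(v_body, v_face):
    """Euclidean distance |V_i - V_j| between two motion vectors (Formula 4)."""
    return float(np.linalg.norm(np.asarray(v_body) - np.asarray(v_face)))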
In a specific embodiment, a face recognition algorithm is run on video images of width × height 1920 × 1080. The camera is aimed at the dense pedestrian flow at a railway station exit. In such a high-density railway station scene, the human bodies and faces of many targets in the video are heavily occluded and the two kinds of tracking frames overlap one another, which further increases the difficulty of matching human body targets with face targets. The matching process of the face targets and human body targets in this embodiment is described in detail below. The human body tracking frames and face tracking frames are drawn as rectangular boxes.
For example, 2 human body targets and 3 face targets are selected in two consecutive images of the sequence (as shown in Fig. 2 and Fig. 3). At frame 596, the coordinates of the 2 selected human body targets are (786,434) and (967,528), and the coordinates of the 3 selected face targets are (820,560), (1073,590) and (593,367). At frame 597, the corresponding human body target coordinates are (795,439) and (980,549), and the face target coordinates are (829,566), (1094,630) and (640,412).
The coincidence degree between a human body target tracking frame and a face target tracking frame is denoted C_ij, where i is the ID of the human body target and j the ID of the face target. If the coincidence degree C_ij between two tracking frames is 0, face target j is ignored and matching proceeds directly to the next face target j+1. From Formula 1, the coincidence degrees between the 2 selected human body targets and the 3 face targets are C_00 = 1, C_01 = 0.2, C_02 = 0, C_10 = 0.1, C_11 = 1 and C_12 = 0.
The spatial correlation between a face target tracking frame and a human body target tracking frame is denoted D_ij, and the spatial correlation between human body target i and face target j is calculated according to the self-defined template. The face spatial distribution probability density matrix determined in this embodiment is shown in Table 1; converted to vector form it is W = [0.1, 0.4, 0.1, 0.05, 0.2, 0.05, 0.03, 0.04, 0.03]. From Formula 2, the spatial correlations between human body i and face j are D_00 = 0.4, D_01 = 0.03, D_02 = 0, D_10 = 0, D_11 = 0.2 and D_12 = 0.
TABLE 1
0.1 0.4 0.1
0.05 0.2 0.05
0.03 0.04 0.03
The temporal correlation S_ij between human body target i and face target j is calculated as the Euclidean distance between their motion vectors. The processing frame rate of the algorithm module in this embodiment is 8 frames per second, i.e. Δt = 0.125 seconds. Over the two consecutive frames shown in Fig. 2 and Fig. 3, the motion vectors of the 2 human body targets are V_0 = (9,5) and V_1 = (13,21), and the motion vectors of the 3 face targets are V_0 = (9,6), V_1 = (21,40) and V_2 = (47,45). From Formula 4, the similarities between the motion vectors of the face targets and the human body targets are S_00 = 0.6, S_01 = 0.1, S_02 = 0, S_10 = 0, S_11 = 0.4 and S_12 = 0.
By combining the above index results, according to formula 5,
F_ij = αC_ij + βD_ij + γS_ij   (Formula 5)
different weight coefficients are assigned to the different indices, and the total score of each face target that overlaps the neighborhood of a human body target is calculated. The weight coefficients α, β and γ are assigned empirical values of 0.4, 0.4 and 0.2, respectively. The composite score between each pair of the two kinds of targets is calculated; the scores are then sorted in descending order, and the face target with the highest score is selected as the best matching face target j for the current human body target i. As can be seen from Table 2, the final matching results agree with the actual faces and human bodies in the video, indicating that the method achieves correct matching between face targets and human body targets in high-density scenes.
TABLE 2 scoring table of human face and human body matching evaluation function
[Table 2: composite scores F_ij and matching results (TRUE/FALSE) for each human body target i and face target j]
Note: in the table, TRUE indicates that targets i and j matched successfully, and FALSE indicates that targets i and j failed to match.
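Pulling the three indices together, the selection step above can be reproduced as a per-body argmax over F_ij with zero-coincidence pairs ruled out. This Python sketch is not the patent's implementation; the matrices restate the embodiment's worked values, and β = 0.4 is an assumption, since the middle coefficient is garbled in the source text.

import numpy as np

def match(C, D, S, alpha=0.4, beta=0.4, gamma=0.2):
    """Best face target j for each human body target i by
    F_ij = alpha*C_ij + beta*D_ij + gamma*S_ij (Formula 5).
    Pairs with C_ij = 0 are excluded up front, as described in the text;
    beta = 0.4 is an assumed value."""
    F = alpha * C + beta * D + gamma * S
    F = np.where(C == 0, -np.inf, F)  # zero coincidence rules the pair out
    return {i: int(np.argmax(F[i]))
            for i in range(F.shape[0]) if np.isfinite(F[i]).any()}

# Worked values from the embodiment: rows are human body targets 0-1,
# columns are face targets 0-2.
C = np.array([[1.0, 0.2, 0.0], [0.1, 1.0, 0.0]])
D = np.array([[0.4, 0.03, 0.0], [0.0, 0.2, 0.0]])
S = np.array([[0.6, 0.1, 0.0], [0.0, 0.4, 0.0]])
print(match(C, D, S))  # {0: 0, 1: 1}: body 0 pairs with face 0, body 1 with face 1

A per-body argmax can in principle hand the same face to two bodies; the descending-sort selection described above resolves such collisions greedily, though the text does not detail tie handling.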
Compared with the traditional approach of matching based on the spatial distance between the face target and the human body target, the method provided by the invention can solve the problem of matching face targets with human body targets when a face or human body target is occluded in a high-density scene. It is particularly suitable for scenes with heavy pedestrian traffic, such as railway stations and airports, and avoids the erroneous pedestrian attribute analysis caused by matching errors.
As shown in FIG. 4, a human face and human body matching device includes:
the target detection module 41 is used for acquiring a human face target and a human body target in the picture, and a human face tracking frame and a human body tracking frame;
a parameter obtaining module 42, used for obtaining matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
The matching module 43 is configured to obtain an overall matching degree between the human body target and the human face target according to the matching parameters;
F_ij = αC_ij + βD_ij + γS_ij
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
Since the embodiment of the apparatus portion and the embodiment of the method portion correspond to each other, please refer to the description of the embodiment of the method portion for the content of the embodiment of the apparatus portion, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may comprise any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A human face and human body matching method is characterized by comprising the following steps:
acquiring a human face target and a human body target in a picture, and a human face tracking frame and a human body tracking frame;
acquiring matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
obtaining the overall matching degree F_ij between the human body target and the face target according to the matching parameters:
F_ij = αC_ij + βD_ij + γS_ij
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
2. The human face and human body matching method according to claim 1, wherein obtaining the coincidence degree C_ij of the face target and the human body target comprises:
acquiring the position of the face tracking frame and the position of the human body tracking frame;
acquiring the coincidence degree C_ij of the face target and the human body target according to the sizes and positions of the face tracking frame and the human body tracking frame:
C_ij = |(x_i - x_j) · (y_i - y_j)| / (w_i·h_i + w_j·h_j)
where x_i and y_i are the abscissa and ordinate of the human body tracking frame in the coordinate system, x_j and y_j are the abscissa and ordinate of the face tracking frame, w_i and h_i are the width and height of the human body tracking frame, and w_j and h_j are the width and height of the face tracking frame.
3. The human face and human body matching method according to claim 1, wherein obtaining the spatial correlation D_ij of the face target and the human body target comprises:
gridding the human body tracking frame to obtain a grid region consisting of a plurality of sub-grids;
assigning a weight to each sub-grid in the grid region;
obtaining a face spatial distribution probability density matrix W based on the weighted grid region;
obtaining the distribution vector G_ij of the face tracking frame within the human body tracking frame according to the position of the center of the face tracking frame in the grid region;
obtaining the spatial correlation D_ij of the face target and the human body target from the face spatial distribution probability density matrix and the distribution vector of the face tracking frame within the human body tracking frame:
D_ij = G_ij × W^T
where W^T is the transpose of W.
4. The human face and human body matching method according to claim 3, wherein the sub-grid containing the center of the face tracking frame is filled with 1 and the remaining sub-grids with 0, and the distribution vector G_ij of the face tracking frame within the human body tracking frame is obtained from the filled grid region.
5. The human face and human body matching method according to claim 1, wherein obtaining the temporal correlation S_ij of the face target and the human body target comprises:
acquiring a motion vector of the human face target and a motion vector of the human body target;
acquiring the similarity between the motion vector of the human face target and the motion vector of the human body target;
taking the similarity as the temporal correlation S_ij of the face target and the human body target.
6. The human face and body matching method according to claim 5, wherein the motion vector of the human face target and the motion vector of the human body target are obtained by a difference method.
7. The human face and human body matching method according to claim 6, wherein the average motion vector of the same face target and/or human body target over 2 consecutive frames is used as its current motion vector.
8. The human face and human body matching method according to claim 5, wherein the Euclidean distance is adopted to obtain the similarity between the motion vector of the human face target and the motion vector of the human body target.
9. A human face and body matching device is characterized by comprising:
the target detection module is used for acquiring a human face target and a human body target in the picture, and a human face tracking frame and a human body tracking frame;
a parameter obtaining module, used for obtaining matching parameters based on the face tracking frame and the human body tracking frame, wherein the matching parameters comprise the coincidence degree C_ij of the face target and the human body target, the spatial correlation D_ij and the temporal correlation S_ij;
The matching module is used for obtaining the overall matching degree between the human body target and the human face target according to the matching parameters;
wherein F_ij = αC_ij + βD_ij + γS_ij,
where α is the coincidence degree coefficient, β is the spatial correlation coefficient, and γ is the temporal correlation coefficient.
10. An apparatus, comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory to cause the apparatus to perform the method of any one of claims 1 to 8.
CN202010597640.0A 2020-06-28 2020-06-28 Human face and human body matching method, device and equipment Active CN111709391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010597640.0A CN111709391B (en) 2020-06-28 2020-06-28 Human face and human body matching method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010597640.0A CN111709391B (en) 2020-06-28 2020-06-28 Human face and human body matching method, device and equipment

Publications (2)

Publication Number Publication Date
CN111709391A (en) 2020-09-25
CN111709391B CN111709391B (en) 2022-12-02

Family

ID=72543717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010597640.0A Active CN111709391B (en) 2020-06-28 2020-06-28 Human face and human body matching method, device and equipment

Country Status (1)

Country Link
CN (1) CN111709391B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205694A1 (en) * 2017-12-28 2019-07-04 Qualcomm Incorporated Multi-resolution feature description for object recognition
CN109344787A (en) * 2018-10-15 2019-02-15 浙江工业大学 A kind of specific objective tracking identified again based on recognition of face and pedestrian
CN109523573A (en) * 2018-11-23 2019-03-26 上海新世纪机器人有限公司 The tracking and device of target object
CN109819208A (en) * 2019-01-02 2019-05-28 江苏警官学院 A kind of dense population security monitoring management method based on artificial intelligence dynamic monitoring
CN110427905A (en) * 2019-08-08 2019-11-08 北京百度网讯科技有限公司 Pedestrian tracting method, device and terminal
CN111209869A (en) * 2020-01-08 2020-05-29 重庆紫光华山智安科技有限公司 Target following display method, system, equipment and medium based on video monitoring

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOJING CHEN et al.: "Reference set based appearance model for tracking across non-overlapping cameras", 2013 Seventh International Conference on Distributed Smart Cameras (ICDSC) *
LI LING (李玲): "Research on real-time tracking algorithms based on correlation filters", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381448A (en) * 2020-11-30 2021-02-19 深圳云天励飞技术股份有限公司 Teaching quality assessment method and device based on human face space-time characteristics and electronic equipment
CN112381448B (en) * 2020-11-30 2023-10-13 深圳云天励飞技术股份有限公司 Teaching quality assessment method and device based on face space-time characteristics and electronic equipment
WO2022198821A1 (en) * 2021-03-25 2022-09-29 深圳市商汤科技有限公司 Method and apparatus for performing matching between human face and human body, and electronic device, storage medium and program
CN113793363A (en) * 2021-09-27 2021-12-14 重庆紫光华山智安科技有限公司 Target tracking method and related device
CN114783043A (en) * 2022-06-24 2022-07-22 杭州安果儿智能科技有限公司 Child behavior track positioning method and system

Also Published As

Publication number Publication date
CN111709391B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN111709391B (en) Human face and human body matching method, device and equipment
US8179440B2 (en) Method and system for object surveillance and real time activity recognition
CN110503108A (en) Architecture against regulations recognition methods and device, storage medium, computer equipment
CN109684969B (en) Gaze position estimation method, computer device, and storage medium
CN110598610A (en) Target significance detection method based on neural selection attention
CN105809716B (en) Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method
JP7292492B2 (en) Object tracking method and device, storage medium and computer program
KR101409810B1 (en) Real-time object tracking method in moving camera by using particle filter
CN106570480A (en) Posture-recognition-based method for human movement classification
CN109726746A (en) A kind of method and device of template matching
CN112541434B (en) Face recognition method based on central point tracking model
CN113591763B (en) Classification recognition method and device for face shapes, storage medium and computer equipment
CN103955682A (en) Behavior recognition method and device based on SURF interest points
CN111414803A (en) Face recognition method and device and electronic equipment
CN115588190A (en) Mature fruit identification and picking point positioning method and device
CN112132130A (en) Real-time license plate detection method and system for whole scene
CN110427912A (en) A kind of method for detecting human face and its relevant apparatus based on deep learning
CN111353385B (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN109784230A (en) A kind of facial video image quality optimization method, system and equipment
CN105930789B (en) Human bodys' response based on logarithm theorem in Euclid space bag of words
CN114332602A (en) Commodity identification method of intelligent container
CN116958873A (en) Pedestrian tracking method, device, electronic equipment and readable storage medium
CN106295693B (en) A kind of image-recognizing method and device
CN109657083B (en) Method and device for establishing textile picture feature library
CN116342519A (en) Image processing method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant