CN110969045B - Behavior detection method and device, electronic equipment and storage medium - Google Patents

Behavior detection method and device, electronic equipment and storage medium

Info

Publication number
CN110969045B
CN110969045B
Authority
CN
China
Prior art keywords
face
behavior
area
identity information
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811139415.1A
Other languages
Chinese (zh)
Other versions
CN110969045A (en)
Inventor
袁延金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201811139415.1A
Publication of CN110969045A
Application granted
Publication of CN110969045B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a behavior detection method and device, an electronic device, and a storage medium. The method comprises the following steps: obtaining a target picture; determining a face area and a human body area in the target picture; performing face recognition on the face area and determining identity information corresponding to the face area; performing behavior detection on the human body area and determining a behavior type corresponding to the human body area; calculating the overlapping degree of the face area and the human body area; and, if the overlapping degree is greater than a preset threshold, establishing a correspondence between the behavior type and the identity information. With this behavior detection method, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior can be determined. In other words, it is possible to know exactly who performed the behavior, which allows individual behavior to be analyzed and recorded more effectively.

Description

Behavior detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a behavior detection method, a behavior detection device, an electronic device, and a storage medium.
Background
With the continuous development of computers and artificial intelligence, large amounts of human behavior data can be analyzed by computers. This helps researchers and managers analyze people's behaviors and the logic behind them, enabling more reasonable personnel management strategies; for example, detecting student behavior can help formulate more reasonable teaching modes and teaching plans.
In the prior art, behavior detection for persons is in many cases performed in isolation: the behavior itself is detected, but it cannot be accurately matched to the identity of the person performing it. As a result, behavior analysis and detection cannot be targeted at specific persons, which greatly limits the application range of behavior detection technology.
Disclosure of Invention
The embodiment of the invention aims to provide a behavior detection method and device, an electronic device, and a storage medium, so as to accurately match the identity information of the person performing a behavior during behavior detection. The specific technical scheme is as follows:
the embodiment of the invention provides a behavior detection method, which comprises the following steps:
Obtaining a target picture;
determining a human face area and a human body area in the target picture;
performing face recognition on the face area, and determining identity information corresponding to the face area;
performing behavior detection on the human body area, and determining a behavior type corresponding to the human body area;
calculating the overlapping degree of the human face area and the human body area;
and if the overlapping degree is larger than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
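For orientation, the overall flow of these steps can be sketched in Python as follows. This is an illustrative outline only, not part of the patent text; detect_faces, detect_bodies, recognize_identity, classify_behavior, and compute_overlap are hypothetical placeholders for the steps detailed in the embodiments below.

```python
# Hedged sketch of the claimed flow; all five helper functions are hypothetical
# placeholders standing in for the detection/recognition steps described below.

def behavior_detection(target_picture, preset_threshold=0.5):
    """Pair each recognized identity with a detected behavior when the face
    area and human body area overlap by more than the preset threshold."""
    results = []
    for face_area in detect_faces(target_picture):             # face areas
        identity = recognize_identity(target_picture, face_area)
        for body_area in detect_bodies(target_picture):        # body areas
            behavior = classify_behavior(target_picture, body_area)
            if compute_overlap(face_area, body_area) > preset_threshold:
                results.append((identity, behavior))           # correspondence
    return results
```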
Optionally, the determining the face area and the body area in the target picture includes:
detecting a human face and a human body contained in the target picture respectively;
determining the human face through a first rectangular frame, and taking an image range determined by four vertex coordinates of the first rectangular frame as the human face area;
determining the human body through a second rectangular frame, and taking the image range determined by four vertex coordinates of the second rectangular frame as the human body area;
the calculating the overlapping degree of the human face region and the human body region comprises the following steps:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, the performing face recognition on the face area, determining identity information corresponding to the face area, includes:
according to the face area, a face model corresponding to the face area is established;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, after the calculating the overlapping degree of the face region and the human body region, the method further includes:
determining correction parameters corresponding to the behavior types according to a preset algorithm, wherein the preset algorithm is used for determining different correction parameters corresponding to different behavior types;
if the overlapping degree is greater than a preset threshold, establishing a corresponding relationship between the behavior type and the identity information, including:
and correcting the overlapping degree through the correction parameters, and if the corrected overlapping degree is larger than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
Optionally, the determining, according to a preset algorithm, a correction parameter corresponding to the behavior type includes:
if the behavior type is a behavior in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is a behavior in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is a behavior in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, the establishing the correspondence between the behavior type and the identity information includes:
determining a target person corresponding to the face area through the identity information;
and taking the behavior type as the behavior type of the target person.
The embodiment of the invention also provides a behavior detection device, which comprises:
the acquisition module is used for acquiring the target picture;
the identification module is used for determining a human face area and a human body area in the target picture;
the first detection module is used for carrying out face recognition on the face area and determining identity information corresponding to the face area;
the second detection module is used for detecting the behaviors of the human body area and determining the corresponding behavior type of the human body area;
The computing module is used for computing the overlapping degree of the human face area and the human body area;
and the matching module is used for establishing a corresponding relation between the behavior type and the identity information if the overlapping degree is larger than a preset threshold value.
Optionally, the identification module is specifically configured to:
detecting a human face and a human body contained in the target picture respectively;
determining the human face through a first rectangular frame, and taking an image range determined by four vertex coordinates of the first rectangular frame as the human face area;
determining the human body through a second rectangular frame, and taking the image range determined by four vertex coordinates of the second rectangular frame as the human body area;
the computing module is specifically configured to:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, the first detection module is specifically configured to:
according to the face area, a face model corresponding to the face area is established;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, the behavior detection device of the embodiment of the present invention further includes:
the correction module is used for determining correction parameters corresponding to the behavior types according to a preset algorithm, wherein the preset algorithm is used for determining different correction parameters corresponding to different behavior types;
the matching module is specifically configured to correct the overlapping degree according to the correction parameter, and if the corrected overlapping degree is greater than a preset threshold, establish a corresponding relationship between the behavior type and the identity information.
Optionally, the correction module is specifically configured to:
if the behavior type is a behavior in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is a behavior in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is a behavior in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, the matching module is specifically configured to:
determining a target person corresponding to the face area through the identity information;
and taking the behavior type as the behavior type of the target person.
The embodiment of the invention also provides an electronic device, which comprises a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the behavior detection methods when executing the program stored in the memory.
The embodiment of the invention also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes the behavior detection method when being executed by a processor.
The embodiments of the present invention also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the above-described behavior detection methods.
According to the behavior detection method and device, electronic device, and storage medium provided by the embodiments of the invention, after the target picture is obtained, the face area and the human body area can be determined in the target picture and then recognized and detected respectively, so that the identity information corresponding to the face area and the behavior type corresponding to the human body area can be determined. The overlapping degree of the face area and the human body area is then calculated, and if the overlapping degree is greater than a preset threshold, a correspondence between the behavior type and the identity information is established. Therefore, when a behavior is detected, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior can be determined; in other words, it is possible to know exactly who performed the behavior. Individual behavior can thus be analyzed and recorded more effectively. Of course, it is not necessary for any one product or method of practicing the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention; other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1a is a flowchart of a behavior detection method according to an embodiment of the present invention;
FIG. 1b is a flowchart of another method for detecting behavior according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a face area and a human body area according to an embodiment of the present invention;
FIG. 3a is a block diagram of a behavior detection apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another embodiment of a behavior detection apparatus according to the present invention;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1a, fig. 1a is a flowchart of a behavior detection method according to an embodiment of the present invention, including:
step 110, a target picture is acquired.
The behavior detection method provided by the embodiment of the invention can be applied to various electronic devices with data processing capability, such as computers and servers, and in particular to monitoring devices such as cameras, or to NVRs (Network Video Recorders), DVRs (Digital Video Recorders), servers, and monitoring systems connected to cameras.
The electronic device can obtain the target picture in a variety of ways. For example, it may acquire the target picture through a connected video capturing device such as a camera or a snapshot machine, receive an input target picture through other input modes, or read the target picture from its own memory (for example, a hard disk) or from other devices. The target picture contains all persons and objects within the shooting range. Therefore, when the target picture contains one or more persons requiring behavior detection, it at least contains the faces and all or part of the bodies of those persons.
In practical applications, the obtained target picture differs with the application scenario. For example, when performing behavior detection on students, the target picture may be an image of students in a classroom during class; when performing behavior detection on persons in other public areas, the target picture may be a real-time image of the public area.
Step 120, in the target picture, a face area and a body area are determined.
After the electronic device acquires the target picture, the electronic device can recognize and detect the target picture in various modes, so that the face area and the human body area are determined respectively.
Specifically, the electronic device may determine the face area in the target picture by using various existing face detection methods, such as a head-shoulder separation method, an overall contour method, a mosaic method, and so on. The face region refers to an image region containing the face of a person.
When there are multiple faces in the target picture, the face area of each face can be determined. Preferably, in order to improve the accuracy of face detection, a deep learning method may be used to detect the target picture, so that each face area can be determined more quickly and accurately. The determined face area may be a coordinate range; for example, the face area may be determined by frame selection, with the frame-selected coordinate range being the face area.
Similarly, the electronic device may also adopt a plurality of existing detection methods, for example, a deep learning method, to determine the human body region in the target picture. The human body region refers to an image region containing a body part of a person. When the body parts of a plurality of persons exist in the target picture, a plurality of human body areas are correspondingly determined. The human body region may also be a coordinate range, for example, the human body region may be determined by a frame selection manner, and the frame-selected coordinate range is the human body region.
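As a minimal sketch of the frame-selected coordinate ranges just described (the Box class, field names, and coordinate convention are illustrative assumptions, not defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned rectangle in image coordinates: (x1, y1) is the top-left
    vertex and (x2, y2) the bottom-right vertex, with y growing downward."""
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

# A frame-selected face area and human body area, as produced by the detectors.
face_area = Box(120, 40, 180, 110)
body_area = Box(100, 60, 210, 320)
```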
And 130, carrying out face recognition on the face area, and determining identity information corresponding to the face area.
After the face area is determined from the target picture, various face recognition methods can be adopted to recognize the face in the face area. For example, the characteristic point parameters of the face can be obtained in the face region in the target picture, and then the characteristic point parameters are matched in a preset database containing the personal identity information, so that the identity information corresponding to the face region is determined.
Of course, various face recognition technologies exist in the prior art, and corresponding identity information can be determined through faces in the face area. Various methods in the prior art can be applied to the embodiments of the present invention as long as the functions in the embodiments of the present invention can be implemented. The identity information may contain a variety of information types, such as name, contact, and so forth. In particular, in different application scenarios, the identity information may not be identical, for example, when detecting the behaviour of the student, the identity information may include name, age, class, etc.
In the embodiment of the present invention, in order to more accurately and more quickly determine identity information corresponding to a face area, step 130, performing face recognition on the face area, and determining the identity information corresponding to the face area may include:
step 131, according to the face area, a face model corresponding to the face area is established.
After the electronic device obtains the face area determined in the target picture, a modeling method can be used to establish a face model corresponding to the face area. For example, corresponding feature points and related parameters, such as the proportions and relative positions of the facial features, may be acquired for the face in the face area, and a face model corresponding to the face area is established from these feature points and related parameters. The face model is a digital model that reflects the characteristics of the face.
Step 132, comparing the face model with the preset face model stored in the preset database, and determining a target preset face model matched with the face model.
The preset database stores a large number of preset face models and identity information corresponding to each preset face model.
For example, when behavior detection is performed on students, the identity information and face images of the students may be acquired in advance. A preset face model is then generated for each student from the acquired face images, and each student's preset face model and corresponding identity information are stored in the preset database, so that each student's identity information corresponds to that student's preset face model.
Alternatively, when performing behavior detection on company employees, identity information and face images of the respective employees may be acquired in advance. And generating a preset face model corresponding to each employee, and storing the preset face model and corresponding identity information in a preset database.
After the face model corresponding to the face region is established, the face model can be compared with the preset face model stored in the preset database, so that a target preset face model matched with the face model can be determined in a large number of preset face models. For example, if the feature point parameter of the face model is the same as the feature point parameter of a preset face model, the preset face model may be considered as a target preset face model that matches the face model. In the embodiment of the invention, in order to more accurately realize the comparison between the face model and the preset face model in the preset database, the neural network model obtained by training through a deep learning method can be adopted to realize the comparison between the face model and the preset face model, so that the comparison can be more accurately and rapidly completed.
And 133, taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
After the target preset face model matched with the face model is determined, it indicates that the face in the face area corresponding to the face model and the person corresponding to the target preset face model are the same person. Therefore, the identity information corresponding to the target preset face model can be used as the identity information corresponding to the face area. In this way, the identity information of the face in the face area is determined.
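A hedged sketch of steps 131 to 133 follows: the face model is reduced to a feature vector and matched against the preset database by nearest-neighbour distance. The database contents, feature values, and distance threshold are invented for illustration; the patent does not prescribe this representation.

```python
import math

# Hypothetical preset database: identity information -> preset face model,
# where a face model is simplified to a fixed-length feature vector.
PRESET_DB = {
    ("Xiao A", "Class 3-1"): [0.12, 0.87, 0.33, 0.55],
    ("Xiao B", "Class 3-1"): [0.91, 0.10, 0.44, 0.27],
}

def match_identity(face_model, max_distance=0.5):
    """Return the identity whose preset face model is closest to face_model,
    or None when no preset model matches closely enough."""
    best_identity, best_dist = None, float("inf")
    for identity, preset_model in PRESET_DB.items():
        dist = math.dist(face_model, preset_model)
        if dist < best_dist:
            best_identity, best_dist = identity, dist
    return best_identity if best_dist <= max_distance else None
```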
And 140, performing behavior detection on the human body area to determine the behavior type corresponding to the human body area.
For each human body area determined in the target picture, the electronic device detects the behavior of the human body in that area using one of various behavior detection methods, so as to obtain the behavior type; after behavior detection, each human body area has a corresponding behavior type.
Specifically, a deep learning method may be used to perform behavior detection on a human body area.
By training a neural network with a large number of samples, a target behavior detection network that can detect the action types of a person can be obtained. For example, the target behavior detection network may be trained with a large number of sample pictures for each action type, enabling it to identify the various action types.
The target picture, or a human body picture containing a human body area extracted from the target picture, may be input to the target behavior detection network. The network then detects features such as the posture, motion, and range of body movement of the target person in the target picture or human body picture, based on the features of each action type learned during training, and thereby determines the behavior type of the target person.
The detected behavior types are not exactly the same in different application scenarios. For example, when performing behavior detection on students, the behavior types may include: student orientation, student posture, listening, reading, writing, raising a hand, dozing, playing with a mobile phone, lying prone on a desk, and the like. When performing behavior detection on company employees, the behavior types may include: working, not working, standing while working, and the like.
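As an illustrative sketch of the inference step (the network object, its predict() interface, and the label set are assumptions matching the student scenario above, not an API defined by the patent):

```python
# Assumed label set for the student scenario; a real deployment would use
# whatever action types the target behavior detection network was trained on.
BEHAVIOR_TYPES = ["listening", "reading", "writing", "raising hand",
                  "dozing", "playing with phone", "lying prone on desk"]

def classify_behavior(body_picture, net):
    """Run the (hypothetical) trained target behavior detection network on one
    human body area and return the highest-scoring behavior type."""
    scores = net.predict(body_picture)   # assumed: one score per label
    best = max(range(len(BEHAVIOR_TYPES)), key=lambda i: scores[i])
    return BEHAVIOR_TYPES[best]
```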
And step 150, calculating the overlapping degree of the human face area and the human body area.
In the previous step, the face region and the body region, which are both image ranges in the target picture, have been determined. Therefore, the overlapping range, namely the overlapping degree, of the human face area and the human body area can be determined through calculation of the geometric relationship. For example, if the face region and the body region are both coordinate ranges selected by frames, the overlapping area or the overlapping ratio of the two coordinate ranges may be used as the overlapping degree of the face region and the body region. For example, the ratio of the area of the overlap of the face region and the human body region to the area of the face region may be used as the overlap degree.
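Using the Box sketch from step 120, the overlapping degree defined here, the ratio of the intersection area to the face-area size, might be computed as follows; this generalizes the fig. 2 formula derived later.

```python
def compute_overlap(face: Box, body: Box) -> float:
    """Overlapping degree H: intersection area of the two frames divided by
    the area of the face frame, so that 0 <= H <= 1."""
    ix1, iy1 = max(face.x1, body.x1), max(face.y1, body.y1)
    ix2, iy2 = min(face.x2, body.x2), min(face.y2, body.y2)
    intersection = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return intersection / face.area() if face.area() > 0 else 0.0
```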
The higher the overlapping degree of the determined face area and human body area, the closer the connection between them, and the higher the possibility that they belong to the same person. Conversely, if the overlapping degree of the face area and the human body area is low, the face area and the human body area may not belong to the same person.
Step 160, if the overlapping degree is greater than the preset threshold, establishing a corresponding relationship between the behavior type and the identity information.
The preset threshold may be selected as needed, and may be, for example, 0.3, 0.4, or 0.5.
If the overlapping degree is greater than the preset threshold, the overlapping degree is high, and the face area and the human body area can be considered to belong to the same person. The determined behavior type can therefore be mapped to the corresponding identity information; that is, a correspondence between the behavior type and the identity information is established. Through this correspondence, the specific behavior of each person can be determined, so that recording and subsequent analysis can be better carried out.
For example, when performing behavior detection on students, a face area and a human body area are determined in the target picture. From the face area, the student's identity information can be determined, for example: name: Xiao A; class: Class 1, Grade 3. By detecting the human body area, the student's behavior type is determined to be: playing with a mobile phone. The overlapping degree is calculated to be 0.6. With a preset threshold of 0.5, the overlapping degree is greater than the preset threshold, which means that the determined face area and human body area belong to the same student. The correspondence between the behavior type and the identity information can therefore be established; that is, it is determined that the student named Xiao A in Class 1 of Grade 3 is playing with a mobile phone.
If the overlapping degree is less than or equal to the preset threshold, the overlapping degree is low, and the face area and the human body area can be considered not to belong to the same person. The determined behavior type therefore cannot be associated with the identity information corresponding to the face area, and this detection result can be discarded. Detection may then be repeated, or, after a new target picture is acquired, the new target picture is detected again according to the above steps.
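Combining the threshold test with the discard case just described, step 160 can be sketched as follows (the behavior_log record structure is an illustrative assumption, and compute_overlap is the sketch given above):

```python
# Assumed record structure mapping identity information to observed behaviors.
behavior_log: dict = {}

def establish_correspondence(identity, behavior, face_area, body_area,
                             preset_threshold=0.5):
    """Record the behavior under the identity only when the overlap passes the
    threshold; otherwise discard this detection and await a new picture."""
    if identity is not None and \
            compute_overlap(face_area, body_area) > preset_threshold:
        behavior_log.setdefault(identity, []).append(behavior)
        return True
    return False   # detection result discarded
```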
In the embodiment of the invention, when a behavior is detected, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior is determined. Knowing exactly who performed the behavior allows individual behavior to be analyzed and recorded more effectively.
Optionally, referring to fig. 1b, after calculating the overlapping degree of the face region and the human body region, the method further includes:
step 151, determining correction parameters corresponding to the behavior types according to a preset algorithm, where the preset algorithm is used to determine different correction parameters corresponding to different behavior types.
In practical applications, the persons in the target picture may exhibit many different behaviors, that is, their bodies may take various postures or actions. Different actions and postures can therefore affect the positional relationship between a person's face and body in different ways. For example, consider behavior detection on students in a classroom where student A in the front row is lying prone on a desk and student B in the row behind is sitting upright and speaking, with the two students in the same column. In the target picture obtained in this situation, the face area corresponding to student B's face and the human body area corresponding to student A's body may still have a high overlapping degree, even though the two students are not the same person. Therefore, if the face area and the human body area are matched simply by the overlapping degree, a judgment error may occur.
In order to solve the above problem, in the embodiment of the present invention, after the behavior type corresponding to the human body area in the target picture is determined, the correction parameter corresponding to that behavior type can be determined according to a preset algorithm. The preset algorithm may take various forms; for example, it may be a comparison table that stores the correction parameters corresponding to the different behavior types.
The value range of the correction parameter is generally between 0 and 1, inclusive. For example, if the behavior type is lying prone on a desk with the head down, the corresponding correction parameter may be 0.5; if the behavior type is sitting upright, the corresponding correction parameter may be 1; if the behavior type is bending down, the corresponding correction parameter may be 0. In practical applications, different correction parameters can be set for different behavior types according to the actual situation; the embodiment of the present invention does not specifically limit the values of the correction parameters.
When the behavior detection method provided by the embodiment of the invention is used in practice, setting a separate correction parameter for every behavior type can waste resources and reduce the operating efficiency of the electronic device. Therefore, in order to simplify the method flow and improve the operating efficiency of the electronic device, optionally, in the embodiment of the present invention, step 151, determining the correction parameter corresponding to the behavior type according to a preset algorithm, may include:
If the behavior type is a behavior in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is a behavior in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is a behavior in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Analysis of a large number of actual behaviors shows that whether a face area and a human body area match is directly related to the orientation of the body when the behavior occurs. For example, when the front of a person's body faces the shooting position, there is a high possibility that the person's face is captured at the same time. When the back of the body faces the shooting position, the possibility of capturing the person's face at the same time is low; therefore, even if the overlapping degree of a face area and that person's body area is high, the probability that they belong to the same person is low.
Based on the above principle, in the embodiment of the present invention, behavior types may be classified into 3 categories according to the orientation of the body when the behavior occurs: behaviors with the front of the body facing the shooting position, behaviors with the side of the body facing the shooting position, and behaviors with the back of the body facing the shooting position. Corresponding correction parameters can be preset for these 3 categories respectively. For example, the first correction parameter may be 1, the second correction parameter 0.7, and the third correction parameter 0.4. Of course, the specific values may be set as needed.
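This three-category scheme amounts to a small lookup table. In the sketch below, the orientation labels are assumed names and the parameter values follow the examples just given:

```python
# Assumed orientation labels; the parameter values follow the examples above.
CORRECTION_PARAMETERS = {
    "front_to_camera": 1.0,   # first correction parameter
    "side_to_camera": 0.7,    # second correction parameter
    "back_to_camera": 0.4,    # third correction parameter
}

def corrected_overlap(overlap: float, orientation: str) -> float:
    """Correct the overlapping degree by multiplying it by the correction
    parameter of the body orientation associated with the behavior type."""
    return overlap * CORRECTION_PARAMETERS[orientation]

# Example: corrected_overlap(0.5, "side_to_camera") == 0.35, which with a
# preset threshold of 0.5 would cause that detection to be discarded.
```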
In this way, a correction parameter does not need to be set for every individual behavior type. The process is thus simplified and the efficiency of the electronic device is improved, while accurate determination is still ensured.
Step 160, establishing a correspondence between the behavior type and the identity information if the overlapping degree is greater than the preset threshold, then includes:
correcting the overlapping degree using the correction parameter, and if the corrected overlapping degree is greater than the preset threshold, establishing the correspondence between the behavior type and the identity information.
Once the correction parameter has been determined, the overlapping degree can be corrected with it. Specifically, the overlapping degree is generally corrected by directly multiplying it by the correction parameter. For example, suppose the ratio of the overlapping area of the face area and the human body area to the area of the face area is taken as the overlapping degree, the overlapping degree is 0.5, and the determined correction parameter is 0.8. The overlapping degree is then corrected to 0.5 × 0.8 = 0.4; that is, the corrected overlapping degree is 0.4.
After the corrected overlapping degree is determined, it may be compared with a preset threshold. The preset threshold may be selected as needed, and may be, for example, 0.3, 0.4, or 0.5.
If the corrected overlapping degree is greater than the preset threshold, the overlapping degree is high, and the face area and the human body area can be considered to belong to the same person. The determined behavior type can therefore be mapped to the corresponding identity information; that is, a correspondence between the behavior type and the identity information is established. Through this correspondence, the specific behavior of each person can be determined, so that recording and subsequent analysis can be better carried out.
For example, when performing behavior detection on students, a face area and a human body area are determined in the target picture. From the face area, the student's identity information can be determined, for example: name: Xiao A; class: Class 1, Grade 3. By detecting the human body area, the student's behavior type is determined to be: playing with a mobile phone. The overlapping degree is calculated and then corrected, and the corrected overlapping degree is determined to be 0.6. With a preset threshold of 0.5, the corrected overlapping degree is greater than the preset threshold, which means that the determined face area and human body area belong to the same student. The correspondence between the behavior type and the identity information can therefore be established; that is, it is determined that the student named Xiao A in Class 1 of Grade 3 is playing with a mobile phone.
If the corrected overlapping degree is less than or equal to the preset threshold, the overlapping degree is low, and the face area and the human body area can be considered not to belong to the same person. The determined behavior type therefore cannot be associated with the identity information corresponding to the face area, and this detection result can be discarded. Detection may then be repeated, or, after a new target picture is acquired, the new target picture is detected again according to the above steps.
In practical applications, in order to establish the correspondence between the identity information and the behavior type more conveniently and quickly, in the embodiment of the present invention, step 160, establishing a correspondence between the behavior type and the identity information, includes:
step 161, determining a target person corresponding to the face area through the identity information.
After the electronic device determines the identity information corresponding to the face area in the target picture, it can determine, from the identity information, the target person having that identity information, i.e. the person corresponding to the face area in the target picture. In other words, a specific person is determined from the identity information.
And 162, taking the behavior type as the behavior type of the target person.
After the specific target person is determined, the determined behavior type can be recorded as the behavior that the target person has performed or is performing, so that each detected behavior type is mapped to a specific person.
In the embodiment of the invention, after the target picture is acquired, the face area and the human body area can be determined in the target picture and then recognized and detected respectively, so that the identity information corresponding to the face area and the behavior type corresponding to the human body area can be determined. The overlapping degree of the face area and the human body area is then calculated and corrected according to the determined behavior type; the corrected overlapping degree is compared with the preset threshold, and if it is greater than the preset threshold, a correspondence between the behavior type and the identity information is established. Therefore, when a behavior is detected, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior can be determined; it is possible to know exactly who performed the behavior, and individual behavior can thus be analyzed and recorded more effectively. In addition, in the embodiment of the invention, correcting the overlapping degree improves the accuracy of matching behavior types to identity information, and thereby the reliability of behavior detection in crowded environments.
In combination with the above embodiment, in order to quickly determine the corresponding area or range when determining the face area and the body area from the target image, in the embodiment of the present invention, step 120 may include:
step 121, detecting a face and a human body contained in the target picture respectively.
After the electronic equipment acquires the target picture, the electronic equipment can respectively identify the face and the human body in the target picture by adopting corresponding identification methods, so that the face and the human body are respectively identified in the target picture. The specific method is the same as that in the above embodiment, and will not be described here again.
And step 122, determining a face through the first rectangular frame, and taking the image range determined by the four vertex coordinates of the first rectangular frame as a face area.
And step 123, determining a human body through the second rectangular frame, and taking the image range determined by the four vertex coordinates of the second rectangular frame as a human body area.
Referring to fig. 2, in a target picture 201 acquired by the electronic device in fig. 2, a human face 211 and a human body 212 have been determined, respectively.
The electronic device can frame-select the face area and the human body area respectively. Specifically, as shown in fig. 2, the first rectangular frame 202 may frame the image range where the face 211 is located, and the four vertex coordinates of the first rectangular frame 202 are (x1, y1), (x2, y1), (x1, y2), and (x2, y2), respectively. The extent of the first rectangular frame 202 is clearly defined by these four vertex coordinates, and the range of the first rectangular frame 202 is the face area.
Likewise, the second rectangular frame 203 may frame the image range where the human body 212 is located, and the four vertex coordinates of the second rectangular frame 203 are (a1, b1), (a2, b1), (a1, b2), and (a2, b2), respectively. The extent of the second rectangular frame 203 is clearly defined by these four vertex coordinates, and the range of the second rectangular frame 203 is the human body area.
Accordingly, in step 150, calculating the overlapping degree of the face region and the human body region may include:
the degree of overlap of the first rectangular frame 202 and the second rectangular frame 203 is calculated.
As in the above embodiment, the overlapping degree may be the overlapping area or overlapping ratio of the first rectangular frame 202 and the second rectangular frame 203.
Preferably, the ratio of the overlapping area of the face area and the human body area to the area of the face area may be used as the overlapping degree. That is, the overlapping degree can be obtained by dividing the overlapping area of the first rectangular frame 202 and the second rectangular frame 203 by the area of the first rectangular frame 202.
Specifically, from the geometric relationship in fig. 2:
the area of the first rectangular frame 202 is S1 = (x2 - x1) × (y2 - y1);
the overlapping area of the first rectangular frame 202 and the second rectangular frame 203 is S2 = (x2 - x1) × (y2 - b1);
so the overlapping degree is H = S2 / S1, where 0 ≤ H ≤ 1.
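As a quick consistency check of this derivation (using the Box and compute_overlap sketches from above, with invented coordinates matching the fig. 2 geometry, where the face frame lies horizontally inside the body frame and overlaps it vertically from b1 down to y2):

```python
# Invented coordinates reproducing the fig. 2 geometry (y grows downward).
face = Box(x1=2, y1=0, x2=6, y2=4)    # first rectangular frame 202
body = Box(x1=1, y1=2, x2=8, y2=10)   # second rectangular frame 203, b1 = 2

s1 = face.area()                 # (x2 - x1) * (y2 - y1) = 4 * 4 = 16
s2 = (6 - 2) * (4 - 2)           # (x2 - x1) * (y2 - b1) = 8
assert compute_overlap(face, body) == s2 / s1   # H = 0.5
```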
Through the above process, the overlapping degree of the first rectangular frame 202 and the second rectangular frame 203 can be rapidly and conveniently calculated, and the operation efficiency of the electronic device is improved.
Referring to fig. 3a, fig. 3a is a block diagram of a behavior detection device according to an embodiment of the present invention, where the block diagram includes:
an acquisition module 301, configured to acquire a target picture;
the identifying module 302 is configured to determine a face area and a human body area in the target picture;
the first detection module 303 is configured to perform face recognition on the face area, and determine identity information corresponding to the face area;
the second detection module 304 is configured to perform behavior detection on the human body area, and determine a behavior type corresponding to the human body area;
a calculating module 305, configured to calculate a degree of overlap between the face region and the human body region;
and the matching module 306 is configured to establish a correspondence between the behavior type and the identity information if the overlapping degree is greater than a preset threshold.
In the embodiment of the invention, when a behavior is detected, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior is determined. Knowing exactly who performed the behavior allows individual behavior to be analyzed and recorded more effectively.
Optionally, referring to fig. 3b, the behavior detection apparatus of the embodiment of the present invention further includes:
the correction module 307 is configured to determine correction parameters corresponding to the behavior types according to a preset algorithm, where the preset algorithm is used to determine different correction parameters corresponding to different behavior types;
the matching module 306 is specifically configured to correct the degree of overlap according to the correction parameter, and if the corrected degree of overlap is greater than a preset threshold, establish a correspondence between the behavior type and the identity information.
In the embodiment of the invention, when a behavior is detected, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who actually performed the behavior can be determined; it is possible to know exactly who performed the behavior, and individual behavior can thus be analyzed and recorded more effectively. In addition, in the embodiment of the invention, correcting the overlapping degree improves the accuracy of matching behavior types to identity information, and thereby the reliability of behavior detection in crowded environments.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the identifying module 302 is specifically configured to:
Detecting a human face and a human body contained in the target picture respectively;
determining the human face through a first rectangular frame, and taking an image range determined by four vertex coordinates of the first rectangular frame as the human face area;
determining the human body through a second rectangular frame, and taking the image range determined by four vertex coordinates of the second rectangular frame as the human body area;
the computing module 305 is specifically configured to:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the first detection module 303 is specifically configured to:
according to the face area, a face model corresponding to the face area is established;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the correction module 307 is specifically configured to:
If the behavior type is a behavior in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is a behavior in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is a behavior in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the matching module 306 is specifically configured to:
determining a target person corresponding to the face area through the identity information;
and taking the behavior type as the behavior type of the target person.
The embodiment of the invention also provides an electronic device, as shown in fig. 4, which comprises a processor 401, a communication interface 402, a memory 403 and a communication bus 404, wherein the processor 401, the communication interface 402 and the memory 403 complete communication with each other through the communication bus 404,
a memory 403 for storing a computer program;
the processor 401, when executing the program stored in the memory 403, implements the following steps:
obtaining a target picture;
determining a human face area and a human body area in the target picture;
Performing face recognition on the face area, and determining identity information corresponding to the face area;
performing behavior detection on the human body area, and determining a behavior type corresponding to the human body area;
calculating the overlapping degree of the human face area and the human body area;
if the overlapping degree is larger than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
The communication bus mentioned for the above electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include RAM (Random Access Memory ) or NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit ), NP (Network Processor, network processor), etc.; but also DSP (Digital Signal Processor ), ASIC (Application Specific Integrated Circuit, application specific integrated circuit), FPGA (Field-Programmable Gate Array, field programmable gate array) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components.
In yet another embodiment of the present invention, a computer readable storage medium is provided, in which instructions are stored, which when run on a computer, cause the computer to perform the behavior detection method according to any one of the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the behavior detection method of any of the above embodiments.
For the apparatus, computer-readable storage medium and computer program product embodiments, the description is relatively simple, as the method content involved is substantially similar to the method embodiments described above, and reference will be made to part of the description of the method embodiments for relevant points.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant points, refer to the corresponding part of the description of the method embodiments.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (12)

1. A behavior detection method, comprising:
obtaining a target picture;
determining a human face area and a human body area in the target picture;
performing face recognition on the face area, and determining identity information corresponding to the face area;
performing behavior detection on the human body area, and determining a behavior type corresponding to the human body area;
calculating the overlapping degree of the face area and the body area;
if the overlapping degree is greater than a preset threshold, establishing a correspondence between the behavior type and the identity information;
wherein after calculating the overlapping degree of the face area and the body area, the method further comprises:
determining a correction parameter corresponding to the behavior type according to a preset algorithm, wherein the preset algorithm determines different correction parameters for different behavior types, the behavior type comprises a body orientation when the behavior occurs, and different body orientations correspond to different correction parameters;
and wherein establishing the correspondence between the behavior type and the identity information if the overlapping degree is greater than the preset threshold comprises:
correcting the overlapping degree by the correction parameter, and if the corrected overlapping degree is greater than the preset threshold, establishing the correspondence between the behavior type and the identity information.
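As a hypothetical illustration of the corrected matching in claim 1, the raw overlap can be scaled by an orientation-dependent parameter before being compared with the threshold, compensating for the face box drifting relative to the body box when the body is turned sideways or away from the camera. The numeric correction values and threshold below are assumptions for illustration, not values fixed by the claims.

```python
from typing import Dict, Optional

# Assumed orientation-to-parameter mapping; the claims only require that
# different body orientations map to different correction parameters.
CORRECTION: Dict[str, float] = {
    "front": 1.0,  # first correction parameter
    "side": 1.1,   # second correction parameter
    "back": 1.2,   # third correction parameter
}

def associate(behavior: Dict[str, str], identity: str, overlap: float,
              threshold: float = 0.7) -> Optional[Dict[str, str]]:
    """Bind the behavior to the identity only if the corrected overlap
    clears the preset threshold. `behavior` carries the action label and
    the body orientation when the behavior occurs, e.g.
    {"action": "smoking", "orientation": "back"}."""
    corrected = overlap * CORRECTION[behavior["orientation"]]
    if corrected > threshold:
        return {identity: behavior["action"]}
    return None
```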
2. The method according to claim 1, wherein determining the face area and the body area in the target picture comprises:
detecting a human face and a human body contained in the target picture respectively;
marking the human face with a first rectangular frame, and taking the image range defined by the four vertex coordinates of the first rectangular frame as the face area; and
marking the human body with a second rectangular frame, and taking the image range defined by the four vertex coordinates of the second rectangular frame as the body area;
and wherein calculating the overlapping degree of the face area and the body area comprises:
calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
3. The method of claim 1, wherein performing face recognition on the face area and determining the identity information corresponding to the face area comprises:
establishing, according to the face area, a face model corresponding to the face area;
comparing the face model with preset face models stored in a preset database, and determining a target preset face model matching the face model; and
taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
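One plausible reading of claim 3 is a feature-embedding comparison: the face area is mapped to a fixed-length face model and matched against the preset models by a similarity score. The embedding dimension, the cosine metric, and the acceptance threshold below are all assumptions; the claim itself does not specify how the comparison is performed.

```python
import numpy as np

# Hypothetical preset database: identity information -> preset face model.
PRESET_DB = {
    "person_001": np.random.randn(128),
    "person_002": np.random.randn(128),
}

def recognize(face_model: np.ndarray, min_similarity: float = 0.6):
    """Return the identity whose preset face model best matches the given
    face model by cosine similarity, or None if nothing clears the bar."""
    best_id, best_sim = None, min_similarity
    for identity, preset in PRESET_DB.items():
        sim = float(np.dot(face_model, preset)
                    / (np.linalg.norm(face_model) * np.linalg.norm(preset)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```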
4. The method according to claim 1, wherein determining, according to the preset algorithm, the correction parameter corresponding to the behavior type comprises:
if the behavior type is a behavior in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or
if the behavior type is a behavior in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or
if the behavior type is a behavior in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
5. The method according to claim 1, wherein establishing the correspondence between the behavior type and the identity information comprises:
determining a target person corresponding to the face area through the identity information; and
taking the behavior type as the behavior type of the target person.
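Tying claims 1, 2, 3, and 5 together, a hypothetical pass over a single target picture could reuse the sketches above (overlap_degree, recognize, associate, and PRESET_DB); the box coordinates and behavior label are toy values for illustration only.

```python
face_box = (120.0, 80.0, 200.0, 170.0)   # pretend face-detector output
body_box = (100.0, 60.0, 260.0, 400.0)   # pretend body-detector output

overlap = overlap_degree(face_box, body_box)    # claim 2: box overlap
identity = recognize(PRESET_DB["person_001"])   # claim 3: toy query vector
behavior = {"action": "smoking", "orientation": "side"}
print(associate(behavior, identity, overlap))   # claims 1 and 5: binding
```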
6. A behavior detection apparatus, characterized by comprising:
the acquisition module is used for acquiring the target picture;
the identification module is used for determining a human face area and a human body area in the target picture;
the first detection module is used for carrying out face recognition on the face area and determining identity information corresponding to the face area;
the second detection module is used for performing behavior detection on the human body area and determining a behavior type corresponding to the human body area;
the computing module is used for computing the overlapping degree of the human face area and the human body area;
the matching module is used for establishing a correspondence between the behavior type and the identity information if the overlapping degree is greater than a preset threshold; and
the correction module is used for determining a correction parameter corresponding to the behavior type according to a preset algorithm, wherein the preset algorithm determines different correction parameters for different behavior types, the behavior type comprises a body orientation when the behavior occurs, and different body orientations correspond to different correction parameters;
wherein the matching module is specifically configured to correct the overlapping degree by the correction parameter, and if the corrected overlapping degree is greater than the preset threshold, establish the correspondence between the behavior type and the identity information.
7. The device according to claim 6, wherein the identification module is specifically configured to:
detect a human face and a human body contained in the target picture respectively;
mark the human face with a first rectangular frame, and take the image range defined by the four vertex coordinates of the first rectangular frame as the face area; and
mark the human body with a second rectangular frame, and take the image range defined by the four vertex coordinates of the second rectangular frame as the body area;
and the computing module is specifically configured to:
calculate the overlapping degree of the first rectangular frame and the second rectangular frame.
8. The apparatus of claim 6, wherein the first detection module is specifically configured to:
establish, according to the face area, a face model corresponding to the face area;
compare the face model with preset face models stored in a preset database, and determine a target preset face model matching the face model; and
take the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
9. The apparatus of claim 6, wherein the correction module is specifically configured to:
if the behavior type is a behavior in which the front of the body faces the shooting position, determine the correction parameter as a first correction parameter; and/or
if the behavior type is a behavior in which the side of the body faces the shooting position, determine the correction parameter as a second correction parameter; and/or
if the behavior type is a behavior in which the back of the body faces the shooting position, determine the correction parameter as a third correction parameter.
10. The apparatus of claim 6, wherein the matching module is specifically configured to:
determine a target person corresponding to the face area through the identity information; and
take the behavior type as the behavior type of the target person.
11. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1-5 when executing the program stored in the memory.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method steps of any one of claims 1-5.
CN201811139415.1A 2018-09-28 2018-09-28 Behavior detection method and device, electronic equipment and storage medium Active CN110969045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811139415.1A CN110969045B (en) 2018-09-28 2018-09-28 Behavior detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811139415.1A CN110969045B (en) 2018-09-28 2018-09-28 Behavior detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110969045A CN110969045A (en) 2020-04-07
CN110969045B true CN110969045B (en) 2023-06-02

Family

ID=70026877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811139415.1A Active CN110969045B (en) 2018-09-28 2018-09-28 Behavior detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110969045B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476160A (en) * 2020-04-07 2020-07-31 Hangzhou Shizai Technology Co., Ltd. Loss function optimization method, model training method, target detection method, and medium
CN111507600A (en) * 2020-04-10 2020-08-07 Telecom Branch of Kailuan (Group) Co., Ltd. Martial arts skill monitoring method, device and system, and storage medium
CN112001230A (en) * 2020-07-09 2020-11-27 Zhejiang Dahua Technology Co., Ltd. Sleeping behavior monitoring method and device, computer equipment and readable storage medium
CN112714280A (en) * 2020-12-08 2021-04-27 Shenzhen Infinova Technology Co., Ltd. Smart-box-based intelligent elderly care method, system, device and storage medium
CN113469132A (en) * 2021-07-26 2021-10-01 Zhejiang Dahua Technology Co., Ltd. Violation detection method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8254633B1 (en) * 2009-04-21 2012-08-28 Videomining Corporation Method and system for finding correspondence between face camera views and behavior camera views
CN107609517A (en) * 2017-09-15 2018-01-19 Huazhong University of Science and Technology Classroom behavior detection system based on computer vision
CN107679613A (en) * 2017-09-30 2018-02-09 Tongguan Technology (Shenzhen) Co., Ltd. Personal information statistics method, device, terminal device and storage medium
CN108304762A (en) * 2017-11-30 2018-07-20 Tencent Technology (Shenzhen) Co., Ltd. Human body posture matching method and device, storage medium and terminal
WO2018133666A1 (en) * 2017-01-17 2018-07-26 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for tracking video target
CN108537172A (en) * 2018-04-09 2018-09-14 Beijing Bangtian Information Technology Co., Ltd. Method and apparatus for recognizing human behavior based on machine vision

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9406136B2 (en) * 2013-07-05 2016-08-02 Canon Kabushiki Kaisha Information processing device, information processing method and storage medium for identifying communication counterpart based on image including person
CN104463090A (en) * 2013-11-25 2015-03-25 Anhui Huanzhi Information Technology Co., Ltd. Method for recognizing human skeleton actions in a human-computer interaction system
US20170155955A1 (en) * 2015-12-01 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method and device for distinguishing user data of smart tv
CN108256404B (en) * 2016-12-29 2021-12-10 Beijing Megvii Technology Co., Ltd. Pedestrian detection method and device
CN107292240B (en) * 2017-05-24 2020-09-18 Shenzhen SenseNets Technology Co., Ltd. Person finding method and system based on face and body recognition
CN108537165A (en) * 2018-04-08 2018-09-14 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for determining information
CN108416336B (en) * 2018-04-18 2019-01-18 Terminus (Beijing) Technology Co., Ltd. Method and system for face recognition in a smart community

Also Published As

Publication number Publication date
CN110969045A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN108269333A (en) Face identification method, application server and computer readable storage medium
CN110321852B (en) Action type identification method and device, storage medium and computer equipment
CN111210399B (en) Imaging quality evaluation method, device and equipment
CN110287862B (en) Anti-candid detection method based on deep learning
WO2019056503A1 (en) Store monitoring evaluation method, device and storage medium
CN111666922A (en) Video matching method and device, computer equipment and storage medium
WO2021082045A1 (en) Smile expression detection method and apparatus, and computer device and storage medium
CN111325082A (en) Personnel concentration degree analysis method and device
CN111881740B (en) Face recognition method, device, electronic equipment and medium
CN108875506B (en) Face shape point tracking method, device and system and storage medium
CN110458971B (en) Classroom attendance recording method and device, computer equipment and storage medium
CN110363111B (en) Face living body detection method, device and storage medium based on lens distortion principle
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
US10445620B2 (en) Method and system for object tracking in multiple non-linear distortion lenses
CN112287905A (en) Vehicle damage identification method, device, equipment and storage medium
CN117218703A (en) Intelligent learning emotion analysis method and system
CN113591885A (en) Target detection model training method, device and computer storage medium
CN115527083B (en) Image annotation method and device and electronic equipment
CN113743434A (en) Training method of target detection network, image augmentation method and device
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN115116119A (en) Face recognition system based on digital image processing technology
US10032079B2 (en) Evaluation of models generated from objects in video
CN111767829B (en) Living body detection method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant