CN110969045A - Behavior detection method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN110969045A CN110969045A CN201811139415.1A CN201811139415A CN110969045A CN 110969045 A CN110969045 A CN 110969045A CN 201811139415 A CN201811139415 A CN 201811139415A CN 110969045 A CN110969045 A CN 110969045A
- Authority
- CN
- China
- Prior art keywords
- face
- behavior
- identity information
- determining
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the invention provide a behavior detection method and apparatus, an electronic device and a storage medium, wherein the method comprises: acquiring a target picture; determining a face region and a human body region in the target picture; performing face recognition on the face region, and determining identity information corresponding to the face region; performing behavior detection on the human body region, and determining a behavior type corresponding to the human body region; calculating the degree of overlap between the face region and the human body region; and, if the degree of overlap is greater than a preset threshold, establishing a correspondence between the behavior type and the identity information. With the behavior detection method of the invention, not only can the specific behavior type be detected, but the behavior type can also be matched with identity information, so that the identity of the person who performed the behavior is determined and it is known exactly who performed the action. The behavior of the individual can therefore be better analyzed and recorded.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a behavior detection method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer and artificial-intelligence technology, large amounts of human behavior data can be analyzed by computer, helping researchers and managers better analyze people's behavior and the logic behind it, so as to formulate more reasonable personnel management strategies. For example, by detecting students' behavior, more suitable teaching modes and teaching plans can be formulated.
In the prior art, behavior detection often only detects that a behavior occurred; it cannot accurately match the behavior to the identity of the person who performed it. As a result, behavior analysis and detection cannot be targeted at specific persons, which greatly limits the applicability of behavior detection technology.
Disclosure of Invention
Embodiments of the invention aim to provide a behavior detection method and apparatus, an electronic device and a storage medium, so that when a behavior is detected it can be accurately matched to the identity information of the person who performed it. The specific technical solution is as follows:
the embodiment of the invention provides a behavior detection method, which comprises the following steps:
acquiring a target picture;
determining a face region and a human body region in the target picture;
carrying out face recognition on the face area, and determining identity information corresponding to the face area;
performing behavior detection on the human body area, and determining a behavior type corresponding to the human body area;
calculating the overlapping degree of the human face area and the human body area;
and if the overlapping degree is greater than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
Optionally, the determining a face region and a body region in the target picture includes:
respectively detecting the human face and the human body contained in the target picture;
determining the face through a first rectangular frame, and taking an image range determined by coordinates of four vertexes of the first rectangular frame as the face area;
determining the human body through a second rectangular frame, and taking an image range determined by coordinates of four vertexes of the second rectangular frame as the human body area;
the calculating the overlapping degree of the face region and the human body region comprises:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, the performing face recognition on the face region to determine the identity information corresponding to the face region includes:
establishing a face model corresponding to the face region according to the face region;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, after the calculating the degree of overlap between the face region and the body region, the method further includes:
determining correction parameters corresponding to the behavior types according to a preset algorithm, wherein the preset algorithm is used for determining different correction parameters corresponding to different behavior types;
if the overlapping degree is greater than a preset threshold, establishing a corresponding relationship between the behavior type and the identity information, including:
and correcting the overlapping degree through the correction parameters, and if the corrected overlapping degree is greater than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
Optionally, the determining, according to a preset algorithm, a correction parameter corresponding to the behavior type includes:
if the behavior type is one in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is one in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is one in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, the establishing a correspondence between the behavior type and the identity information includes:
determining a target person corresponding to the face area according to the identity information;
and taking the behavior type as the behavior type of the target person.
An embodiment of the present invention further provides a behavior detection apparatus, including:
the acquisition module is used for acquiring a target picture;
the recognition module is used for determining a face region and a human body region in the target picture;
the first detection module is used for carrying out face recognition on the face area and determining identity information corresponding to the face area;
the second detection module is used for carrying out behavior detection on the human body area and determining a behavior type corresponding to the human body area;
the calculation module is used for calculating the overlapping degree of the human face area and the human body area;
and the matching module is used for establishing the corresponding relation between the behavior type and the identity information if the overlapping degree is greater than a preset threshold value.
Optionally, the identification module is specifically configured to:
respectively detecting the human face and the human body contained in the target picture;
determining the face through a first rectangular frame, and taking an image range determined by coordinates of four vertexes of the first rectangular frame as the face area;
determining the human body through a second rectangular frame, and taking an image range determined by coordinates of four vertexes of the second rectangular frame as the human body area;
the calculation module is specifically configured to:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, the first detection module is specifically configured to:
establishing a face model corresponding to the face region according to the face region;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, the behavior detection apparatus according to the embodiment of the present invention further includes:
the correcting module is used for determining the correcting parameters corresponding to the behavior types according to a preset algorithm, and the preset algorithm is used for determining different correcting parameters corresponding to different behavior types;
the matching module is specifically configured to modify the overlapping degree according to the modification parameter, and if the modified overlapping degree is greater than a preset threshold, establish a corresponding relationship between the behavior type and the identity information.
Optionally, the modification module is specifically configured to:
if the behavior type is one in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is one in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is one in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, the matching module is specifically configured to:
determining a target person corresponding to the face area according to the identity information;
and taking the behavior type as the behavior type of the target person.
The embodiment of the invention further provides an electronic device, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the behavior detection methods when executing the program stored in the memory.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any of the behavior detection methods described above.
Embodiments of the present invention also provide a computer program product containing instructions, which when run on a computer, cause the computer to perform any of the above behavior detection methods.
According to the behavior detection method and apparatus, the electronic device and the storage medium provided by the embodiments of the invention, after the target picture is acquired, a face region and a human body region can be determined in the target picture and then recognized and detected separately, so that the identity information corresponding to the face region and the behavior type corresponding to the human body region are determined. The degree of overlap between the face region and the human body region is then calculated, and if it is greater than a preset threshold, a correspondence between the behavior type and the identity information is established. In this way, behavior detection not only detects the specific behavior type but also matches it to identity information, so that it is known exactly who performed the behavior and the behavior can be better analyzed and recorded for that individual. Of course, it is not necessary for any product or method embodying the invention to achieve all of the above advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings required for describing the embodiments and the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1a is a flow chart of a behavior detection method according to an embodiment of the present invention;
FIG. 1b is another flow chart of a behavior detection method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a face region and a body region according to an embodiment of the present invention;
FIG. 3a is a block diagram of a behavior detection device according to an embodiment of the present invention;
FIG. 3b is another block diagram of a behavior detection device according to an embodiment of the present invention;
fig. 4 is a structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1a, fig. 1a is a flowchart of a behavior detection method according to an embodiment of the present invention, including:
and step 110, acquiring a target picture.
The behavior detection method provided by the embodiment of the invention can be applied to various electronic devices with data-processing capability, such as computers and servers, and in particular to monitoring devices, such as a camera, or an NVR (Network Video Recorder), DVR (Digital Video Recorder), server or monitoring system connected to the camera.
The electronic device can acquire the target picture in various ways. For example, it may obtain a target picture taken by a connected video capture device, such as a camera or a snapshot camera, receive an input target picture through other input means, or read the target picture from its own storage, such as a hard disk. The target picture contains the persons and objects within the shooting range, so when it contains one or more persons requiring behavior detection, it contains at least the faces and all or part of the bodies of those persons.
In practical applications, the target picture differs with the application scenario. For example, when performing behavior detection on students, the target picture may be an image of the students in a classroom during class; when performing behavior detection on people in other public areas, the target picture may be a real-time image of the public area.
And step 120, determining a face region and a human body region in the target picture.
After the electronic equipment acquires the target picture, the target picture can be identified and detected in various modes, so that the human face area and the human body area are respectively determined.
Specifically, the electronic device may determine the face region in the target picture by using various existing face detection methods, such as a head-shoulder separation method, an overall contour method, and a mosaic method. The face region refers to an image region including a face of a person.
And when a plurality of faces exist in the target picture, the face area of each face can be determined. Preferably, in order to improve the accuracy of face detection, a deep learning mode may be adopted to detect the target picture, so that each face region in the target picture is determined more quickly and accurately. The determined face area may be a coordinate range, for example, the face area may be determined by a frame selection method, and the framed coordinate range is the face area.
Similarly, the electronic device may also determine the human body region in the target picture by using various existing detection methods, for example, a deep learning method. The human body region refers to an image region including a human body part. When the body parts of a plurality of persons exist in the target picture, a plurality of body regions are correspondingly determined. The human body region may also be a coordinate range, for example, the human body region may be determined by a frame selection method, and the framed coordinate range is the human body region.
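As an illustration of how these framed coordinate ranges might be represented in code, a minimal sketch follows; the representation (a simple (x1, y1, x2, y2) tuple per region) and the example coordinates are assumptions made for illustration, not part of the disclosure. All code sketches in this description are Python.

```python
# A detected region is represented by the coordinate range of its frame:
# (x1, y1) is the top-left corner and (x2, y2) the bottom-right corner.
# Hypothetical detector outputs for one person in the target picture:
face_region = (120.0, 80.0, 180.0, 150.0)   # face frame (assumed values)
body_region = (100.0, 80.0, 210.0, 320.0)   # body frame (assumed values)

def box_area(box) -> float:
    """Area of an axis-aligned rectangle given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)
```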
And step 130, performing face recognition on the face area, and determining identity information corresponding to the face area.
After the face region is determined from the target picture, various face recognition methods can be adopted to recognize the face in the face region. For example, feature point parameters of a face may be obtained in a face region in a target picture, and then matching of the feature point parameters is performed in a preset database containing personnel identity information, so as to determine identity information corresponding to the face region.
Of course, there are various face recognition technologies in the prior art that can determine identity information from the face in a face region, and any of them may be applied in the embodiment of the invention as long as it implements the functions described herein. The identity information may contain various types of information, such as name and contact details, and may differ across application scenarios; for example, when detecting the behavior of students, the identity information may include name, age, class, and the like.
In this embodiment of the present invention, in order to determine the identity information corresponding to the face region more accurately and quickly, step 130, performing face recognition on the face region, and determining the identity information corresponding to the face region may include:
and 131, establishing a face model corresponding to the face region according to the face region.
After the electronic device obtains the face region determined in the target picture, it can establish a face model corresponding to the face region by modeling. For example, feature points and related parameters, such as the proportions and relative positions of the facial features, can be extracted from the face in the face region, and a face model corresponding to the face region is built from them. The face model is a digital model that reflects the features of the face.
Step 132, comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model.
The preset database stores a large number of preset face models and identity information corresponding to each preset face model.
For example, when behavior detection is performed for students, identity information and facial images of the students may be collected in advance. A preset face model is then generated for each student from the collected facial images, and each student's preset face model and corresponding identity information are stored in the preset database, where each student's identity information corresponds to that student's preset face model.
Or when the behavior of the employees of the company is detected, the identity information and the facial image of each employee can be collected in advance. And generating a preset face model corresponding to each employee, and storing the preset face model and corresponding identity information in a preset database.
After the face model corresponding to the face region is established, it can be compared with the preset face models stored in the preset database, so that the target preset face model matching the face model is determined from among a large number of preset face models. For example, if the feature point parameters of the face model match those of a preset face model, that preset face model may be taken as the target preset face model. In the embodiment of the invention, to compare the face model with the preset face models in the preset database more accurately, a neural network model trained by deep learning may be used to perform the comparison, so that it is completed more accurately and quickly.
Step 133, using the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Once the target preset face model matching the face model is determined, the face in the face region and the person corresponding to the target preset face model can be considered the same person. Therefore, the identity information corresponding to the target preset face model can be used as the identity information corresponding to the face region, thereby determining the identity of the face in the face region.
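To make steps 131 to 133 concrete, below is a hedged sketch in which the face model is represented as a numeric feature vector and matching is done by nearest-neighbour comparison against the preset database; the vector representation and the distance threshold are illustrative assumptions, not the prescribed implementation of the embodiment.

```python
import numpy as np

def identify_face(face_model: np.ndarray,
                  preset_db: dict,
                  max_distance: float = 0.6):
    """Compare a face model (here a feature vector) against the preset
    face models in the preset database and return the identity
    information of the best-matching target preset face model,
    or None when no model matches closely enough."""
    best_identity, best_dist = None, float("inf")
    for identity, preset_model in preset_db.items():
        dist = float(np.linalg.norm(face_model - preset_model))
        if dist < best_dist:
            best_identity, best_dist = identity, dist
    return best_identity if best_dist <= max_distance else None
```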
And 140, performing behavior detection on the human body region, and determining a behavior type corresponding to the human body region.
For each human body region determined in the target picture, the electronic device detects the behavior of the human body in that region using any of various behavior detection methods; after detection, each human body region yields a corresponding behavior type.
Specifically, the behavior in the human body region can be detected using a deep learning approach.
By training a neural network on a large number of samples, a target behavior detection network capable of detecting a person's action type can be obtained. For example, the network may be trained on a large number of sample pictures for each action type, so that it learns to recognize the various action types.
The target picture, or a human body picture containing the human body region extracted from the target picture, may then be input to the target behavior detection network. Based on the features of each action type learned during training, the network detects features such as the posture, motion and range of body movement of the target person in the picture, and thereby determines the target person's behavior type.
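As a minimal sketch of this inference step, assuming the trained target behavior detection network is available as a callable that scores a cropped image against a fixed label set (both the callable and the labels are placeholders here):

```python
def detect_behavior(picture, body_box, behavior_net, labels):
    """Crop the human body region (the second rectangular frame) from
    the target picture and run the target behavior detection network
    over it, returning the highest-scoring behavior type."""
    x1, y1, x2, y2 = body_box          # pixel coordinates of the body region
    body_crop = picture[int(y1):int(y2), int(x1):int(x2)]  # H x W x C array
    scores = behavior_net(body_crop)   # assumed: one score per label
    best = max(range(len(labels)), key=lambda i: scores[i])
    return labels[best]
```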
The detected behavior types are not identical across application scenarios. For example, when performing behavior detection for students, the behavior types may include: student orientation, student posture, listening state, reading state, writing state, hand-raising state, dozing, playing with a mobile phone, lying on the desk, and the like. When performing behavior detection on company employees, the behavior types may include: working, not working, standing while working, and the like.
And 150, calculating the overlapping degree of the human face area and the human body area.
In the previous steps, a face region and a human body region have been determined, both of which are image ranges in the target picture. The extent of their overlap, i.e. the degree of overlap, can therefore be determined by geometric calculation. For example, if the face region and the body region are both framed coordinate ranges, the overlapping area or overlap ratio of the two coordinate ranges can be used as the degree of overlap; in particular, the ratio of the overlapping area of the face region and the body region to the area of the face region may be used as the degree of overlap.
The higher the degree of overlap between the face region and the body region, the closer their relationship and the more likely they belong to the same person. Conversely, a lower degree of overlap indicates that the face region and the body region may belong to different persons.
And step 160, if the overlapping degree is greater than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
The preset threshold may be selected as needed, and may be, for example, 0.3, 0.4, 0.5, etc.
If the degree of overlap is greater than the preset threshold, the overlap is high and the face region and the body region can be considered to belong to the same person. The determined behavior type can therefore be associated with the corresponding identity information, i.e. a correspondence between the behavior type and the identity information is established. Through this correspondence, the behavior of each specific person can be determined, enabling better recording and subsequent analysis.
For example, when behavior detection is performed for students, a face region and a body region are determined in the target picture. From the face region, the student's identity information is determined, e.g. name: Xiao A; class: Class 1, Grade 3. From detection of the body region, the student's behavior type is determined to be: playing with a mobile phone. The calculated degree of overlap is 0.6. With a preset threshold of 0.5, the degree of overlap exceeds the threshold, indicating that the determined face region and body region belong to the same student. A correspondence between the behavior type and the identity information can therefore be established: the student named Xiao A in Class 1 of Grade 3 is playing with a mobile phone.
If the degree of overlap is less than or equal to the preset threshold, the overlap is low and the face region and the body region are considered not to belong to the same person. The determined behavior type therefore cannot be associated with the identity information corresponding to the face region, and this detection result can be discarded. Detection can then be performed again, or a new target picture can be acquired and processed according to the above steps.
In the embodiment of the invention, behavior detection not only detects the specific behavior type but also matches it with identity information, so that the identity of the person who performed the behavior is determined, it is known exactly who performed it, and the behavior can be better analyzed and recorded for that individual.
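Pulling steps 110 through 160 together, the following hedged sketch shows one possible realisation of the overall flow; the injected detector, recognizer, classifier and overlap callables correspond to the sketches elsewhere in this description and are assumptions, not the required implementation.

```python
def match_behaviors_to_identities(picture, detect_faces, detect_bodies,
                                  identify, classify, overlap_degree,
                                  threshold=0.5):
    """Pair every recognized face with every detected body and record
    (identity information, behavior type) whenever the degree of overlap
    of the two regions exceeds the preset threshold (steps 120-160)."""
    correspondences = []
    for face_box in detect_faces(picture):          # step 120 (faces)
        identity = identify(picture, face_box)      # step 130
        if identity is None:
            continue
        for body_box in detect_bodies(picture):     # step 120 (bodies)
            behavior = classify(picture, body_box)  # step 140
            if overlap_degree(face_box, body_box) > threshold:  # 150-160
                correspondences.append((identity, behavior))
    return correspondences
```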
Optionally, referring to fig. 1b, after the calculating the degree of overlap between the face region and the body region, the method further includes:
and 151, determining the correction parameters corresponding to the behavior types according to a preset algorithm, wherein the preset algorithm is used for determining different correction parameters corresponding to different behavior types.
In practical applications, the person in the target picture may exhibit many behaviors, i.e. the body may be in various postures or performing various actions, and different actions and postures can affect the positional relationship between a person's face and body differently. For example, when performing behavior detection on students in a classroom, student A in the front row may lean down on the desk while student B in the row behind sits upright listening, with the two students in the same column. In the target picture acquired in this situation, the face region corresponding to student B's face and the body region corresponding to student A's body may have a high degree of overlap, even though the two students are not the same person. Determining whether a face region and a body region correspond to the same person by the degree of overlap alone may therefore lead to errors.
To address this problem, in the embodiment of the invention, after the behavior type corresponding to the human body region in the target picture is determined, a correction parameter corresponding to that behavior type can be determined according to a preset algorithm. The preset algorithm may take various forms; for example, it may be a lookup table storing the correction parameters corresponding to the different behavior types.
The correction parameter typically lies in the range 0 to 1, inclusive. For example, if the behavior type is lying head-down on the desk, the corresponding correction parameter may be 0.5; if the behavior type is sitting upright, the corresponding correction parameter may be 1; if the behavior type is head lowered, the corresponding correction parameter may be 0. In practice, different correction parameters can be set for different behavior types according to the actual situation; the embodiment of the invention does not limit the specific values.
In actual use of the behavior detection method provided by the embodiment of the invention, setting a separate correction parameter for every behavior type can waste resources and reduce the operating efficiency of the electronic device. Therefore, to simplify the method flow and improve operating efficiency, optionally, step 151 of determining the correction parameter corresponding to the behavior type according to a preset algorithm may include:
if the behavior type is one in which the front of the body faces the shooting position, the correction parameter is the first correction parameter; and/or,
if the behavior type is one in which the side of the body faces the shooting position, the correction parameter is the second correction parameter; and/or,
if the behavior type is one in which the back of the body faces the shooting position, the correction parameter is the third correction parameter.
Analysis of a large number of actual behaviors shows that how well a face region matches a body region is directly related to the body orientation when the behavior occurs. When a person's body faces the shooting position, the person's face is very likely to be captured at the same time; when the person's back faces the shooting position, the face is unlikely to be captured, so even if a face region overlaps that person's body region to a high degree, the probability that the two belong to the same person is low.
Based on this principle, in the embodiment of the invention the behavior types may be divided into three classes according to the orientation of the body when the behavior occurs: front of the body facing the shooting position, side facing, and back facing. Corresponding correction parameters may be preset for each of the three classes; for example, the first correction parameter may be 1, the second 0.7 and the third 0.4. The specific values may of course be set as needed.
In this way, a correction parameter need not be set for every individual behavior type, which simplifies the flow and improves the efficiency of the electronic device while keeping the determination accurate.
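A sketch of this simplified scheme as a lookup table, using the example values given above; the orientation labels are illustrative names, and in practice the mapping from behavior type to orientation class would be defined per deployment.

```python
# Orientation-based correction parameters (example values from above).
CORRECTION_PARAMS = {
    "front_to_camera": 1.0,   # first correction parameter
    "side_to_camera": 0.7,    # second correction parameter
    "back_to_camera": 0.4,    # third correction parameter
}

def corrected_overlap(overlap: float, orientation: str) -> float:
    """Correct the raw degree of overlap by the parameter for the body
    orientation implied by the detected behavior type."""
    return overlap * CORRECTION_PARAMS.get(orientation, 1.0)
```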
In step 160, if the degree of overlap is greater than the preset threshold, establishing the correspondence between the behavior type and the identity information includes:
correcting the degree of overlap using the correction parameter and, if the corrected degree of overlap is greater than the preset threshold, establishing the correspondence between the behavior type and the identity information.
After the correction parameter is determined, the degree of overlap can be corrected with it, generally by directly multiplying the degree of overlap by the correction parameter. For example, suppose the ratio of the overlapping area of the face region and the body region to the area of the face region is taken as the degree of overlap, and the degree of overlap is 0.5. If the determined correction parameter is 0.8, the corrected degree of overlap is 0.5 × 0.8 = 0.4.
The corrected degree of overlap is then compared with the preset threshold, which may be selected as needed, e.g. 0.3, 0.4, 0.5, etc.
If the corrected degree of overlap is greater than the preset threshold, the overlap is high and the face region and the body region can be considered to belong to the same person. The determined behavior type can therefore be associated with the corresponding identity information, i.e. the correspondence between the behavior type and the identity information is established, through which the behavior of each specific person can be determined for better recording and subsequent analysis.
For example, when behavior detection is performed for students, a face region and a body region are determined in the target picture. From the face region, the student's identity information is determined, e.g. name: Xiao A; class: Class 1, Grade 3. From the body region, the behavior type is determined to be: playing with a mobile phone. After calculating and correcting the degree of overlap, the corrected degree of overlap is 0.6, which exceeds the preset threshold of 0.5, indicating that the determined face region and body region belong to the same student. The correspondence between the behavior type and the identity information can therefore be established: the student named Xiao A in Class 1 of Grade 3 is playing with a mobile phone.
If the corrected degree of overlap is less than or equal to the preset threshold, the overlap is low and the face region and the body region are considered not to belong to the same person. The behavior type therefore cannot be associated with the identity information corresponding to the face region, and this detection result can be discarded; detection can be performed again, or a new target picture can be acquired and processed according to the above steps.
To establish the correspondence between the identity information and the behavior type more conveniently and quickly in practical applications, in this embodiment of the invention, step 160 of establishing the correspondence between the behavior type and the identity information includes:
and 161, determining the target person corresponding to the face region according to the identity information.
After the electronic device determines the identity information corresponding to the face region in the target picture, it can determine from that identity information the target person to whom the identity information belongs, i.e. the specific person corresponding to the face region in the target picture.
And step 162, taking the behavior type as the behavior type of the target person.
After the specific target person is determined, the determined behavior type can be taken as the behavior that the target person has performed or is performing. Thus, each detected behavior type is associated with a specific person.
In the embodiment of the invention, after the target picture is acquired, the face region and the human body region can be determined in the target picture and recognized and detected separately, so that the identity information corresponding to the face region and the behavior type corresponding to the human body region are determined. The degree of overlap between the two regions is then calculated and corrected according to the determined behavior type, and the corrected degree of overlap is compared with the preset threshold; if it is greater than the threshold, the correspondence between the behavior type and the identity information is established. In this way, behavior detection not only detects the specific behavior type but also matches it with identity information, so that it is known exactly who performed the behavior and the behavior can be better analyzed and recorded for the individual. In addition, correcting the degree of overlap improves the accuracy of matching behavior types with identity information, thereby improving the reliability of behavior detection in crowded environments.
With reference to the foregoing embodiments, in order to quickly determine the corresponding regions when determining the face region and the human body region from the target picture, in the embodiment of the invention, step 120 of determining a face region and a human body region in the target picture may include:
and step 121, detecting the human face and the human body contained in the target picture respectively.
After acquiring the target picture, the electronic device can detect the human face and the human body in it using corresponding detection methods, as described in the above embodiments; the details are not repeated here.
And step 122, determining the face through the first rectangular frame, and taking the image range determined by the coordinates of the four vertexes of the first rectangular frame as a face area.
And step 123, determining a human body through the second rectangular frame, and taking the image range determined by the coordinates of the four vertexes of the second rectangular frame as a human body area.
Referring to fig. 2, in the target picture 201 acquired by the electronic device, a human face 211 and a human body 212 have been detected.
The electronic equipment can respectively select the human face area and the human body area in a frame selection mode. Specifically, as shown in fig. 2, the first rectangular frame 202 may frame an image range in which the face 211 is located. The four vertex coordinates of the first rectangular frame 202 are (x1, y1), (x2, y1), (x1, y2), (x2, y2), respectively. The range of the first rectangular frame 202 can be clearly shown by the four vertex coordinates. The range of the first rectangular frame 202 is the face region.
Similarly, the second rectangular frame 203 may frame the image range in which the human body 212 is located. The coordinates of the four vertices of the second rectangular frame 203 are (a1, b1), (a2, b1), (a1, b2), and (a2, b2). The range of the second rectangular frame 203 can be clearly shown by the four vertex coordinates, and this range is the human body region.
Correspondingly, in step 150, calculating the overlapping degree of the face region and the body region may include:
the degree of overlap of the first rectangular frame 202 and the second rectangular frame 203 is calculated.
As in the above-described embodiment, the overlapping degree may be an overlapping area or an overlapping proportion of the first rectangular frame 202 and the second rectangular frame 203.
Preferably, the ratio of the overlapping area of the face region and the body region to the area of the face region is used as the degree of overlap. The degree of overlap can thus be obtained by dividing the overlapping area of the first rectangular frame 202 and the second rectangular frame 203 by the area of the first rectangular frame 202.
Specifically, the following can be obtained from the geometric relationship:
the area of the first rectangular frame 202 is: S1 = (x2 - x1) × (y2 - y1);
the overlapping area of the first rectangular frame 202 and the second rectangular frame 203, in the configuration shown in fig. 2, is: S2 = (x2 - x1) × (y2 - b1);
so the degree of overlap is H = S2/S1, where 0 ≤ H ≤ 1.
Through the process, the overlapping degree of the first rectangular frame 202 and the second rectangular frame 203 can be calculated quickly and conveniently, and the operation efficiency of the electronic equipment is improved.
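The calculation above assumes the configuration of fig. 2; for completeness, a hedged sketch that computes the degree of overlap H = S2/S1 for any pair of axis-aligned rectangles, with boxes given as (x1, y1, x2, y2) tuples as in the earlier sketches:

```python
def overlap_degree(face_box, body_box) -> float:
    """Degree of overlap H = S2 / S1, where S1 is the area of the first
    (face) rectangular frame and S2 the area of its intersection with
    the second (body) rectangular frame, so that 0 <= H <= 1."""
    fx1, fy1, fx2, fy2 = face_box
    bx1, by1, bx2, by2 = body_box
    s1 = (fx2 - fx1) * (fy2 - fy1)
    inter_w = max(0.0, min(fx2, bx2) - max(fx1, bx1))
    inter_h = max(0.0, min(fy2, by2) - max(fy1, by1))
    s2 = inter_w * inter_h
    return s2 / s1 if s1 > 0 else 0.0
```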
Referring to fig. 3a, fig. 3a is a structural diagram of a behavior detection apparatus according to an embodiment of the present invention, which includes:
an obtaining module 301, configured to obtain a target picture;
the recognition module 302 is configured to determine a face region and a body region in the target picture;
the first detection module 303 is configured to perform face recognition on the face region, and determine identity information corresponding to the face region;
a second detection module 304, configured to perform behavior detection on the human body region, and determine a behavior type corresponding to the human body region;
a calculating module 305, configured to calculate an overlapping degree between the face region and the body region;
a matching module 306, configured to establish a corresponding relationship between the behavior type and the identity information if the overlapping degree is greater than a preset threshold.
In the embodiment of the invention, behavior detection not only detects the specific behavior type but also matches it with identity information, so that the identity of the person who performed the behavior is determined, it is known exactly who performed it, and the behavior can be better analyzed and recorded for that individual.
Optionally, referring to fig. 3b, the behavior detection apparatus according to the embodiment of the present invention further includes:
a modification module 307, configured to determine modification parameters corresponding to the behavior types according to a preset algorithm, where the preset algorithm is used to determine different modification parameters corresponding to different behavior types;
the matching module 306 is specifically configured to modify the overlapping degree according to the modification parameter, and if the modified overlapping degree is greater than a preset threshold, establish a corresponding relationship between the behavior type and the identity information.
In the embodiment of the invention, behavior detection not only detects the specific behavior type but also matches it with identity information, so that the identity of the person who performed the behavior is determined and it is known exactly who performed it; the behavior can therefore be better analyzed and recorded for the individual. In addition, correcting the degree of overlap improves the accuracy of matching behavior types with identity information, thereby improving the reliability of behavior detection in crowded environments.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the identification module 302 is specifically configured to:
respectively detecting the human face and the human body contained in the target picture;
determining the face through a first rectangular frame, and taking an image range determined by coordinates of four vertexes of the first rectangular frame as the face area;
determining the human body through a second rectangular frame, and taking an image range determined by coordinates of four vertexes of the second rectangular frame as the human body area;
the calculating module 305 is specifically configured to:
and calculating the overlapping degree of the first rectangular frame and the second rectangular frame.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the first detection module 303 is specifically configured to:
establishing a face model corresponding to the face region according to the face region;
comparing the face model with a preset face model stored in a preset database, and determining a target preset face model matched with the face model;
and taking the identity information corresponding to the target preset face model stored in the preset database as the identity information corresponding to the face area.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the modification module 307 is specifically configured to:
if the behavior type is one in which the front of the body faces the shooting position, the correction parameter is a first correction parameter; and/or,
if the behavior type is one in which the side of the body faces the shooting position, the correction parameter is a second correction parameter; and/or,
if the behavior type is one in which the back of the body faces the shooting position, the correction parameter is a third correction parameter.
Optionally, in the behavior detection apparatus provided in the embodiment of the present invention, the matching module 306 is specifically configured to:
determining a target person corresponding to the face area according to the identity information;
and taking the behavior type as the behavior type of the target person.
An embodiment of the present invention further provides an electronic device, as shown in fig. 4, including a processor 401, a communication interface 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 complete mutual communication through the communication bus 404,
a memory 403 for storing a computer program;
the processor 401, when executing the program stored in the memory 403, implements the following steps:
acquiring a target picture;
determining a face region and a human body region in the target picture;
carrying out face recognition on the face area, and determining identity information corresponding to the face area;
performing behavior detection on the human body area, and determining a behavior type corresponding to the human body area;
calculating the overlapping degree of the human face area and the human body area;
and if the overlapping degree is greater than a preset threshold value, establishing a corresponding relation between the behavior type and the identity information.
The communication bus mentioned in the electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in the figure, but this does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, such as a CPU (Central Processing Unit) or an NP (Network Processor); it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are executed on a computer, the instructions cause the computer to execute the behavior detection method described in any of the above embodiments.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform the behavior detection method described in any of the above embodiments.
For the apparatus, the computer-readable storage medium, and the computer program product embodiment, since the contents of the methods involved are substantially similar to the foregoing method embodiments, the description is relatively simple, and for the relevant points, reference may be made to partial descriptions of the method embodiments.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center incorporating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the method embodiment.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (14)
1. A behavior detection method, comprising:
acquiring a target picture;
determining a face region and a human body region in the target picture;
performing face recognition on the face region, and determining identity information corresponding to the face region;
performing behavior detection on the human body region, and determining a behavior type corresponding to the human body region;
calculating an overlapping degree of the face region and the human body region; and
if the overlapping degree is greater than a preset threshold, establishing a correspondence between the behavior type and the identity information.
2. The method according to claim 1, wherein the determining a face region and a human body region in the target picture comprises:
detecting, respectively, a face and a human body contained in the target picture;
marking the face with a first rectangular frame, and taking the image range defined by the coordinates of the four vertices of the first rectangular frame as the face region; and
marking the human body with a second rectangular frame, and taking the image range defined by the coordinates of the four vertices of the second rectangular frame as the human body region;
and wherein the calculating an overlapping degree of the face region and the human body region comprises:
calculating an overlapping degree of the first rectangular frame and the second rectangular frame.
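(Illustrative aside, not part of the claims.) Claims 1 and 2 recite computing an overlapping degree of two rectangular frames given by their vertex coordinates, but fix no formula. Below is a minimal sketch, assuming each frame is an axis-aligned box (x1, y1, x2, y2) and that the overlapping degree is the intersection area divided by the face-frame area; this formula and the function name are assumptions for illustration (intersection-over-union would be another common choice).

```python
def box_overlap(face_box, body_box):
    """Overlapping degree of two axis-aligned rectangles given as
    (x1, y1, x2, y2): intersection area over face-box area. The exact
    measure is an assumption; the claims leave it open."""
    ix = max(0.0, min(face_box[2], body_box[2]) - max(face_box[0], body_box[0]))
    iy = max(0.0, min(face_box[3], body_box[3]) - max(face_box[1], body_box[1]))
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return (ix * iy) / face_area if face_area else 0.0

# Example: a face frame fully contained in a body frame yields overlap 1.0.
print(box_overlap((40, 10, 60, 35), (30, 5, 75, 120)))  # -> 1.0
```

Normalizing by the face-frame area keeps the score near 1.0 for a correctly paired face and body even though the two boxes differ greatly in size, which intersection-over-union would penalize.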
3. The method according to claim 1, wherein the performing face recognition on the face region and determining identity information corresponding to the face region comprises:
establishing, according to the face region, a face model corresponding to the face region;
comparing the face model with preset face models stored in a preset database, and determining a target preset face model matching the face model; and
taking the identity information that is stored in the preset database and corresponds to the target preset face model as the identity information corresponding to the face region.
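(Illustrative aside, not part of the claims.) Claim 3 does not say how a "face model" is represented or compared. A hedged sketch follows, assuming the model is a fixed-length feature vector and matching picks the preset model with the highest cosine similarity above a floor; the names, the toy data, and the 0.8 floor are all invented for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match_identity(face_model, preset_db, floor=0.8):
    """preset_db maps identity information to a preset face model.
    Returns the identity of the best match above the floor, else None."""
    best_identity, best_score = None, floor
    for identity, preset_model in preset_db.items():
        score = cosine_similarity(face_model, preset_model)
        if score > best_score:
            best_identity, best_score = identity, score
    return best_identity

# Example with toy 3-dimensional "models".
db = {"Zhang San": [0.9, 0.1, 0.4], "Li Si": [0.1, 0.8, 0.6]}
print(match_identity([0.88, 0.12, 0.41], db))  # -> "Zhang San"
```

Returning None when nothing clears the floor mirrors the practical case where the face belongs to no one in the preset database, so no identity information can be attached.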
4. The method according to claim 1, wherein, after the calculating an overlapping degree of the face region and the human body region, the method further comprises:
determining, according to a preset algorithm, a correction parameter corresponding to the behavior type, wherein the preset algorithm determines different correction parameters for different behavior types;
and wherein the establishing a correspondence between the behavior type and the identity information if the overlapping degree is greater than a preset threshold comprises:
correcting the overlapping degree with the correction parameter, and if the corrected overlapping degree is greater than the preset threshold, establishing the correspondence between the behavior type and the identity information.
5. The method according to claim 4, wherein the determining, according to a preset algorithm, the correction parameter corresponding to the behavior type comprises:
if the behavior type is a behavior in which the front of the body faces the shooting position, taking a first correction parameter as the correction parameter; and/or
if the behavior type is a behavior in which the side of the body faces the shooting position, taking a second correction parameter as the correction parameter; and/or
if the behavior type is a behavior in which the back of the body faces the shooting position, taking a third correction parameter as the correction parameter.
6. The method according to claim 1 or 4, wherein the establishing a correspondence between the behavior type and the identity information comprises:
determining, according to the identity information, a target person corresponding to the face region; and
taking the behavior type as a behavior type of the target person.
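(Illustrative aside, not part of the claims.) Tying claims 4-6 together: the claims name first, second, and third correction parameters for front-, side-, and back-facing behaviors but fix no values. The sketch below assumes that less of the face frame overlaps the body frame as the body turns away from the camera, so side- and back-facing behaviors receive more generous corrections; the concrete numbers, the 0.5 threshold, and all names are invented for illustration.

```python
# Hypothetical correction parameters; the claims only name a first, second,
# and third parameter without fixing values.
CORRECTION = {"front": 1.0, "side": 1.3, "back": 1.6}

def attribute_behavior(behavior_type, orientation, identity, overlap, threshold=0.5):
    """Claims 4-6 combined: correct the overlapping degree by the
    orientation-dependent parameter, then bind the behavior type to the
    target person if the corrected overlap clears the preset threshold."""
    corrected = overlap * CORRECTION.get(orientation, 1.0)
    if corrected > threshold:
        return {"person": identity, "behavior": behavior_type}
    return None  # corrected overlap too low: leave the behavior unattributed

# A side-facing behavior with raw overlap 0.42 passes only after correction:
print(attribute_behavior("hand raising", "side", "Zhang San", 0.42))
```

Under these assumed values, 0.42 * 1.3 = 0.546 exceeds the threshold, so the behavior type is bound to the target person's identity, which is the correspondence claim 6 establishes.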
7. A behavior detection device, comprising:
an acquisition module, configured to acquire a target picture;
a recognition module, configured to determine a face region and a human body region in the target picture;
a first detection module, configured to perform face recognition on the face region and determine identity information corresponding to the face region;
a second detection module, configured to perform behavior detection on the human body region and determine a behavior type corresponding to the human body region;
a calculation module, configured to calculate an overlapping degree of the face region and the human body region; and
a matching module, configured to establish a correspondence between the behavior type and the identity information if the overlapping degree is greater than a preset threshold.
8. The device according to claim 7, wherein the recognition module is specifically configured to:
detect, respectively, a face and a human body contained in the target picture;
mark the face with a first rectangular frame, and take the image range defined by the coordinates of the four vertices of the first rectangular frame as the face region; and
mark the human body with a second rectangular frame, and take the image range defined by the coordinates of the four vertices of the second rectangular frame as the human body region;
and wherein the calculation module is specifically configured to:
calculate an overlapping degree of the first rectangular frame and the second rectangular frame.
9. The device according to claim 7, wherein the first detection module is specifically configured to:
establish, according to the face region, a face model corresponding to the face region;
compare the face model with preset face models stored in a preset database, and determine a target preset face model matching the face model; and
take the identity information that is stored in the preset database and corresponds to the target preset face model as the identity information corresponding to the face region.
10. The device according to claim 7, further comprising:
a correction module, configured to determine, according to a preset algorithm, a correction parameter corresponding to the behavior type, wherein the preset algorithm determines different correction parameters for different behavior types;
wherein the matching module is specifically configured to correct the overlapping degree with the correction parameter, and if the corrected overlapping degree is greater than the preset threshold, establish the correspondence between the behavior type and the identity information.
11. The device according to claim 10, wherein the correction module is specifically configured to:
if the behavior type is a behavior in which the front of the body faces the shooting position, take a first correction parameter as the correction parameter; and/or
if the behavior type is a behavior in which the side of the body faces the shooting position, take a second correction parameter as the correction parameter; and/or
if the behavior type is a behavior in which the back of the body faces the shooting position, take a third correction parameter as the correction parameter.
12. The device according to claim 7, wherein the matching module is specifically configured to:
determine, according to the identity information, a target person corresponding to the face region; and
take the behavior type as a behavior type of the target person.
13. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method steps of any one of claims 1-6 when executing the computer program stored in the memory.
14. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811139415.1A CN110969045B (en) | 2018-09-28 | 2018-09-28 | Behavior detection method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110969045A (en) | 2020-04-07 |
CN110969045B (en) | 2023-06-02 |
Family
ID=70026877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811139415.1A Active CN110969045B (en) | 2018-09-28 | 2018-09-28 | Behavior detection method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110969045B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8254633B1 (en) * | 2009-04-21 | 2012-08-28 | Videomining Corporation | Method and system for finding correspondence between face camera views and behavior camera views |
US20150010214A1 (en) * | 2013-07-05 | 2015-01-08 | Canon Kabushiki Kaisha | Information processing device, communication counterpart decision method and storage medium |
CN104463090A (en) * | 2013-11-25 | 2015-03-25 | 安徽寰智信息科技股份有限公司 | Method for recognizing actions of human body skeleton of man-machine interactive system |
US20170155955A1 (en) * | 2015-12-01 | 2017-06-01 | Le Holdings (Beijing) Co., Ltd. | Method and device for distinguishing user data of smart tv |
CN108256404A (en) * | 2016-12-29 | 2018-07-06 | 北京旷视科技有限公司 | Pedestrian detection method and device |
WO2018133666A1 (en) * | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
CN107292240A (en) * | 2017-05-24 | 2017-10-24 | 深圳市深网视界科技有限公司 | It is a kind of that people's method and system are looked for based on face and human bioequivalence |
CN107609517A (en) * | 2017-09-15 | 2018-01-19 | 华中科技大学 | A kind of classroom behavior detecting system based on computer vision |
CN107679613A (en) * | 2017-09-30 | 2018-02-09 | 同观科技(深圳)有限公司 | A kind of statistical method of personal information, device, terminal device and storage medium |
CN108304762A (en) * | 2017-11-30 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of human body attitude matching process and its equipment, storage medium, terminal |
CN108537165A (en) * | 2018-04-08 | 2018-09-14 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining information |
CN108537172A (en) * | 2018-04-09 | 2018-09-14 | 北京邦天信息技术有限公司 | A kind of method and apparatus of the behavior based on Machine Vision Recognition people |
CN108416336A (en) * | 2018-04-18 | 2018-08-17 | 特斯联(北京)科技有限公司 | A kind of method and system of intelligence community recognition of face |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476160A (en) * | 2020-04-07 | 2020-07-31 | 杭州视在科技有限公司 | Loss function optimization method, model training method, target detection method, and medium |
CN111507600A (en) * | 2020-04-10 | 2020-08-07 | 开滦(集团)有限责任公司电信分公司 | Skill martial art monitoring method, device and system and storage medium |
CN112001230A (en) * | 2020-07-09 | 2020-11-27 | 浙江大华技术股份有限公司 | Sleeping behavior monitoring method and device, computer equipment and readable storage medium |
CN112001230B (en) * | 2020-07-09 | 2024-07-30 | 浙江大华技术股份有限公司 | Sleep behavior monitoring method and device, computer equipment and readable storage medium |
CN112714280A (en) * | 2020-12-08 | 2021-04-27 | 深圳英飞拓科技股份有限公司 | Intelligent old age care method, system, equipment and storage medium based on intelligent box |
CN113469132A (en) * | 2021-07-26 | 2021-10-01 | 浙江大华技术股份有限公司 | Violation detection method and device, electronic equipment and storage medium |
CN113688756A (en) * | 2021-08-30 | 2021-11-23 | 维沃移动通信(杭州)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN114443892A (en) * | 2021-12-30 | 2022-05-06 | 深圳云天励飞技术股份有限公司 | Personnel data archiving method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110969045B (en) | 2023-06-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969045B (en) | Behavior detection method and device, electronic equipment and storage medium | |
CN110659397B (en) | Behavior detection method and device, electronic equipment and storage medium | |
EP3341851B1 (en) | Gesture based annotations | |
CN111726586A (en) | Production system operation standard monitoring and reminding system | |
CN111814587B (en) | Human behavior detection method, teacher behavior detection method, and related systems and devices | |
CN107679475B (en) | Store monitoring and evaluating method and device and storage medium | |
CN110321852B (en) | Action type identification method and device, storage medium and computer equipment | |
CN112506340B (en) | Equipment control method, device, electronic equipment and storage medium | |
CN109376631A (en) | A kind of winding detection method and device neural network based | |
CN111339801B (en) | Personnel attention detection method, device, equipment and system | |
CN112464904B (en) | Classroom behavior analysis method and device, electronic equipment and storage medium | |
CN111325082B (en) | Personnel concentration analysis method and device | |
CN109873979A (en) | Camera-based static image difference comparison method and device | |
CN111881740A (en) | Face recognition method, face recognition device, electronic equipment and medium | |
JP6773825B2 (en) | Learning device, learning method, learning program, and object recognition device | |
CN111241926A (en) | Attendance checking and learning condition analysis method, system, equipment and readable storage medium | |
US10438066B2 (en) | Evaluation of models generated from objects in video | |
CN114332927A (en) | Classroom hand-raising behavior detection method, system, computer equipment and storage medium | |
US10445620B2 (en) | Method and system for object tracking in multiple non-linear distortion lenses | |
CN117218703A (en) | Intelligent learning emotion analysis method and system | |
JP4012200B2 (en) | Object detection method, apparatus, and program | |
CN106169057A (en) | Information processor and method | |
US20220122341A1 (en) | Target detection method and apparatus, electronic device, and computer storage medium | |
CN111444803A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN112766230A (en) | Video streaming personnel online time length estimation method and corresponding system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |