CN107920223B - Object behavior detection method and device


Info

Publication number
CN107920223B
CN107920223B (application CN201610875498.5A)
Authority
CN
China
Prior art keywords
target object
behavior
image
preset
time period
Prior art date
Legal status
Active
Application number
CN201610875498.5A
Other languages
Chinese (zh)
Other versions
CN107920223A (en)
Inventor
许可
童鸿翔
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201610875498.5A
Publication of CN107920223A
Application granted
Publication of CN107920223B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements


Abstract

An embodiment of the invention provides a method and a device for detecting object behavior. The method comprises the following steps: obtaining a first image, and detecting whether a target object exists in a preset area of the first image; determining the current state of the target object according to the detection result; determining whether the state of the target object has changed according to the previous state and the current state of the target object; if so, obtaining a first acquisition time of the first image, and updating the previous state to the current state; and determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period. The embodiment of the invention can improve the efficiency of detecting object behavior.

Description

Object behavior detection method and device
Technical Field
The application relates to the technical field of intelligent video monitoring, in particular to a method and a device for detecting object behaviors.
Background
With the development of video surveillance technology, cameras are often installed in important places to obtain surveillance video of those places. Such surveillance video can provide people with information about the monitored location. At present, the late-arrival behavior of staff in a workplace can be monitored from surveillance video.
In the prior art, late-arrival behavior of staff in a workplace is typically detected by manually reviewing the surveillance video. However, manual review requires a person to identify each employee's arrival time and judge whether the employee is late, so this way of detecting late-arrival behavior is inefficient.
Disclosure of Invention
The embodiment of the application aims to provide a method and a device for detecting object behaviors so as to improve the detection efficiency of the object behaviors.
In order to achieve the above object, the present invention discloses an object behavior detection method, comprising:
obtaining a first image;
detecting whether a target object exists in a preset area of the first image;
determining the current state of the target object according to the detection result;
determining whether the state of the target object has changed according to the previous state and the current state of the target object;
if so, acquiring a first acquisition time of the first image, and updating the previous state to the current state;
and determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
Optionally, the detecting whether the target object exists in the preset region of the first image includes:
detecting whether a suspected face area exists in a preset area of the first image;
if yes, judging whether the suspected face area is matched with the pre-stored characteristics of the target object;
and if so, judging that the target object exists in the preset area of the first image.
Optionally, when the current state indicates that the target object appears and the previous state indicates that the target object does not appear, determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period includes:
and when the first acquisition time ∈ (a, b) and the target object appears for the first time within the preset time period, determining that the target object exhibits late-arrival behavior, wherein a is the start time of the preset time period and b is the end time of the preset time period.
Optionally, when the current state indicates that the target object appears and the previous state indicates that the target object does not appear, determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period includes:
and when the first acquisition time ∈ (a, b) and the target object has already appeared within the preset time period, determining that the target object exhibits leave-seat behavior, wherein a is the start time of the preset time period and b is the end time of the preset time period.
Optionally, the method further includes:
when it is detected that no target object exists in the preset area of the first image, the first acquisition time is equal to b, and the target object has already appeared within the preset time period, determining that the target object exhibits early-quit behavior.
Optionally, the method further includes:
when it is detected that no target object exists in the preset area of the first image, the first acquisition time is equal to b, and the target object has not appeared within the preset time period, determining that the target object exhibits absence behavior.
Optionally, the method further includes:
when the target object exists in the preset area of the first image, detecting whether the clothing of the target object meets the requirement or not according to a clothing detection model generated in advance;
and determining the dressing behavior of the target object according to the detection result.
Optionally, the determining, according to the detection result, the dressing behavior of the target object includes:
when the first acquisition time is equal to b, counting a first number of images of the target object, the clothing of which meets the requirement, and counting a second number of images of the target object, the clothing of which does not meet the requirement;
and determining the dressing behavior of the target object according to the first quantity and the second quantity.
In order to achieve the above object, the present invention also discloses an object behavior detection apparatus, including:
an image obtaining module for obtaining a first image;
the object detection module is used for detecting whether a target object exists in a preset area of the first image or not;
the state determining module is used for determining the current state of the target object according to the detection result;
the state judgment module is used for determining whether the state of the target object has changed according to the previous state and the current state of the target object;
the time acquisition module is used for acquiring a first acquisition time of the first image when the state of the target object changes and updating the previous state to the current state;
and the behavior determining module is used for determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
Optionally, the object detection module includes:
the detection submodule is used for detecting whether a suspected face area exists in a preset area of the first image;
the judging submodule is used for judging, when a suspected face area is detected in the preset area of the first image, whether the suspected face area matches the pre-stored features of the target object;
and the determining submodule is used for determining that the target object exists in the preset area of the first image when the suspected face area matches the pre-stored features of the target object.
Optionally, the behavior determining module is specifically configured to:
and when the current state indicates that the target object appears, the previous state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object appears for the first time within the preset time period, determining that the target object exhibits late-arrival behavior, wherein a is the start time of the preset time period and b is the end time of the preset time period.
Optionally, the behavior determining module is specifically configured to:
and when the current state indicates that the target object appears, the previous state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object has already appeared within the preset time period, determining that the target object exhibits leave-seat behavior, wherein a is the start time of the preset time period and b is the end time of the preset time period.
Optionally, the apparatus further comprises an early-quit behavior determination module;
the early-quit behavior determining module is configured to determine that the target object exhibits early-quit behavior when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to b, and the target object has already appeared within the preset time period.
Optionally, the apparatus further comprises an absence behavior determination module;
the absence behavior determining module is configured to determine that the target object exhibits absence behavior when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to b, and the target object has not appeared within the preset time period.
Optionally, the device further comprises a dressing detection module;
the dressing detection module is used for detecting whether the dressing of the target object meets the requirement or not according to a pre-generated dressing detection model when the target object exists in the preset area of the first image;
the behavior determination module is specifically configured to determine the dressing behavior of the target object according to the detection result.
Optionally, the behavior determining module includes:
the counting submodule is used for counting, when the first acquisition time is equal to b, a first number of images in which the dress of the target object meets the requirement and a second number of images in which the dress of the target object does not meet the requirement;
and the determining submodule is used for determining the dressing behavior of the target object according to the first quantity and the second quantity.
According to the technical scheme, whether a target object exists in a preset area of an obtained first image is detected, the current state of the target object is determined according to a detection result, whether the state of the target object changes or not is judged according to the previous state and the current state of the target object, if yes, the first acquisition time of the first image is obtained, and the previous state is updated to the current state; and determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
That is to say, according to the relationship between the first acquisition time and the preset time period when the previous state and the current state of the target object change, the behavior of the target object is determined, and the behavior of the object does not need to be manually detected, so that the detection efficiency of the behavior of the object can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of an object behavior detection method according to an embodiment of the present application;
Fig. 2 is a diagram illustrating several behaviors of a target object over time;
fig. 3 is a schematic structural diagram of an object behavior detection system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an object behavior detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
The embodiment of the application provides an object behavior detection method and device, and aims to improve the detection efficiency of object behaviors. The object behavior detection method and device provided by the embodiment of the application can be applied to a terminal or a server, and can also be applied to other electronic devices, which is not specifically limited in the application.
The present application will be described in detail below with reference to specific examples.
Fig. 1 is a schematic flow chart of an object behavior detection method provided in an embodiment of the present application, where the method includes the following steps:
step S101: a first image is obtained.
Specifically, the obtained first image may be an image collected in real time by a surveillance video collecting device, or an image obtained from a pre-recorded surveillance video. Of course, the images may be acquired in other forms, and the present application is not limited to this.
The object detected in this embodiment occupies a fixed range of positions in the image. Accordingly, the scene in the surveillance video may be an office, in which case this embodiment can detect the behavior of staff in the office; or it may be a conference hall, in which case this embodiment can detect the behavior of the lecturer in the conference hall. The social roles of the detected persons are not specifically limited by the present application.
Step S102: and detecting whether a target object exists in a preset area of the first image.
Specifically, the preset area is set in advance according to the fixed position of the object in the first image, and may be located anywhere in the first image. The preset area may be a regularly shaped region of the image; presetting the area means presetting the coordinate range of that region within the monitoring picture.
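As an illustrative sketch (not specified by the patent), a preset area can be represented as a coordinate range in the monitoring picture, with a helper that checks whether a detected face box falls inside it; the box/region tuple layout and the use of the box center are assumptions for illustration.

```python
def center_in_region(box, region):
    """box and region are (x1, y1, x2, y2) tuples in image coordinates.

    Returns True when the center of `box` lies inside `region`.
    """
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    x1, y1, x2, y2 = region
    return x1 <= cx <= x2 and y1 <= cy <= y2

# Hypothetical desk area within the monitoring picture
preset_region = (100, 200, 300, 400)
```

In a deployment, such a region would be configured once per monitored seat and compared against each detected face box.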
The target object may be a human, but may also be an animal, such as a pet dog, and the like, and the present application is not limited thereto. As the actual video monitoring mainly monitors people in a monitoring scene, as a specific implementation, the target object is a person.
In this embodiment, the step S102, namely, detecting whether the target object exists in the preset area of the first image, may include multiple embodiments:
firstly, whether a human face area exists in a preset area of the first image is detected, and if so, a target object exists in the preset area of the first image is determined.
Since the object occupying the preset area is generally fixed, objects other than the target object rarely appear there. Therefore, in this embodiment it is only necessary to detect whether a human face exists in the preset area, without distinguishing whether the face detected in each image is the same person, so detection efficiency can be improved.
Secondly, detecting whether a suspected face area exists in a preset area of the first image; if yes, judging whether the suspected face area is matched with the pre-stored characteristics of the target object; and if so, judging that the target object exists in the preset area of the first image.
In this embodiment, the features of the target object are stored in advance in the terminal, the server, or another electronic device, and the detected suspected face area is matched against those features; that is, it is judged whether the suspected face area possesses the features, and if so, the suspected face area is determined to match.
The pre-stored features of the target object may be obtained from a pre-stored face image of the target object.
For example, in a workplace, in order to detect the behavior of employee A (i.e., A is the target object), a face image of A may be collected in advance, and features of that face image may be extracted and stored in the electronic device. When detecting A's behavior, it can first be detected whether a suspected face area exists in the preset area of an image; if so, the features of the suspected face area are extracted and matched against the pre-stored features of A's face image. If the match succeeds, it is determined that employee A is detected in the preset area of the image. If employee B sits at employee A's work position, the target object will not be detected in the preset area of the image. Therefore, this embodiment can improve detection accuracy.
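The patent does not specify a matching algorithm; a common approach, shown here purely as an assumed sketch, represents face features as fixed-length embedding vectors and declares a match when cosine similarity exceeds a threshold (the threshold value 0.8 is hypothetical).

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def matches_target(candidate_embedding, stored_embedding, threshold=0.8):
    """True when the suspected face region's embedding matches the
    pre-stored target features under the assumed similarity threshold."""
    return cosine_similarity(candidate_embedding, stored_embedding) >= threshold
```

Any feature representation with a comparable similarity measure could substitute; the point is that matching is a thresholded comparison against pre-stored features.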
It should be noted that detecting a face region in an image belongs to the prior art, and a specific process thereof is not described herein again.
Step S103: and determining the current state of the target object according to the detection result.
The state of the target object may be divided into two states of presence and absence depending on whether the target object is present in the preset area.
When the target object is detected to exist in the preset area of the first image, the current state of the target object is determined to be present. When the target object is detected not to exist in the preset area of the first image, the current state of the target object is determined to be absent.
Since the surveillance video is composed of a large number of images, the state of the target object, either present or absent, can be determined when each image is detected. For the currently detected first image, the determined state of the target object is the current state; for the image preceding the first image, the determined state is the previous state.
Step S104: and judging whether the state of the target object is changed or not according to the last state and the current state of the target object, and if so, executing the step S105. Otherwise, no processing is performed.
In this embodiment, whether the state of the target object has changed may be determined from the previous state and the current state of the target object, which covers the following cases:
when the previous state of the target object is present and the current state is present, it is determined that the state of the target object has not changed;
when the previous state of the target object is absent and the current state is absent, it is determined that the state of the target object has not changed;
when the previous state of the target object is present and the current state is absent, it is determined that the state of the target object has changed;
and when the previous state of the target object is absent and the current state is present, it is determined that the state of the target object has changed.
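The four cases above reduce to a single comparison. A minimal sketch, assuming the two states are represented as booleans (True for present, False for absent), which the patent itself does not prescribe:

```python
def state_changed(previous_present, current_present):
    """True exactly when the target object's state differs between the
    previous image and the current image (the two 'changed' cases above)."""
    return previous_present != current_present
```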
Step S105: and acquiring a first acquisition moment of a first image, and updating the previous state to the current state.
As a specific implementation of this embodiment, obtaining the first acquisition time of the first image may include: determining the first acquisition time of the first image according to the time at which video recording started, the frame rate, and the position of the first image within the video. For example, if recording starts at 8:00, the frame rate is 25 frames/s, and the first image is the 10000th frame of the video, then the first acquisition time of the first image is 8:00 + 10000 frames / (25 frames/s) = 8:06:40.
Or, for each acquired image, storing the acquisition time of the image, and directly acquiring the first acquisition time of the first image according to the stored acquisition time.
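The first approach above is simple arithmetic on the recording start time. A sketch using the worked example (the calendar date is hypothetical; the patent gives only the time of day, and frame 10000 is taken to be a 400 s offset as in the example):

```python
from datetime import datetime, timedelta

def acquisition_time(recording_start, frame_index, fps):
    """Acquisition time of a frame = recording start time + frame offset."""
    return recording_start + timedelta(seconds=frame_index / fps)

# Worked example from the text: start 8:00, 25 frames/s, frame 10000
t = acquisition_time(datetime(2016, 9, 30, 8, 0, 0), 10000, 25)
# t is 8:06:40 on the same (hypothetical) date
```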
Step S106: and determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
It is understood that the preset time period may be understood as a time period in which the target object should be in the preset area under normal conditions, and the preset time period is determined by the start time and the end time.
The preset time period may be expressed in absolute time. For example, for a workplace where the on-duty time is 8:00 and the off-duty time is 16:00, the preset time period may be set to 8:00-16:00, where 8:00 is the start time and 16:00 is the end time.
The preset time period may also be expressed in relative time instants. For example, the on-duty time is 0 and the off-duty time is 8, the preset time period may be set to [0, 8 ]. Wherein 0 is the starting time, and 8 is the ending time.
The relationship between the first acquisition time t and the preset time period [a, b] may include: t < a, t = a, a < t < b, t = b, t > b, and combinations of the foregoing, where a is the start time of the preset time period and b is the end time of the preset time period.
According to the difference between the first acquisition time and the preset time period, the correspondingly determined behavior of the target object is also different, and the behavior of the target object may include: normal attendance, late arrival, absence, etc.
In addition, in a specific implementation of this embodiment, the method may further include: recording the first image, the first acquisition time, and the corresponding behavior. In a more specific implementation, a preset number of images before the first image and a preset number of images after the first image may be obtained, a first video may be generated from these images, and the first video, the first acquisition time, and the corresponding behavior may be recorded.
It is understood that the recorded information can be used as evidence of corresponding behavior of the target object.
As can be seen from the above, in this embodiment, first, whether a target object exists in a preset region of an obtained first image is detected, a current state of the target object is determined according to a detection result, whether a state of the target object changes is determined according to a previous state and the current state of the target object, and if yes, a first acquisition time of the first image is obtained, and the previous state is updated to the current state; and determining the behavior of the target object according to the relation between the first acquisition time and a preset time period. That is to say, according to the relationship between the first acquisition time and the preset time period when the previous state and the current state of the target object change, the behavior of the target object is determined, and the behavior of the object does not need to be manually detected, so that the detection efficiency of the behavior of the object can be improved.
In the embodiment shown in fig. 1, it may be determined that different behaviors exist in the target object according to different contents of the previous state and the current state of the target object. To more specifically determine different behaviors of the target object, the embodiment shown in FIG. 1 may include different implementations. Described separately below.
In another embodiment of the present application, in the embodiment shown in fig. 1, the step S106 of determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period may include:
when the current state indicates that the target object appears, the previous state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and this is the target object's first appearance within the preset time period, determining that the target object exhibits late-arrival behavior.
Here ∈ denotes set membership, a is the start time of the preset time period, b is the end time of the preset time period, and (a, b) denotes the interval that excludes point a and includes point b.
It should be noted that the late behavior means that the first appearance time of the target object within the preset time period is later than the starting time.
Specifically, after the current state of the target object is determined, the current state and the number of occurrences of each state may be recorded and stored, including the number of times the target object appears and the number of times it does not appear. The number of times the target object does not appear can be understood as the number of times the target object leaves the preset area.
In a specific implementation manner of this embodiment, it may be determined whether the target object appears for the first time within a preset time period according to the stored appearance times of the target object.
It can be understood that if the first acquisition time t = a, the target object is considered to have arrived on time and no late-arrival behavior exists.
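The late-arrival rule above can be sketched as a single predicate. Times are plain numbers here for illustration, and the interval follows the patent's convention that (a, b) excludes a and includes b:

```python
def is_late(t, a, b, first_appearance):
    """True when an absent-to-present transition at acquisition time t,
    with t in (a, b] and being the first appearance in the period,
    constitutes late-arrival behavior. Appearing exactly at t == a
    counts as on time."""
    return a < t <= b and first_appearance
```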
In another embodiment of the present application, in the embodiment shown in fig. 1, the step S106 of determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period may include:
when the current state indicates that the target object appears, the previous state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object has already appeared within the preset time period, determining that the target object exhibits leave-seat behavior.
It should be noted that leave-seat behavior means that the target object appeared within the preset time period, left the seat partway through, and returned to the seat before the end time.
Specifically, whether the target object has appeared within the preset time period may be determined according to the stored number of times of appearance of the target object.
It can be understood that if the target object exhibits leave-seat behavior within the preset time period, the target object first appeared within the period and then left the seat and subsequently returned. When the target object returns to the seat, and the return time does not exceed the end time, the target object can be considered to exhibit leave-seat behavior.
It is worth pointing out that the target object may leave the seat multiple times within the preset time period. Therefore, in a specific implementation of this embodiment, the number of leave-seat events may also be recorded.
In another embodiment of the present application, the embodiment shown in fig. 1 may further include:
when it is detected that no target object exists in a preset area of the first image, the first acquisition time is equal to b, and the target object is determined to have early-quit behavior under the condition that the target object has appeared in the preset time period.
It should be noted that early-quit behavior means that the target object appeared within the preset time period but left the seat partway through and did not return before the end time. Early-quit behavior is determined only when the target object appeared within the preset time period but is not detected in the preset area of the first image corresponding to the end time b.
In another embodiment of the present application, the embodiment shown in fig. 1 may further include:
when it is detected that no target object exists in a preset area of the first image, the first acquisition time is equal to b, and the target object does not appear in the preset time period, determining that the target object has an absent behavior.
It should be noted that absence behavior means that the target object never appears within the preset time period. Absence behavior can be determined only when the target object is not detected in the preset area of the first image corresponding to the end time b and the target object never appeared within the preset time period.
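The two end-of-period rules (early-quit and absence) differ only in whether the target ever appeared. A combined sketch, with hypothetical label strings:

```python
def end_of_period_behavior(present_at_b, appeared_in_period):
    """Verdict at acquisition time t == b:
    - target present at b: no end-of-period violation ("present");
    - absent at b but appeared earlier in the period: "early-quit";
    - absent at b and never appeared in the period: "absent"."""
    if present_at_b:
        return "present"
    return "early-quit" if appeared_in_period else "absent"
```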
In addition, in order to detect whether the target object is dressed satisfactorily, in another embodiment of the present application, the method may further include:
Step 1: when the target object exists in the preset area of the first image, detecting whether the dressing of the target object meets the requirement according to a pre-generated dressing detection model.
Specifically, images containing satisfactory dressing may be collected in advance, and the dressing in each image marked. A preset machine learning model is then trained with these images to obtain the dressing detection model.
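As an illustration only, this training step might be sketched as below. The patent specifies only "a preset machine learning model"; the nearest-centroid classifier, the feature vectors, and all names here are assumptions standing in for a real detector trained on marked garment images:

```python
import numpy as np

class DressingDetectionModel:
    """Toy stand-in for the pre-generated dressing detection model."""

    def fit(self, features, labels):
        # One feature vector per training image; label 1 = dressing
        # meets the requirement, label 0 = it does not.
        features = np.asarray(features, dtype=float)
        labels = np.asarray(labels)
        self.centroids = {int(c): features[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, feature):
        # Classify by nearest centroid under Euclidean distance.
        feature = np.asarray(feature, dtype=float)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feature - self.centroids[c]))
```

In practice the preset model would be a far stronger classifier (e.g. a convolutional network over the detected person region), but the train-then-predict shape is the same.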
Step 2: and determining the dressing behavior of the target object according to the detection result.
Specifically, when the detection result indicates that the dressing of the target object meets the requirement, the dressing behavior of the target object is determined to be normal; when the detection result indicates that the dressing of the target object does not meet the requirement, the dressing behavior of the target object is determined to be abnormal.
In one implementation manner, step 2 may comprise:
Step 2A: when the first acquisition time is equal to b, counting a first number of images in which the dressing of the target object meets the requirement, and a second number of images in which the dressing of the target object does not meet the requirement.
Step 2B: determining the dressing behavior of the target object according to the first number and the second number.
Specifically, the proportion of images in which the dressing behavior of the target object is normal can be obtained from the first number and the second number, that is, first number / (first number + second number). Whether this proportion is greater than a preset proportion threshold is then judged: if so, the dressing behavior of the target object is determined to be normal; otherwise, it is determined to be abnormal.
Considering the various interference factors present in the actual detection process, the preset proportion threshold may be set to a relatively small value, for example, 10% or 30%.
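The proportion test in step 2B reduces to a few lines. A minimal sketch, with the function name and result labels chosen here for illustration:

```python
def dressing_behavior(n_ok, n_bad, threshold=0.3):
    """'normal' if the share of images with satisfactory dressing
    exceeds the preset proportion threshold, otherwise 'abnormal'."""
    total = n_ok + n_bad
    ratio = n_ok / total if total else 0.0  # guard against zero images
    return "normal" if ratio > threshold else "abnormal"
```

With the 30% threshold suggested above, 7 satisfactory images out of 10 give a ratio of 0.7 and therefore a "normal" result.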
The present application will be described in detail with reference to specific examples.
FIG. 2 is a diagram illustrating the relationship between late, leaving-seat, early-quit, and absent behaviors and time, where a is the starting time of the preset time period and b is the termination time. The target object is denoted by A.
The boxes with different filling patterns in fig. 2 represent whether the target object is present in the preset region of the image. As shown in (1) in fig. 2, when the currently detected image is located between a and t1, it is determined that A is not present in the preset area; both the previous state and the current state of A indicate that A is not present, i.e., the state of A has not changed.
When the currently detected image is located at t1, i.e., t1 is the first acquisition time, it is determined that A exists in the preset area. The previous state of A indicates that A was not present, and the current state indicates that A is present, so the state of A has changed. Since t1 lies between a and b and A appears for the first time, it can be determined that A has late behavior.
When the currently detected image is located between t1 and t2, it is determined that A exists in the preset area, but both the previous state and the current state of A indicate that A is present, i.e., the state of A has not changed.
When the currently detected image is located at t2, i.e., t2 is the first acquisition time, it is determined that A does not exist in the preset region. The previous state of A indicates that A was present, and the current state indicates that A is not, so the state of A has changed; however, it cannot yet be determined whether A exhibits leaving-seat or early-quit behavior.
When the currently detected image is located between t2 and t3, it is determined that A does not exist in the preset area, and both the previous state and the current state of A indicate that A is not present, i.e., the state of A has not changed.
When the currently detected image is located at t3, it is determined that A exists in the preset area. The previous state of A indicates that A was not present, and the current state indicates that A is present, so the state of A has changed. Since A has already appeared between a and b, it can be determined that A has leaving-seat behavior.
When the currently detected image is located between t3 and t4, it is determined that A exists in the preset area, but both the previous state and the current state of A indicate that A is present, i.e., the state of A has not changed.
When the currently detected image is located at t4, i.e., t4 is the first acquisition time, it is determined that A does not exist in the preset region. The previous state of A indicates that A was present, and the current state indicates that A is not, so the state of A has changed; again, it cannot yet be determined whether A exhibits leaving-seat or early-quit behavior.
When the currently detected image is located between t4 and b, it is determined that A is not present in the preset area, and both the previous state and the current state of A indicate that A is not present, i.e., the state of A has not changed.
If the currently detected image is located at b, i.e., b is the first acquisition time, it is determined that A does not exist in the preset area. Since A has already appeared within the time period from a to b, it can be determined that A has early-quit behavior.
As shown in (2) in fig. 2, if the currently detected image is located at b, i.e., b is the first acquisition time, it is determined that A does not exist in the preset area. Since A never appeared within the time period from a to b, it can be determined that A has absent behavior.
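The fig. 2 walkthrough can be replayed as a runnable sketch. The driver below is illustrative only (function name, labels, and sample timestamps are assumptions); it emits a behavior on each qualifying appearance and applies the termination-time check at b:

```python
def detect_behaviors(detections, a, b):
    """detections: list of (acquisition_time, present) samples in time order."""
    prev, has_appeared, behaviors = False, False, []
    for t, present in detections:
        # Appearance inside (a, b): late behavior on the first
        # occurrence, leaving-seat behavior afterwards.
        if present and not prev and a < t < b:
            behaviors.append("late" if not has_appeared else "left-seat")
        # Termination-time check: an absent object at b means early-quit
        # if it had appeared, absent behavior if it never did.
        if not present and t == b:
            behaviors.append("early-quit" if has_appeared else "absent")
        prev = present
        has_appeared = has_appeared or present
    return behaviors

# Timeline (1) of fig. 2: the object appears at t1=2, leaves at t2=4,
# returns at t3=6, leaves again at t4=8, and is still absent at b=10.
timeline = [(1, False), (2, True), (4, False), (6, True), (8, False), (10, False)]
print(detect_behaviors(timeline, a=0, b=10))  # → ['late', 'left-seat', 'early-quit']
```

Note that the disappearances at t2 and t4 emit nothing by themselves; only the check at the termination time resolves them into early-quit behavior, exactly as in the walkthrough.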
The present application further provides an object behavior detection system. Fig. 3 is a schematic structural diagram of the system, which corresponds to the method embodiment shown in fig. 1. The system comprises a video acquisition unit 301, a video analysis unit 302, a data association unit 303 and a violation presentation unit 304.
The video acquisition unit 301 is configured to acquire a first image including a preset region, and send the image to the video analysis unit 302.
Specifically, the video acquisition unit 301 may be implemented by a camera, which can be wall-mounted or ceiling-mounted depending on the scenario.
According to actual application requirements, the video acquisition unit 301 may further include a light supplement device.
A video analysis unit 302, configured to obtain a first image sent by the video acquisition unit 301; detecting whether a target object exists in a preset area of the first image; determining the current state of the target object according to the detection result; judging whether the state of the target object changes or not according to the last state and the current state of the target object; if so, acquiring a first acquisition time of the first image, and updating the previous state to the current state; determining the behavior of the target object according to the relation between the first acquisition time and a preset time period; the first image, the first acquisition time and the behavior of the target object are sent to the data association unit 303.
Specifically, the video analysis unit 302 may be integrated into a camera chip, and the video analysis unit 302 and the video acquisition unit 301 may be integrated as a whole according to actual requirements and the type of camera in the video acquisition unit. Alternatively, the video analysis unit 302 may be implemented by a server or an embedded device.
And the data association unit 303 is configured to receive and store the first image, the first acquisition time, and the behavior of the target object, which are sent by the video analysis unit 302.
Specifically, the data association unit 303 may store the information in a database, associate the information with related target objects, and update the information of all target objects.
The violation display unit 304 is configured to perform statistics on the stored first images, first acquisition times, and behaviors of the target object, generate a target object behavior report, and display the report in a visual manner.
Fig. 4 is a schematic structural diagram of an object behavior detection apparatus provided in an embodiment of the present application, and the apparatus includes, corresponding to the method embodiment shown in fig. 1:
an image obtaining module 401, configured to obtain a first image;
an object detection module 402, configured to detect whether a target object exists in a preset region of the first image;
a state determining module 403, configured to determine a current state of the target object according to the detection result;
a state determining module 404, configured to determine whether a state of the target object changes according to a previous state and the current state of the target object;
a time obtaining module 405, configured to obtain a first collecting time of the first image when the state of the target object changes, and update the previous state to the current state;
a behavior determining module 406, configured to determine a behavior of the target object according to a relationship between the first acquisition time and a preset time period.
As another embodiment of the present application, in the embodiment shown in fig. 4, the object detection module 402 may include a detection submodule, a judging submodule, and a determining submodule (not shown in the figure);
The detection submodule is used for detecting whether a suspected face area exists in a preset area of the first image;
the judging submodule is used for judging, when a suspected face area is detected in the preset area of the first image, whether the suspected face area matches the features of the pre-stored target object;
and the determining submodule is used for determining that the target object exists in the preset area of the first image when the suspected face area is judged to match the features of the pre-stored target object.
As another implementation manner of the present application, in the embodiment shown in fig. 4, the behavior determining module 406 is specifically configured to:
when the current state indicates that the target object appears, the last state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object appears for the first time within the preset time period, determining that the target object has late behavior, wherein a is the starting time of the preset time period, and b is the ending time of the preset time period.
As another implementation manner of the present application, in the embodiment shown in fig. 4, the behavior determining module 406 is specifically configured to:
when the current state indicates that the target object appears, the last state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object has already appeared within the preset time period, determining that the target object has leaving-seat behavior.
As another implementation manner of the present application, the embodiment shown in fig. 4 may further include an early-quit behavior determination module (not shown in the figure);
the early-quit behavior determining module is configured to determine that an early-quit behavior exists in the target object when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to the b, and the target object has already appeared in the preset time period.
As another implementation manner of the present application, the embodiment shown in fig. 4 may further include an absence behavior determination module (not shown in the figure);
the absence behavior determining module is configured to determine that an absence behavior exists in the target object when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to the second acquisition time, and the target object does not appear within the preset time period.
As another embodiment of the present application, in the embodiment shown in fig. 4, the apparatus may further include a dressing detection module (not shown in the figure);
the dressing detection module is used for detecting whether the dressing of the target object meets the requirement or not according to a pre-generated dressing detection model when the target object exists in the preset area of the first image;
the behavior determining module 406 is specifically configured to determine the dressing behavior of the target object according to the detection result.
As another implementation manner of the present application, in the embodiment shown in fig. 4, the behavior determination module 406 may include a counting submodule and a determining submodule (not shown in the figure);
The counting submodule is used for counting, when the first acquisition time is equal to b, a first number of images in which the dressing of the target object meets the requirement, and a second number of images in which the dressing of the target object does not meet the requirement, wherein b is the termination time of the preset time period;
and the determining submodule is used for determining the dressing behavior of the target object according to the first quantity and the second quantity.
Since the device embodiment and the system embodiment are obtained based on the method embodiment and have the same technical effect as the method, the technical effects of the device embodiment and the system embodiment are not described herein again.
For the apparatus embodiment and the system embodiment, since they are substantially similar to the method embodiment, they are described relatively simply, and reference may be made to some descriptions of the method embodiment for related points.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the steps in the above embodiments can be implemented by a program instructing relevant hardware, and the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. An object behavior detection method, comprising:
obtaining a first image;
performing target detection on the preset area of the first image to obtain a detection result, wherein the detection result indicates whether a target object exists in the preset area of the first image;
determining the current state of the target object according to the detection result;
judging whether the state of the target object changes or not according to the last state and the current state of the target object;
if so, acquiring a first acquisition time of the first image, and updating the previous state to the current state;
and determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
2. The method according to claim 1, wherein the performing target detection on the preset area of the first image to obtain a detection result comprises:
detecting whether a suspected face area exists in a preset area of the first image;
if yes, judging whether the suspected face area is matched with the pre-stored characteristics of the target object;
and if so, judging that the target object exists in the preset area of the first image.
3. The method of claim 1, wherein when the current state indicates that the target object is present and the previous state indicates that the target object is not present, the determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period comprises:
and when the first acquisition time ∈ (a, b) and the target object appears for the first time in the preset time period, determining that the target object has a late behavior, wherein a is the starting time of the preset time period, and b is the ending time of the preset time period.
4. The method of claim 1, wherein when the current state indicates that the target object is present and the previous state indicates that the target object is not present, the determining the behavior of the target object according to the relationship between the first acquisition time and a preset time period comprises:
and when the first acquisition time ∈ (a, b) and the target object has appeared in the preset time period, determining that the target object has a leaving behavior, wherein a is the starting time of the preset time period, and b is the ending time of the preset time period.
5. The method according to any one of claims 1-4, further comprising:
when it is detected that no target object exists in a preset area of the first image, the first acquisition time is equal to b, and under the condition that the target object has appeared in the preset time period, it is determined that the target object has an early-receding behavior, wherein b is the termination time of the preset time period.
6. The method according to any one of claims 1-4, further comprising:
when it is detected that no target object exists in a preset area of the first image, the first acquisition time is equal to b, and under the condition that the target object does not appear in the preset time period, it is determined that the target object has an absent behavior, wherein b is the termination time of the preset time period.
7. The method according to any one of claims 1-4, further comprising:
when the target object exists in the preset area of the first image, detecting whether the clothing of the target object meets the requirement or not according to a clothing detection model generated in advance;
and determining the dressing behavior of the target object according to the detection result.
8. The method of claim 7, wherein determining the target object's dressing behavior according to the detection result comprises:
when the first acquisition time is equal to b, counting a first number of images of the target object, the clothing of which meets the requirement, and counting a second number of images of the target object, the clothing of which does not meet the requirement, wherein b is the termination time of the preset time period;
and determining the dressing behavior of the target object according to the first quantity and the second quantity.
9. An object behavior detection apparatus, comprising:
an image obtaining module for obtaining a first image;
the object detection module is used for carrying out target detection on the preset area of the first image to obtain a detection result, and the detection result indicates whether a target object exists in the preset area of the first image or not;
the state determining module is used for determining the current state of the target object according to the detection result;
the state judgment module is used for judging whether the state of the target object changes or not according to the last state and the current state of the target object;
the time acquisition module is used for acquiring a first acquisition time of the first image when the state of the target object changes and updating the previous state to the current state;
and the behavior determining module is used for determining the behavior of the target object according to the relation between the first acquisition time and a preset time period.
10. The apparatus of claim 9, wherein the object detection module comprises:
the detection submodule is used for detecting whether a suspected face area exists in a preset area of the first image;
the judging submodule is used for judging whether a suspected face area is matched with the characteristics of a pre-stored target object or not when the suspected face area is detected to exist in the preset area of the first image;
and the judging submodule is used for judging that the target object exists in the preset area of the first image when the suspected face area is judged to be matched with the feature of the pre-stored target object.
11. The apparatus of claim 9, wherein the behavior determination module is specifically configured to:
and when the current state indicates that the target object appears, the last state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object appears for the first time in the preset time period, determining that the target object has late behavior, wherein a is the starting time of the preset time period, and b is the ending time of the preset time period.
12. The apparatus of claim 9, wherein the behavior determination module is specifically configured to:
and when the current state indicates that the target object appears, the last state indicates that the target object does not appear, the first acquisition time ∈ (a, b), and the target object has appeared in the preset time period, determining that the target object has the behavior of leaving the seat, wherein a is the starting time of the preset time period, and b is the ending time of the preset time period.
13. The apparatus according to any of claims 9-12, wherein the apparatus further comprises an early fallback behavior determination module;
the early-quit behavior determination module is configured to determine that the target object has an early-quit behavior when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to b, and the target object has already appeared in the preset time period, where b is a termination time of the preset time period.
14. The apparatus according to any of claims 9-12, wherein the apparatus further comprises an absence behavior determination module;
the absence behavior determination module is configured to determine that an absence behavior exists in the target object when it is detected that the target object does not exist in the preset region of the first image, the first acquisition time is equal to b, and the target object does not appear within the preset time period, where b is a termination time of the preset time period.
15. The apparatus of any one of claims 9-12, further comprising a dressing detection module;
the dressing detection module is used for detecting whether the dressing of the target object meets the requirement or not according to a pre-generated dressing detection model when the target object exists in the preset area of the first image;
the behavior determination module is specifically configured to determine the dressing behavior of the target object according to the detection result.
16. The apparatus of claim 15, wherein the behavior determination module comprises:
the counting submodule is used for counting a first number of the images of the target object, the dressing of which meets the requirement, and counting a second number of the images of the target object, the dressing of which does not meet the requirement, when the first acquisition time is equal to b, wherein b is the termination time of the preset time period;
and the determining submodule is used for determining the dressing behavior of the target object according to the first quantity and the second quantity.
CN201610875498.5A 2016-10-08 2016-10-08 Object behavior detection method and device Active CN107920223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610875498.5A CN107920223B (en) 2016-10-08 2016-10-08 Object behavior detection method and device


Publications (2)

Publication Number Publication Date
CN107920223A CN107920223A (en) 2018-04-17
CN107920223B true CN107920223B (en) 2020-08-28






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant