CN107122743B - Security monitoring method and device and electronic equipment

Info

Publication number: CN107122743B (application CN201710291430.7A)
Authority: CN (China)
Prior art keywords: region, target object, pixel, area, location
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN107122743A
Inventor: 高浩渊
Current and original assignee: Beijing Horizon Robotics Technology Research and Development Co Ltd
Events: application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd with priority to CN201710291430.7A; publication of CN107122743A; application granted; publication of CN107122743B; anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences

Abstract

A security monitoring method, a security monitoring device, and an electronic device are disclosed. The method includes the following steps: detecting a target object in an acquired image sequence of a monitored scene; in response to detecting the target object in a particular image frame of the image sequence, determining the pixel position of the target object in that frame; determining the area type of the specific location area where the target object is located according to the pixel position; and performing a predetermined operation according to the area type of the specific location area. More efficient and accurate security monitoring can thereby be achieved.

Description

Security monitoring method and device and electronic equipment
Technical Field
The present application relates to the field of image processing, and more particularly, to a security monitoring method, apparatus, electronic device, computer program product, and computer-readable storage medium.
Background
Traditionally, camera-based security systems support only remote viewing and video playback, so the actual time at which an intruder enters cannot be determined accurately.
With the development of artificial intelligence technology, more and more security devices determine whether an intrusion has occurred in a monitored scene by detecting a feature part (such as a human face or body). Accurately determining when outside personnel enter would significantly improve the experience of such monitoring.
The current judgment strategy generally relies only on detecting human faces/bodies, and sends alarm information as soon as such a part is detected. However, because the monitored scene may be very complex, for example, a photo or portrait of the user may be placed in it, such a simple and direct detection approach easily produces false alarms, which has a considerable impact on the user experience and reduces the accuracy of security monitoring.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. Embodiments of the present application provide a security monitoring method, apparatus, electronic device, computer program product, and computer-readable storage medium, which can effectively reduce the occurrence of false alarms in security monitoring.
According to one aspect of the application, a security monitoring method is provided, which includes: detecting a target object in an acquired image sequence of a monitored scene; in response to detecting the target object in a particular image frame of the image sequence, determining the pixel position of the target object in that frame; determining the area type of the specific location area where the target object is located according to the pixel position; and performing a predetermined operation according to the area type of the specific location area.
According to another aspect of the present application, there is provided a security monitoring device, including: the target object detection unit is used for detecting a target object in the acquired image sequence of the monitoring scene; a pixel position determination unit for determining a pixel position of a target object in a specific image frame among the image sequence in response to the target object being detected in the specific image frame; a region type determining unit, configured to determine a region type of a specific position region where the target object is located according to the pixel position; and a predetermined operation performing unit for performing a predetermined operation according to the area type of the specific location area.
According to another aspect of the present application, there is provided an electronic device including: a processor; a memory; and computer program instructions stored in the memory, which when executed by the processor, cause the processor to perform the security monitoring method described above.
According to another aspect of the application, a computer program product is provided, comprising computer program instructions which, when executed by a processor, cause the processor to perform the above-described security monitoring method.
According to another aspect of the present application, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the security monitoring method described above.
Compared with the prior art, with the security monitoring method, the security monitoring device, the electronic device, the computer program product, and the computer-readable storage medium of the present application, a target object can be detected in an acquired image sequence of a monitored scene; in response to detecting the target object in a particular image frame of the image sequence, the pixel position of the target object in that frame can be determined; the area type of the specific location area where the target object is located can be determined according to the pixel position; and a predetermined operation can be performed according to the area type of the specific location area. More efficient and accurate security monitoring can thereby be achieved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a schematic diagram of an application scenario of security monitoring operation according to an embodiment of the present application.
FIG. 2 illustrates a flow chart of a security monitoring method according to an embodiment of the present application.
Fig. 3 illustrates a flow chart of the target object detection step according to an embodiment of the present application.
FIG. 4 illustrates a flow chart of the pixel location determining step according to an embodiment of the present application.
Fig. 5 illustrates a flowchart of a region type determining step according to an embodiment of the present application.
Fig. 6 illustrates a schematic diagram of a monitoring scenario according to a specific example of an embodiment of the present application.
Fig. 7 illustrates a flowchart of a region type division step according to an embodiment of the present application.
FIG. 8 illustrates a block diagram of a security monitoring device according to an embodiment of the application.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, security monitoring algorithms currently on the market use only feature part (e.g., human face/body) detection, and the overall user experience is degraded by problems such as false alarms.
Further analysis shows that existing feature-detection-based security monitoring schemes do not distinguish between the different location areas of a monitored scene, which may have different intrusion probabilities; instead, the algorithm is uniformly sensitive to every feature recognition result, producing detection results with a high false alarm rate and degrading the user experience.
In view of this technical problem, the basic concept of the present application is to provide a security monitoring method, device, electronic device, computer program product, and computer-readable storage medium that effectively reduce the false alarm rate and improve the user experience by analyzing the alarm priorities of different location areas in the monitored scene and applying a different alarm strategy to each. Specifically, when analyzing the different location areas, a more complex model can be used to analyze the environmental characteristics of the monitored scene, generate importance information for each location area, and alarm with a different strategy in location areas of different importance. More efficient and accurate security monitoring can thereby be achieved.
It should be noted that the basic concept of the present application can be applied not only to the application scenario of intrusion detection, but also to various application scenarios such as home care, traffic monitoring, and the like.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 1 illustrates a schematic diagram of an application scenario of security monitoring operation according to an embodiment of the present application.
As shown in fig. 1, an application scenario for security monitoring operation includes a security monitoring device 100 and a target object 200.
The target object 200 may be any type of object whose presence in the monitored scene may be of interest. It may include feature parts with certain characteristic information, such as color, texture, shape, or layout, and can therefore be identified in an image by a suitable algorithm. For example, in a home monitoring scenario, the target object 200 may be a person, who may be the subject of intrusion detection, home care, and the like; accordingly, the feature part may be any body part such as the face, body, torso, hands, head, or feet. As another example, in a traffic monitoring scenario, the target object 200 may be a traffic entity such as a motor vehicle, a bicycle, or a pedestrian, which may be the object of traffic flow detection, traffic violation surveillance, and the like; accordingly, the feature part may be an overall feature (e.g., shape) or a local feature (e.g., number plate) of the traffic entity. Alternatively, in other scenarios, the target object 200 may be any other object, such as a robot or a drone.
The security monitoring device 100 may be used for detecting and tracking a target object, determining a position, identifying a region type, and the like. For example, the security monitoring device 100 may include a camera 110, and a security monitoring module 120.
For example, the camera (imaging device) 110, which may include one or more cameras, may be used to capture image data of the monitored scene. The image data acquired by the camera 110 may be a continuous image frame sequence (i.e., a video stream) or a discrete image frame sequence (i.e., an image data set sampled at predetermined sampling time points), etc. The camera 110 may be a monocular camera, a binocular camera, a multi-view camera, etc., and may capture grayscale images or color images carrying color information. Of course, any other type of camera known in the art or appearing in the future may be applied to the present application; the application places no particular limitation on the manner in which an image is captured, as long as grayscale or color information of the input image can be obtained. To reduce the amount of computation in subsequent operations, in one embodiment, the color image may be converted to grayscale before analysis and processing. Of course, to preserve more information, in another embodiment, the color image may also be analyzed and processed directly.
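As a concrete illustration of this acquisition step, the following minimal sketch shows one way to turn a camera into an image frame sequence, with an optional grayscale conversion. It assumes OpenCV; the device index 0 and the choice to convert to grayscale are illustrative, not prescribed by the application.

    # Minimal frame-acquisition sketch (assumes OpenCV; device index and
    # grayscale choice are illustrative assumptions).
    import cv2

    def frame_sequence(device_index=0, to_gray=True):
        cap = cv2.VideoCapture(device_index)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break  # stream ended or camera disconnected
                # Optionally discard color information to reduce later computation.
                yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if to_gray else frame
        finally:
            cap.release()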
The security monitoring module 120 may be configured to detect a target object in an acquired image sequence of the monitored scene; in response to detecting the target object in a particular image frame of the image sequence, determine the pixel position of the target object in that frame; determine the area type of the specific location area where the target object is located according to the pixel position; and perform a predetermined operation according to the area type of the specific location area. More efficient and accurate security monitoring can thereby be achieved.
It should be noted that the above application scenarios are only shown for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited thereto. Rather, embodiments of the present application may be applied to any scenario where it may be applicable. For example, the target object 200 may be one or more, and similarly, the security monitoring device 100 may also be one or more.
Exemplary method
The security monitoring method according to the embodiment of the present application is described below with reference to an application scenario of fig. 1.
FIG. 2 illustrates a flow chart of a security monitoring method according to an embodiment of the present application.
As shown in fig. 2, the security monitoring method according to the embodiment of the present application may include:
in step S120, a target object is detected in the acquired sequence of images of the monitored scene.
For example, image data of a monitored scene may be continuously acquired by a camera to generate an image sequence that is a series of consecutive or discrete image frames generated in a temporal sequence. Then, the target object may be continuously detected in each frame image.
Fig. 3 illustrates a flow chart of the target object detection step according to an embodiment of the present application.
As shown in fig. 3, the step S120 may include:
in sub-step S121, a feature of the target object is detected in each image frame of the image sequence; and
in sub-step S122, in response to detecting the feature, it is determined that the target object is detected.
For convenience of description, the following will be described in detail in the context of an application of intrusion detection.
For example, for security monitoring, a camera may be arranged in advance at a specific position in the space to be monitored (e.g., a living room). The imaging angle of the camera may be set so that it covers the entire living room as completely as possible, so that an intruder entering the living room is captured by the camera immediately.
Then, feature recognition may be performed on the image data obtained from the camera, and feature information such as a human face/body may be detected.
Here, in the application scenario of intrusion detection, the detected object is an intruder, which may be a human, an animal, a robot, or the like, and the following description will be continued by taking a human as an example. Thus, for example, a human face/body or the like characteristic portion of a human can be detected in each image frame.
For example, such feature detection can be implemented by simple feature matching (e.g., matching the shape and layout of facial features) or by machine learning (e.g., a deep neural network).
Once a potential feature such as a human face/body is detected in a specific image frame in the image sequence, it is considered that a person as a target object is detected in the specific image frame.
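The following sketch illustrates sub-steps S121/S122 with one concrete detector choice, OpenCV's bundled Haar cascade for frontal faces. The application leaves the detector open (simple feature matching or a deep neural network would fit equally well), so this detector choice is an assumption for illustration only.

    # Illustrative face-feature detection per frame (assumes OpenCV's bundled
    # Haar cascade; the detector choice is an assumption, not the patent's).
    import cv2

    _face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_features(gray_frame):
        """S121: return bounding boxes (x, y, w, h) of candidate face features."""
        return list(_face_cascade.detectMultiScale(
            gray_frame, scaleFactor=1.1, minNeighbors=5))

    def target_object_detected(gray_frame):
        # S122: detecting any feature part counts as detecting the target object.
        return len(detect_features(gray_frame)) > 0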
In step S140, in response to detecting a target object in a particular image frame among the sequence of images, a pixel position of the target object in the particular image frame is determined.
Once a target object is detected in a particular image frame, the pixel location of the target object may be determined.
FIG. 4 illustrates a flow chart of the pixel location determining step according to an embodiment of the present application.
As shown in fig. 4, the step S140 may include:
in sub-step S141, the pixel position of the feature part in the specific image frame is determined; and
in sub-step S142, the pixel position of the target object is determined from the pixel position of the feature part.
For example, the pixel position of the feature part such as the face/human body in the image frame may be determined, that is, the abscissa and the ordinate of the face/human body in the image frame are determined.
Since human face/body detection labels (e.g., frames with a box) the detected part in the image frame, the framed area of the labeling box can be used directly as the pixel position of the feature part. For example, in the case of a rectangular labeling box, the coordinates of its upper-left and lower-right corners may be selected as the position coordinates of the feature part.
Next, the pixel position of the feature part may simply be used directly as the pixel position of the target object. For example, when the detected feature part is a human body feature, the pixel position of the human body feature may be taken directly as the pixel position of the target object.
Alternatively, the entire target object may be further framed and the pixel position thereof may be determined based on the determined pixel positions of the feature portions. For example, when the detected feature part is a face feature, the position of the human body may be further estimated from the position of the face feature, and the estimated position of the human body may be taken as the pixel position of the target object.
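A minimal sketch of sub-steps S141/S142 follows: the labeling box yields the feature's corner coordinates, and a body position may either be taken over directly or extrapolated from a face box. The extrapolation ratios are illustrative assumptions, not values from the application.

    # Sketch of S141/S142: pixel position from a labeling box; the body
    # extrapolation heuristic below is a hypothetical example.
    def feature_pixel_position(box):
        x, y, w, h = box
        return (x, y), (x + w, y + h)  # upper-left and lower-right corners

    def estimate_body_from_face(face_box, frame_w, frame_h):
        # Assumed heuristic: a standing body spans roughly 7 face heights
        # downward and 3 face widths across, clamped to the frame.
        x, y, w, h = face_box
        top_left = (max(0, x - w), y)
        bottom_right = (min(frame_w, x + 2 * w), min(frame_h, y + 7 * h))
        return top_left, bottom_right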
In step S160, the area type of the specific location area where the target object is located is determined according to the pixel location.
In step S180, a predetermined operation is performed according to the area type of the specific location area.
Once the pixel position of the target object is obtained, these two steps determine which location area of the monitored scene the target object is in and the area type of that location area. A subsequent operation can then be selected according to the area type, e.g., alarming, not alarming, or another possible operation.
For example, the region types may correspond to location areas with different alarm priorities, divided in advance based on analysis of a stable image. For example, the location areas may include: sensitive location areas that alarm as soon as a target object is detected; non-sensitive location areas that alarm only if a target object is detected and a predetermined condition is also satisfied; false-alarm location areas that do not alarm even if a target object is detected; and so on. For example, the predetermined condition may be that the target object is detected to remain continuously stationary, that overall movement of the target object is detected, that a certain delay period elapses after the target object is detected, or the like. The specific location area dividing step will be described in detail under step S110 below.
For example, the pixel position ranges of the predefined location areas may be obtained, and the pixel position of the target object may be compared with these ranges to determine the specific location area where the target object is located and its area type.
After the zone type is determined, subsequent operations may be performed based on the alarm priority associated with the location zone.
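As an illustration of this lookup, the sketch below stores each predefined location area as an axis-aligned pixel rectangle with a type label and returns the type at a given pixel position. The representation (rectangles, the kind labels, and the alarm-sensitive default) is an assumption for illustration; the application does not fix one.

    # Sketch of the region lookup; rectangles, kind labels, and the default
    # are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Region:
        x0: int
        y0: int
        x1: int
        y1: int
        kind: str  # e.g. "false_alarm_prone", "possible_false_alarm"

    def region_type_at(point, regions, default="alarm_sensitive"):
        px, py = point
        for r in regions:
            if r.x0 <= px <= r.x1 and r.y0 <= py <= r.y1:
                return r.kind
        return default  # anywhere outside areas 1-3 is alarm-sensitive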
Fig. 5 illustrates a flowchart of a region type determining step according to an embodiment of the present application.
As shown in fig. 5, the step S160 may include:
in sub-step S161, determining whether the pixel position is in a false-alarm-prone pixel region; and
in sub-step S162, in response to the pixel position being in the false-alarm-prone pixel region, determining that the area type of the specific location area belongs to a false-alarm-prone location area, i.e., a location area in which the probability of generating a false alarm is greater than or equal to a first threshold.
Accordingly, the step S180 may include: in response to the area type of the specific location area belonging to the false-alarm-prone location area, not performing an alarm operation.
For example, after the pixel position of the intruder is determined, whether the intruder is located in a false-alarm-prone pixel region of the image can be judged from the pixel position and the predefined coordinate ranges of the location areas, thereby determining whether the intruder appears in a false-alarm-prone location area of the monitored scene. For example, the false-alarm-prone location area may be a location area with a high probability of false alarm (e.g., 70% or more).
Fig. 6 illustrates a schematic diagram of a monitoring scenario according to a specific example of an embodiment of the present application.
As shown in fig. 6, a plurality of location areas may be divided in advance in the monitoring scene. This division of the location area may be done in advance based on a stable image of the monitored scene, for example. Different pixel areas in the monitored scene correspond to location areas with different alarm priorities.
For example, the monitored scene may include a false-alarm-prone location area (area 1). In the case of home monitoring, this may be the area of a television, projection screen, digital photo frame, or the like, in which displayed faces/bodies may appear continuously or occasionally. Because of these characteristics, the area can be set in advance as one that easily triggers false alarms, with the corresponding alarm policy: if a face/body is detected in area 1, no alarm is given.
With continued reference to fig. 5, this step S160 may further include:
in sub-step S163, in response to the pixel position not being in the false-alarm-prone pixel region but being in a possible-false-alarm pixel region, tracking the target object in subsequent image frames of the specific image frame;
in sub-step S164, in response to tracking the target object, determining whether the target object exhibits position movement; and
in sub-step S165, in response to the target object exhibiting no position movement, determining that the area type of the specific location area belongs to a possible-false-alarm location area, i.e., a location area in which the probability of generating a false alarm is less than the first threshold but greater than or equal to a second threshold.
Accordingly, the step S180 may include: in response to the area type of the specific location area belonging to the possible-false-alarm location area, not performing an alarm operation.
For example, if the intruder is not located in a false-alarm-prone pixel region of the image, it is further judged whether the intruder is located in a possible-false-alarm pixel region, thereby determining whether the intruder appears in a possible-false-alarm location area of the monitored scene. For example, the possible-false-alarm location area may be a location area with a medium probability of false alarm (e.g., less than 70% but greater than or equal to 40%).
As shown in fig. 6, the monitored scene may also include a possible-false-alarm location area (area 2). For example, in the case of home monitoring, this may be a sofa, end table, picture frame, or the like that includes patterns or textures the algorithm may misidentify as a face/body, but where an intruder is rarely located (e.g., based on common sense, an intruder does not usually sit on the sofa, stand near the end table, or lean against a wall and remain still for a long time after breaking into a living room). Because of these properties, the area can be set in advance as one that may trigger a false alarm, with the corresponding alarm policy: if a face/body is detected in area 2, the detected person must be tracked continuously, and no alarm is given if no motion occurs in subsequent frames.
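The sketch below illustrates sub-steps S163-S165: keep the detection tracked over the next several frames and report movement only if the box's center shifts. The tolerance of 10 pixels is an illustrative assumption, not a value from the application.

    # Sketch of S163-S165: motion judgment over tracked boxes; the tolerance
    # is an assumed, tunable threshold.
    def has_position_movement(boxes_over_time, tolerance_px=10):
        """boxes_over_time: (x, y, w, h) boxes of one target in consecutive frames."""
        if len(boxes_over_time) < 2:
            return False
        x0, y0, w0, h0 = boxes_over_time[0]
        cx0, cy0 = x0 + w0 / 2.0, y0 + h0 / 2.0
        for x, y, w, h in boxes_over_time[1:]:
            cx, cy = x + w / 2.0, y + h / 2.0
            if abs(cx - cx0) > tolerance_px or abs(cy - cy0) > tolerance_px:
                return True  # movement detected; further handling follows
        return False  # stayed put: treated as a possible false alarm, no alarm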
With continued reference to fig. 5, this step S160 may further include:
in sub-step S166, in response to the target object exhibiting position movement, determining the pixel size of the feature part;
in sub-step S167, comparing the pixel size of the feature part with a predetermined pixel size; and
in sub-step S168, in response to the pixel size of the feature part being equal to the predetermined pixel size, determining that the area type of the specific location area belongs to the possible-false-alarm location area.
Accordingly, the step S180 may include: in response to the area type of the specific location area belonging to the possible-false-alarm location area, not performing an alarm operation.
As shown in fig. 6, the monitored scene may include a further possible-false-alarm area (area 3) in addition to area 2. For example, in the case of home monitoring, this may be an area on the sofa where the owner often is but an intruder rarely is (e.g., based on common sense, an intruder does not usually stay active on the sofa for a long time after breaking into the room). Because of these properties, the area can be set in advance as one that may trigger a false alarm, with the corresponding alarm policy: if a face/body is detected in area 3, the detected person must be tracked continuously; if motion occurs in subsequent frames, the size of the face/body is further judged, and if it matches the size expected for a person sitting on the sofa, the person is probably the owner of the room rather than an intruder, and no alarm is given.
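The following one-function sketch illustrates sub-steps S166-S168: once movement is confirmed, compare the feature's pixel size against the size expected for a person sitting in that area. The relative tolerance is an illustrative assumption; the application itself only requires the sizes to match.

    # Sketch of S166-S168: size check against the expected "sitting" size;
    # the 15% tolerance is an assumed value.
    def matches_expected_size(feature_box, expected_height_px, rel_tol=0.15):
        _, _, _, h = feature_box
        return abs(h - expected_height_px) <= rel_tol * expected_height_px

If the size matches, the detection in area 3 is treated as the owner and suppressed; a mismatch falls through to the alarm-sensitive handling described next.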
With continued reference to fig. 5, this step S160 may further include:
in sub-step S169, in response to the specific location area belonging neither to the false-alarm-prone location area nor to the possible-false-alarm location area, determining that the area type of the specific location area belongs to an alarm-sensitive location area, i.e., a location area in which the probability of generating a false alarm is less than the second threshold.
Accordingly, the step S180 may include: in response to the area type of the specific location area belonging to the alarm-sensitive location area, immediately performing an alarm operation.
For example, if the intruder is located neither in a false-alarm-prone pixel region nor in a possible-false-alarm pixel region of the image, the intruder is considered to be in an alarm-sensitive pixel region, and is thus determined to appear in an alarm-sensitive location area of the monitored scene. For example, the alarm-sensitive location area may be a location area with a low probability of false alarm (e.g., less than 40%).
As shown in fig. 6, the monitored scene may also include areas other than areas 1-3, namely alarm-sensitive areas, for which the corresponding alarm policy may be set to: if a face/body is detected at such a location, an alarm is issued immediately. Returning to area 3: if a detected face/body moves in subsequent frames but its size differs from the size expected for a person sitting on the sofa, the person may actually be standing in front of the sofa (i.e., between the sofa and the lens) or behind it, may therefore be an intruder, and an alarm is given immediately. For example, alarm-sensitive areas may also include gates, hallways, important financial storage (e.g., a safe), and the like.
Of course, before the alarm operation is performed in the above steps, or after the feature information of a human face/body is detected, it may be determined whether the detected feature information matches owner feature information preset in a feature library, so as to prevent the owner at home from being mistaken for an intruder and triggering a false alarm.
For example, the above-described alarm operations may be implemented by one or more of a variety of means, such as sound, light, vibration, pushing an instant message to the user's mobile phone, placing an alarm call, etc., so as to deter the intruder, notify law enforcement personnel, and alert the owner.
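As one concrete illustration, the following sketch combines the alarm strategies of areas 1-3 and the alarm-sensitive areas into a single dispatch. It is a sketch under assumed names: the region kind labels and the alert callable (standing in for whatever sound, light, push-message, or call channels a deployment wires up) are not from the application.

    # Dispatch of the predetermined operation by area type (step S180);
    # labels and the alert callable are illustrative assumptions.
    def perform_predetermined_operation(region_kind, moved, size_matches, alert):
        if region_kind == "false_alarm_prone":
            return                       # area 1: never alarm
        if region_kind == "possible_false_alarm":
            if not moved:
                return                   # area 2: static detection, no alarm
            if size_matches:
                return                   # area 3: likely the owner, no alarm
            alert()                      # moved with an unexpected size: alarm
            return
        alert()                          # alarm-sensitive area: alarm immediately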
In an embodiment, further, to implement area type division of the location areas in the monitored scene, before step S120 the security monitoring method according to the embodiment of the present application may further include:
in step S110, area type division is performed on each location area of the monitoring scene.
As described above, before steps S120 to S180 are performed, a stable image of the monitored scene may be analyzed to partition, in advance, location areas with different alarm priorities. Of course, the present application is not limited to this; for example, the region type division result may also be preset by the system according to the monitoring scenario itself.
For example, such a region type partitioning operation may be performed on the local side, but in order to reduce the computation cost of the local side and improve the algorithm precision, the partitioning operation may also be implemented on the server side (cloud side).
Fig. 7 illustrates a flowchart of a region type division step according to an embodiment of the present application.
As shown in fig. 7, the step S110 may include:
in sub-step S111, acquiring stable image frames of the monitored scene, i.e., image frames in which the number of moving target objects present is less than a predetermined threshold;
in sub-step S112, performing a pattern analysis on the monitoring scene according to the stable image frame to determine a location area in which a false alarm may occur and a probability of the occurrence of the false alarm; and
in sub-step S113, location area division is performed on the monitoring scene according to the result of the pattern analysis and an area type of each location area is determined.
For example, the security monitoring device 100 may first need to be initialized before it is formally enabled.
To this end, an image of the monitored scene can be taken with the camera 110, and a stable image can be acquired for the image area analysis stage.
Stable image acquisition is not performed continuously: a series of judgments is made when the camera has just started, and the pattern analysis is repeated only at long intervals. If objects move very little during this period, i.e., the image barely changes, the environment can be taken as an essentially stable home environment; the image can then be uploaded to the cloud, and image analysis can be started to determine the false-alarm-sensitive areas of the various levels (for example, a television area with a very high false alarm rate; walls with complex textures and sofa areas with a high false alarm rate; and other areas).
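A minimal sketch of the stability judgment in sub-step S111 follows: a frame counts as stable when consecutive frames barely differ. It assumes OpenCV and NumPy; the differencing threshold and the changed-pixel ratio are illustrative values, not from the application.

    # Sketch of the stable-frame test (assumes OpenCV/NumPy; thresholds are
    # assumed, tunable values).
    import cv2
    import numpy as np

    def is_stable(prev_gray, cur_gray, diff_thresh=25, max_changed_ratio=0.01):
        delta = cv2.absdiff(prev_gray, cur_gray)
        changed = np.count_nonzero(delta > diff_thresh)
        return changed / float(delta.size) <= max_changed_ratio

Frames that pass this test over a long interval would be the ones uploaded for the cloud-side scene analysis described below.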
For example, the image scene may be parsed to obtain the attributes of each part. After an image is taken, regions of the following types can be identified: areas such as gates, corridors, and important financial storage (e.g., a safe) can be set as sensitive areas, requiring a relatively quick alarm if an intrusion is detected; image display areas such as a television screen can be set as areas prone to triggering false alarms, where no alarm is given if a feature is detected; and areas such as sofas, tea tables, and wall surfaces can be set as areas where an alarm is given only after information such as motion has also been judged. For example, this identification may be performed in the cloud and/or locally using machine learning (e.g., deep neural networks).
In addition to the above area analysis, person analysis may be added to the image analysis, so that information on multiple persons is acquired at the same time and the alarm mode for each of them is confirmed.
For example, in combination with user-specific information, areas that actually see frequent activity but are generally not places where an intruder appears, such as area 3 in fig. 6, can be identified statistically. Combining this analysis, the specific locations can be obtained and computed in the cloud and/or locally, so as to update, for each area as a whole, the statistics of the face/body sizes to which the corresponding camera view needs to be sensitive, and thus to update the corresponding alarm policy.
By adopting cloud processing as above and enriching the information with position and area data, each user's habit data can be unified in the cloud, and the local computation and storage load of the security monitoring device 100 can be greatly reduced.
Therefore, with the security monitoring method according to the embodiment of the present application, a target object can be detected in an acquired image sequence of a monitored scene; in response to detecting the target object in a particular image frame of the image sequence, the pixel position of the target object in that frame can be determined; the area type of the specific location area where the target object is located can be determined according to the pixel position; and a predetermined operation can be performed according to the area type of the specific location area. More efficient and accurate security monitoring can thereby be achieved.
More specifically, the security monitoring method according to the embodiment of the application has the following advantages:
1. the alarm level of each location area in the monitored scene is acquired by detection/analysis (matching), so that a set of alarm weight distribution maps with different alarm logics is maintained;
2. thanks to the finer scene analysis, false alarms can be effectively reduced and the user experience improved;
3. thanks to the motion-based posture analysis of feature parts (such as human faces/bodies), false alarms can in theory be avoided while only a small amount of extra processing is introduced.
Exemplary devices
Next, a security monitoring apparatus according to an embodiment of the present application is described with reference to fig. 8.
FIG. 8 illustrates a block diagram of a security monitoring device according to an embodiment of the application.
As shown in fig. 8, the security monitoring apparatus 300 according to the embodiment of the present application may include: a target object detection unit 320 for detecting a target object in the acquired image sequence of the monitored scene; a pixel position determination unit 340 for determining a pixel position of a target object in a specific image frame among the image sequence in response to the target object being detected in the specific image frame; a region type determining unit 360, configured to determine a region type of a specific location region where the target object is located according to the pixel location; and a predetermined operation performing unit 380 for performing a predetermined operation according to the area type of the specific location area.
In one example, the target object detection unit 320 may detect a feature part of a target object in each image frame of the image sequence, and, in response to detecting the feature part, determine that a target object is detected.
In one example, the pixel position determination unit 340 may determine the pixel position of the feature part in the specific image frame, and determine the pixel position of the target object from the pixel position of the feature part.
In one example, the region type determining unit 360 may determine whether the pixel position is in a false-alarm-prone pixel region; and in response to the pixel position being in the false-alarm-prone pixel region, determine that the area type of the specific location area belongs to a false-alarm-prone location area, i.e., a location area in which the probability of generating a false alarm is greater than or equal to a first threshold.
In one example, the predetermined operation performing unit 380 may not perform an alarm operation in response to the area type of the specific location area belonging to the false-alarm-prone location area.
In one example, the region type determining unit 360 may also track the target object in subsequent image frames of the particular image frame in response to the pixel position not being in the false-alarm-prone pixel region but being in a possible-false-alarm pixel region; in response to tracking the target object, determine whether the target object exhibits position movement; and in response to the target object exhibiting no position movement, determine that the area type of the specific location area belongs to a possible-false-alarm location area, i.e., a location area in which the probability of generating a false alarm is less than the first threshold but greater than or equal to a second threshold.
In one example, the predetermined operation performing unit 380 may not perform an alarm operation in response to the area type of the specific location area belonging to the possible-false-alarm location area.
In one example, the region type determining unit 360 may further determine the pixel size of the feature part in response to the target object exhibiting position movement; compare the pixel size of the feature part with a predetermined pixel size; and in response to the pixel size of the feature part being equal to the predetermined pixel size, determine that the area type of the specific location area belongs to the possible-false-alarm location area.
In one example, the predetermined operation performing unit 380 may not perform an alarm operation in response to the area type of the specific location area belonging to the possible-false-alarm location area.
In one example, the area type determining unit 360 may further determine, in response to the specific location area belonging neither to the false-alarm-prone location area nor to the possible-false-alarm location area, that the area type of the specific location area belongs to an alarm-sensitive location area, i.e., a location area in which the probability of generating a false alarm is less than the second threshold.
In one example, the predetermined operation performing unit 380 may immediately perform an alarm operation in response to the area type of the specific location area belonging to the alarm-sensitive location area.
In one example, the security monitoring apparatus 300 may further include: and an area type dividing unit 310, configured to perform area type division on each location area of the monitoring scene.
In one example, the region type dividing unit 310 may acquire stable image frames of the monitored scene, i.e., image frames in which the number of moving target objects present is less than a predetermined threshold; perform pattern analysis on the monitored scene according to the stable image frames to determine the location areas in which false alarms are likely to occur and the probability of such false alarms; and perform location area division on the monitored scene according to the result of the pattern analysis and determine the area type of each location area.
The specific functions and operations of the respective units and modules in the security monitoring apparatus 300 described above have been described in detail in the security monitoring method described above with reference to fig. 1 to 7, and thus, a repetitive description thereof will be omitted.
As described above, the security monitoring apparatus 300 according to the embodiment of the present application may be applied to the security monitoring device 100 shown in fig. 1, so as to perform operations such as detection and tracking, position determination, area type identification, and the like on a target object.
In one example, the security monitoring apparatus 300 according to the embodiment of the present application may be integrated into the security monitoring device 100 in fig. 1 as a software module and/or a hardware module. For example, the security monitoring apparatus 300 may be implemented as the security monitoring module 120 in the device 100. For example, the security monitoring apparatus 300 may be a software module in an operating system of the security monitoring device 100, or may be an application developed for the security monitoring device 100; of course, the security monitoring apparatus 300 may also be one of many hardware modules of the security monitoring device 100.
Alternatively, in another example, the security monitoring apparatus 300 and the security monitoring device 100 may be separate devices, and the security monitoring apparatus 300 may be connected to the security monitoring device 100 through a wired and/or wireless network and transmit the interaction information according to the agreed data format.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9. The electronic device may be a computer, a server, or another device. For example, in one example, the electronic device according to the embodiment of the present application may correspond to the security monitoring device 100 in fig. 1.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the security monitoring methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as a pixel position of the target object, an area type of a specific position area where the target object is located, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, the input device 13 may be the camera 110 described above for capturing a sequence of images of a monitored scene. The input device 13 may also include, for example, a keyboard, a mouse, and a communication network and a remote input device connected thereto.
The output device 14 may output various information to the outside (e.g., a user or a machine learning model), including the pixel position of the target object, the region type of a specific location region where the target object is located, alarm information, and the like. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the security monitoring method according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the security monitoring method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", and "having" are open-ended words that mean "including, but not limited to" and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (13)

1. A security monitoring method comprises the following steps:
detecting a target object in the acquired image sequence of the monitoring scene;
in response to detecting a target object in a particular image frame among the sequence of images, determining a pixel location of the target object in the particular image frame;
determining the area type of a specific location area where the target object is located according to the pixel position; and
performing a predetermined operation according to an area type of the specific location area,
wherein determining the region type of the specific position region where the target object is located according to the pixel position comprises:
judging whether the pixel position is in a false-alarm-prone pixel region; and
determining, in response to the pixel position being in the false-alarm-prone pixel region, that the area type of the specific location area belongs to a false-alarm-prone location area, the false-alarm-prone location area being a location area in which the probability of generating a false alarm is greater than or equal to a first threshold,
tracking the target object in subsequent image frames of the particular image frame in response to the pixel position not being in the false-alarm-prone pixel region but being in a possible-false-alarm pixel region;
in response to tracking the target object, determining whether the target object exhibits position movement; and
in response to the target object exhibiting no position movement, determining that the area type of the specific location area belongs to a possible-false-alarm location area, the possible-false-alarm location area being a location area in which the probability of generating a false alarm is less than the first threshold but greater than or equal to a second threshold.
2. The method of claim 1, wherein detecting a target object in the sequence of acquired images of the monitored scene comprises:
detecting a feature part of a target object in each image frame of the image sequence; and
in response to detecting the feature part, determining that a target object is detected.
3. The method of claim 2, wherein determining the pixel location of the target object in the particular image frame comprises:
determining the pixel position of the feature part in the particular image frame; and
determining the pixel position of the target object from the pixel position of the feature part.
4. The method of claim 1, wherein performing a predetermined operation according to the area type of the specific location area comprises:
in response to the area type of the specific location area belonging to the false-alarm-prone location area, not performing an alarm operation.
5. The method of claim 2, wherein determining a region type of a particular location region in which the target object is located from the pixel locations further comprises:
determining the pixel size of the feature part in response to the target object exhibiting position movement;
comparing the pixel size of the feature part with a predetermined pixel size; and
determining that the area type of the specific location area belongs to the possible-false-alarm location area in response to the pixel size of the feature part being equal to the predetermined pixel size.
6. The method of claim 1 or 5, wherein performing a predetermined operation according to the area type of the specific location area comprises:
in response to the area type of the specific location area belonging to the possible-false-alarm location area, not performing an alarm operation.
7. The method of claim 1 or 5, wherein determining the region type of the specific location region where the target object is located according to the pixel location further comprises:
determining, in response to the specific location region belonging to neither the false-alarm-prone location region nor the possible-false-alarm location region, that the region type of the specific location region belongs to an alarm-sensitive location region, the alarm-sensitive location region being a location region in which the probability of producing a false alarm is less than the second threshold.
8. The method of claim 7, wherein performing a predetermined operation according to the region type of the specific location region comprises:
immediately performing an alarm operation in response to the region type of the specific location region belonging to the alarm-sensitive location region.
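Taken together, claims 4, 6 and 8 map each region type to an action; a trivial illustrative dispatch (the alarm callback is an assumed interface):

```python
def perform_predetermined_operation(region_type, raise_alarm):
    """Dispatch per claims 4, 6 and 8 (illustrative only).

    raise_alarm: hypothetical callable that triggers the actual alert.
    """
    if region_type is RegionType.ALARM_SENSITIVE:
        raise_alarm()  # claim 8: alarm immediately
    # Claims 4 and 6: suppress the alarm for false-alarm-prone and
    # possible-false-alarm regions.
```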
9. The method of claim 1, further comprising, prior to detecting a target object in the acquired image sequence of the monitoring scene:
dividing each location region of the monitoring scene by region type.
10. The method of claim 9, wherein dividing each location region of the monitoring scene by region type comprises:
acquiring stable image frames of the monitoring scene, a stable image frame being an image frame in which the number of moving target objects present is less than a preset threshold;
performing pattern analysis on the monitoring scene according to the stable image frames to determine the location regions in which a false alarm is likely to occur and the corresponding false-alarm probabilities; and
dividing the monitoring scene into location regions according to the result of the pattern analysis and determining the region type of each location region.
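The claim leaves the pattern analysis open; one hedged sketch is to count, over the stable frames, how often each pixel hosts a static detection and treat that frequency as the false-alarm probability (frames are assumed to be NumPy arrays, and t1 > t2 are the two claimed thresholds):

```python
import numpy as np

def divide_scene(stable_frames, detector, t1, t2):
    """Illustrative zone division per claims 9-10; all names assumed.

    stable_frames: frames already filtered so that each contains fewer
    moving targets than the preset threshold.
    detector: callable returning (x, y, w, h) detection boxes.
    """
    height, width = stable_frames[0].shape[:2]
    hits = np.zeros((height, width), dtype=np.float64)
    for frame in stable_frames:
        for (x, y, w, h) in detector(frame):
            # Detections recurring at fixed spots in stable frames are
            # likely static false-alarm sources (posters, screens, ...).
            hits[y:y + h, x:x + w] += 1.0
    prob = hits / max(len(stable_frames), 1)
    prone_mask = prob >= t1                      # false-alarm-prone
    possible_mask = (prob >= t2) & ~prone_mask   # possible false alarm
    return prone_mask, possible_mask  # remaining pixels: alarm-sensitive
```

These two masks are exactly what the classify_region sketch above consumes.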
11. A security monitoring device comprising:
a target object detection unit for detecting a target object in the acquired image sequence of the monitoring scene;
a pixel location determination unit for determining the pixel location of a target object in a particular image frame of the image sequence in response to the target object being detected in that image frame;
a region type determination unit for determining the region type of the specific location region where the target object is located according to the pixel location; and
a predetermined operation performing unit for performing a predetermined operation according to the region type of the specific location region,
wherein determining the region type of the specific location region where the target object is located according to the pixel location comprises:
determining whether the pixel location is in a false-alarm-prone pixel region; and
determining that the region type of the specific location region belongs to a false-alarm-prone location region in response to the pixel location being in the false-alarm-prone pixel region, the false-alarm-prone location region being a location region in which the probability of producing a false alarm is greater than or equal to a first threshold,
tracking the target object in subsequent image frames of the particular image frame in response to the pixel location not being in the false-alarm-prone pixel region but being in a possible-false-alarm pixel region;
determining, while tracking the target object, whether the target object undergoes a position movement; and
determining, in response to no position movement of the target object occurring, that the region type of the specific location region belongs to a possible-false-alarm location region, the possible-false-alarm location region being a location region in which the probability of producing a false alarm is less than the first threshold but greater than or equal to a second threshold.
12. An electronic device, comprising:
a processor;
a memory; and
computer program instructions stored in the memory, which, when executed by the processor, cause the processor to perform the method of any of claims 1-10.
13. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-10.
CN201710291430.7A 2017-04-28 2017-04-28 Security monitoring method and device and electronic equipment Active CN107122743B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710291430.7A CN107122743B (en) 2017-04-28 2017-04-28 Security monitoring method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN107122743A CN107122743A (en) 2017-09-01
CN107122743B true CN107122743B (en) 2020-02-14

Family

ID=59725144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710291430.7A Active CN107122743B (en) 2017-04-28 2017-04-28 Security monitoring method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107122743B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108391073A (en) * 2018-01-29 2018-08-10 Angrui (Shanghai) Information Technology Co Ltd Track record device and data analysis method
CN110895663B (en) * 2018-09-12 2023-06-02 Hangzhou Hikvision Digital Technology Co Ltd Two-wheel vehicle identification method and device, electronic equipment and monitoring system
CN110348422B (en) * 2019-07-18 2021-11-09 Beijing Horizon Robotics Technology Research and Development Co Ltd Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110392239B (en) * 2019-08-13 2020-04-21 Beijing Jijia Technology Co Ltd Designated area monitoring method and device
CN111028480A (en) * 2019-12-06 2020-04-17 Jiangxi Hongdu Aviation Industry Group Co Ltd Drowning detection and alarm system
CN111126317B (en) * 2019-12-26 2023-06-23 Tencent Technology (Shenzhen) Co Ltd Image processing method, device, server and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059896A (en) * 2007-05-16 2007-10-24 Huawei Technologies Co Ltd Detection alarm method and alarm system
CN102411703A (en) * 2010-09-21 2012-04-11 Sony Corp Device and method for detecting a specific object in an image sequence, and video camera equipment
CN104200589A (en) * 2014-08-15 2014-12-10 Shenzhen Zhongxing Xindi Communication Equipment Co Ltd Intrusion detection method and device, and security monitoring system thereof
CN104601969A (en) * 2015-02-26 2015-05-06 Zhang Yao District fortifying method and device
CN106162091A (en) * 2016-07-28 2016-11-23 LeTV Holdings (Beijing) Co Ltd Video monitoring method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9275285B2 (en) * 2012-03-29 2016-03-01 The Nielsen Company (Us), Llc Methods and apparatus to count people in images


Also Published As

Publication number Publication date
CN107122743A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN107122743B (en) Security monitoring method and device and electronic equipment
Cucchiara et al. A multi‐camera vision system for fall detection and alarm generation
US9396400B1 (en) Computer-vision based security system using a depth camera
US20230316762A1 (en) Object detection in edge devices for barrier operation and parcel delivery
US11295139B2 (en) Human presence detection in edge devices
CN110933955B (en) Improved generation of alarm events based on detection of objects from camera images
KR101910542B1 (en) Image Analysis Method and Server Apparatus for Detecting Object
US20090041297A1 (en) Human detection and tracking for security applications
TW201826141A (en) A method for generating alerts in a video surveillance system
Maddalena et al. People counting by learning their appearance in a multi-view camera environment
KR102195706B1 (en) Method and Apparatus for Detecting Intruder
US11776274B2 (en) Information processing apparatus, control method, and program
KR102002812B1 (en) Image Analysis Method and Server Apparatus for Detecting Object
KR101979375B1 (en) Method of predicting object behavior of surveillance video
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
US8126212B2 (en) Method of detecting moving object
US11948362B2 (en) Object detection using a combination of deep learning and non-deep learning techniques
JP2022184761A (en) Concept for detecting abnormality in input data
Nishanthini et al. Smart Video Surveillance system and alert with image capturing using android smart phones
WO2020139071A1 (en) System and method for detecting aggressive behaviour activity
KR20220000209A (en) Recording medium that records the operation program of the intelligent security monitoring device based on deep learning distributed processing
KR102407202B1 (en) Apparatus and method for intelligently analyzing video
CN111225178A (en) Video monitoring method and system based on object detection
JP7347481B2 (en) Information processing device, information processing method, and program
CN117456610A (en) Climbing abnormal behavior detection method and system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant