WO2017071085A1 - Alarm method and apparatus - Google Patents

Alarm method and apparatus (报警方法及装置)

Info

Publication number
WO2017071085A1
WO2017071085A1 · PCT/CN2015/099586 · CN2015099586W
Authority
WO
WIPO (PCT)
Prior art keywords
target
monitoring
monitoring target
video
sensitive area
Prior art date
Application number
PCT/CN2015/099586
Other languages
English (en)
French (fr)
Inventor
张涛
陈志军
汪平仄
Original Assignee
小米科技有限责任公司 (Xiaomi Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 (Xiaomi Inc.)
Priority to JP2016549719A priority Critical patent/JP2017538978A/ja
Priority to KR1020167021748A priority patent/KR101852284B1/ko
Priority to MX2016005066A priority patent/MX360586B/es
Priority to RU2016117967A priority patent/RU2648214C1/ru
Publication of WO2017071085A1 publication Critical patent/WO2017071085A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19606 Discriminating between target movement or movement in an area of interest and other non-signicative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and/or velocity to predict its new position
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/0202 Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0208 Combination with audio or video communication, e.g. combination with "baby phone" function
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 Status alarms
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/008 Alarm setting and unsetting, i.e. arming or disarming of the security system
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B25/014 Alarm signalling to a central station with two-way communication, e.g. with signalling back
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/14 Central alarm receiver or annunciator arrangements
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B5/00 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
    • G08B5/22 Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present disclosure relates to the field of Internet technologies, and in particular, to an alarm method and apparatus.
  • the embodiments of the present disclosure provide an alarm method and apparatus.
  • an alarm method comprising:
  • an alarm message is sent to the terminal, so that the terminal performs an alarm.
  • the determining whether the monitoring target exists in the sensitive area of the monitoring video includes:
  • when the monitoring target is located in the sensitive area, it is determined that the monitoring target exists in the sensitive area of the monitoring video.
  • the determining whether the monitoring target exists in the sensitive area of the monitoring video includes:
  • when the moving target is the monitoring target, it is determined that the monitoring target exists in the sensitive area of the monitoring video.
  • in conjunction with the first or the second possible implementation of the first aspect, in a third possible implementation manner of the first aspect, the determining whether the moving target is a monitoring target includes:
  • the determining the feature of the moving target includes:
  • Feature extraction is performed on the target image to obtain features of the moving target.
  • before the determining a degree of matching between the feature of the moving target and the feature of the monitoring target, the method also includes:
  • Feature extraction is performed on the tracking image of the monitoring target to obtain characteristics of the monitoring target.
  • the setting information further includes the sensitive area information corresponding to the monitoring target, where the sensitive area information is used to obtain the sensitive area.
  • the determining whether the monitoring target is located in a sensitive area includes:
  • Whether the monitoring target is located in the sensitive area is determined based on the current location of the monitoring target.
  • an alarm method comprising:
  • before the terminal sends the setting information to the server, the method also includes:
  • the monitoring target identification information and the sensitive area information corresponding to the monitoring target are determined based on the video image of the historical video.
  • the determining, based on the video image of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
  • the area selected by the second selection instruction is determined as a sensitive area corresponding to the monitoring target
  • the determining, based on the video image of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
  • the first area is determined as the sensitive area corresponding to the monitoring target, and the target object is determined as the monitoring target;
  • an alarm device comprising:
  • a determining module configured to determine whether a monitoring target exists in a sensitive area of the monitoring video
  • the sending module is configured to send an alarm message to the terminal when the monitoring target exists in the sensitive area, so that the terminal performs an alarm.
  • the determining module includes:
  • a first determining unit configured to determine whether there is a moving target in the monitoring video
  • a monitoring target identification unit configured to determine whether the moving target is a monitoring target when the moving target exists in the monitoring video
  • a second determining unit configured to determine, when the moving target is the monitoring target, whether the monitoring target is located in a sensitive area
  • the first determining unit is configured to determine that the monitoring target exists in the sensitive area of the monitoring video when the monitoring target is located in the sensitive area.
  • the determining module includes:
  • a third determining unit configured to determine whether there is a moving target in the sensitive area of the monitoring video
  • a monitoring target identification unit configured to determine whether the moving target is a monitoring target when the moving target exists in the sensitive area
  • a second determining unit configured to determine that the monitoring target exists in a sensitive area of the monitoring video when the moving target is the monitoring target.
  • the monitoring target identification unit includes:
  • a first determining subunit configured to determine a characteristic of the moving target
  • a second determining subunit configured to determine a degree of matching between a feature of the moving target and a feature of the monitoring target
  • a third determining subunit configured to determine that the moving target is the monitoring target when the matching degree is greater than a specified value.
  • the first determining subunit is configured to:
  • the moving target exists in the monitoring video
  • the area where the moving target is located is cropped to obtain a target image
  • Feature extraction is performed on the target image to obtain features of the moving target.
  • the monitoring target identification unit further includes:
  • a receiving subunit configured to receive setting information sent by the terminal, where the setting information carries monitoring target identification information
  • a first acquiring sub-unit configured to acquire, according to the monitoring target identification information, a tracking video of the monitoring target from the stored historical video
  • a second acquiring subunit configured to acquire a tracking image of the monitoring target from each frame of the video image of the tracking video
  • the setting information further includes the sensitive area information corresponding to the monitoring target, where the sensitive area information is used to obtain the sensitive area.
  • the second determining unit includes:
  • a tracking subunit configured to perform target tracking on the monitoring target when the moving target is the monitoring target, to obtain a current location of the monitoring target
  • the determining subunit is configured to determine whether the monitoring target is located in the sensitive area based on the current location of the monitoring target.
  • an alarm device comprising:
  • the first sending module is configured to send the setting information to the server, where the setting information carries the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server obtains the monitoring video and returns alarm information when the monitoring target exists in the sensitive area of the monitoring video;
  • the alarm module is configured to perform an alarm based on the alarm information when receiving the alarm information returned by the server.
  • the device further includes:
  • a play module configured to play the historical video
  • the determining module is configured to determine the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on the video image of the historical video during the playing of the historical video.
  • the determining module includes:
  • a first determining unit configured to determine, in the process of playing the historical video, when a first selection instruction is received based on the video image of the historical video, an object selected by the first selection instruction as the monitoring target;
  • a second determining unit configured to determine, when a second selection instruction is received based on the video image of the historical video, a region selected by the second selection instruction as the sensitive area corresponding to the monitoring target;
  • the first obtaining unit is configured to acquire monitoring target identification information of the monitoring target, and acquire sensitive area information of the sensitive area.
  • the determining module includes:
  • a second acquiring unit configured to acquire a first area drawn in a video image of the historical video and a target object selected in the video image, where the target object is an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image;
  • a third determining unit configured to determine, when a preset gesture operation is detected on at least one of the first area and the target object, the first area as the sensitive area corresponding to the monitoring target, and to determine the target object as the monitoring target;
  • a third acquiring unit configured to acquire monitoring target identification information of the monitoring target, and acquire sensitive area information of the sensitive area.
  • an alarm device comprising:
  • a memory configured to store processor executable instructions
  • a processor configured to:
  • an alarm message is sent to the terminal, so that the terminal performs an alarm.
  • an alarm device comprising:
  • a memory configured to store processor executable instructions
  • a processor configured to:
  • the server acquires the monitoring video and determines whether a monitoring target exists in the sensitive area of the monitoring video. When the monitoring target exists in the sensitive area, the server sends an alarm message to the terminal so that the terminal performs an alarm, thereby preventing the occurrence of an unsafe event.
  • FIG. 1 is a schematic diagram of an implementation environment involved in an alarm method according to an exemplary embodiment
  • FIG. 2 is a flow chart showing an alarm method according to an exemplary embodiment
  • FIG. 3 is a flowchart of another alarm method according to an exemplary embodiment
  • FIG. 4 is a flow chart showing still another alarm method according to an exemplary embodiment
  • FIG. 5 is a block diagram of a first type of alarm device, according to an exemplary embodiment
  • FIG. 6 is a block diagram of a determination module according to an exemplary embodiment
  • FIG. 7 is a block diagram of another judging module according to an exemplary embodiment
  • FIG. 8 is a block diagram of a monitoring target recognition unit according to an exemplary embodiment
  • FIG. 9 is a block diagram of another monitoring target recognition unit according to an exemplary embodiment.
  • FIG. 10 is a block diagram of a second determining unit, according to an exemplary embodiment
  • FIG. 11 is a block diagram of a second type of alarm device, according to an exemplary embodiment
  • FIG. 12 is a block diagram showing a third type of alarm device according to an exemplary embodiment
  • FIG. 13 is a block diagram of a determining module, according to an exemplary embodiment
  • FIG. 14 is a block diagram of another determining module, according to an exemplary embodiment.
  • Figure 15 is a block diagram of a fourth type of alarm device, according to an exemplary embodiment
  • FIG. 16 is a block diagram of a fifth type of alarm device, according to an exemplary embodiment.
  • FIG. 1 is a schematic diagram of an implementation environment involved in an alarm method according to an exemplary embodiment.
  • the implementation environment may include a server 101, a smart camera device 102, and a terminal 103.
  • the server 101 can be a single server, a server cluster composed of several servers, or a cloud computing service center.
  • the smart camera device 102 can be a smart camera.
  • the terminal 103 can be a mobile phone, a computer, a tablet device, or the like.
  • the server 101 and the smart camera device 102 can be connected through a network, and the server 101 and the terminal 103 can also be connected through a network.
  • the server 101 is configured to receive the surveillance video transmitted by the smart camera device and send the alarm information to the terminal.
  • the smart camera device 102 is configured to collect monitoring video within the monitoring area and send the monitoring video to the server.
  • the terminal 103 is configured to receive alarm information sent by the server and perform an alarm.
  • FIG. 2 is a flowchart of an alarm method according to an exemplary embodiment. As shown in FIG. 2, the method is used in a server, and includes the following steps.
  • step 201 a surveillance video is obtained.
  • step 202 it is determined whether there is a monitoring target in the sensitive area of the surveillance video.
  • step 203 when there is a monitoring target in the sensitive area, an alarm message is sent to the terminal, so that the terminal performs an alarm.
  • the server acquires the monitoring video and determines whether a monitoring target exists in the sensitive area of the monitoring video. When the monitoring target exists in the sensitive area, the server sends an alarm message to the terminal so that the terminal performs an alarm, thereby preventing the occurrence of an unsafe event.
  • determining whether there is a monitoring target in a sensitive area of the monitoring video includes:
  • when the moving target is the monitoring target, it is judged whether the monitoring target is located in the sensitive area; when the monitoring target is in the sensitive area, it is determined that the monitoring target exists in the sensitive area of the surveillance video.
  • the server determines whether there is a moving target in the monitoring video. When there is a moving target in the monitoring video, the server determines whether the moving target is the monitoring target, thereby effectively determining whether a monitoring target exists in the monitoring video, and thus effectively determining whether the monitoring target is in the sensitive area.
  • determining whether there is a monitoring target in a sensitive area of the monitoring video includes:
  • when there is a moving target in the sensitive area, it is determined whether the moving target is the monitoring target
  • the server determines whether there is a moving target in the sensitive area of the monitoring video. When there is a moving target in the sensitive area, the server determines whether the moving target is the monitoring target, thereby effectively determining whether a monitoring target exists in the sensitive area. Since the server does not need to detect areas other than the sensitive area, interference from those areas on the detection results is effectively avoided, improving detection efficiency and detection accuracy.
  • determining whether the moving target is a monitoring target includes:
  • when the matching degree between the feature of the moving target and the feature of the monitoring target is greater than the specified value, it indicates that the feature of the moving target is similar to the feature of the monitoring target, that is, the moving target is likely to be the monitoring target. Therefore, based on the matching degree between the feature of the moving target and the feature of the monitoring target, it is possible to effectively determine whether the moving target is the monitoring target, improving accuracy in identifying the monitoring target.
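As a minimal sketch of this matching step (the text fixes neither the feature representation nor the matching measure, so feature vectors with cosine similarity and a threshold of 0.8 are illustrative assumptions):

```python
def cosine_match(feature_a, feature_b):
    """Matching degree between two feature vectors, here cosine similarity."""
    dot = sum(a * b for a, b in zip(feature_a, feature_b))
    norm_a = sum(a * a for a in feature_a) ** 0.5
    norm_b = sum(b * b for b in feature_b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def is_monitoring_target(moving_feat, target_feat, specified_value=0.8):
    # The moving target is treated as the monitoring target when the
    # matching degree exceeds the specified value.
    return cosine_match(moving_feat, target_feat) > specified_value
```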
  • determining characteristics of the moving target includes:
  • the area where the moving target is located is cropped to obtain a target image
  • Feature extraction is performed on the target image to obtain features of the moving target.
  • the server crops the area where the moving target is located to obtain the target image, which makes it easier for the server to perform feature extraction on the target image and obtain the features of the moving target, improving the efficiency of feature extraction.
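The cropping step can be sketched as follows; the frame representation (a 2-D list of pixel values) and the bounding-box convention are illustrative, not specified by the text:

```python
def crop_region(image, box):
    """Crop the area where the moving target is located from a frame.

    image: 2-D list of pixel values (rows of columns).
    box: (top, left, bottom, right) bounding box, bottom/right exclusive.
    """
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]
```

Feature extraction then runs only on the cropped target image rather than the full frame, which is the efficiency gain the text describes.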
  • the method before determining the degree of matching between the feature of the moving target and the feature of the monitoring target, the method further includes:
  • Feature extraction is performed on the tracking image of the monitoring target to obtain the characteristics of the monitoring target.
  • the server obtains the tracking image of the monitoring target from each video image of the tracking video of the monitoring target and performs feature extraction on the tracking image, which improves the accuracy of feature extraction.
  • the setting information further carries sensitive area information corresponding to the monitoring target, where the sensitive area information is used to acquire the sensitive area.
  • the setting information carries the sensitive area information corresponding to the monitoring target, so that the server can determine, based on the sensitive area, whether the monitoring target exists in the sensitive area.
  • determining whether the monitoring target is located in a sensitive area includes:
  • Perform target tracking on the monitoring target to obtain the current location of the monitoring target
  • in order to determine whether the monitoring target is located in the sensitive area, the server needs to perform target tracking on the monitoring target and obtain the current location of the monitoring target. Based on the current location of the monitoring target, it can be effectively determined whether the monitoring target is located in the sensitive area.
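A minimal sketch of the location check, assuming the sensitive area is an axis-aligned rectangle in frame coordinates and the tracked location is a single point (the text does not fix the area's shape or the location representation):

```python
def in_sensitive_area(location, area):
    """Return True when the monitoring target's current location falls
    inside a rectangular sensitive area.

    location: (x, y) centre of the tracked target.
    area: (x_min, y_min, x_max, y_max) in frame coordinates.
    """
    x, y = location
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max
```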
  • FIG. 3 is a flowchart of an alarm method according to an exemplary embodiment. As shown in FIG. 3, the method includes the following steps.
  • step 301 the setting information is sent to the server, where the setting information carries the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires the monitoring video, and returns the alarm information when there is a monitoring target in the sensitive area of the monitoring video.
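The setting information might be serialized as a small JSON payload. The field names and the rectangular area layout below are hypothetical; the text only specifies that the message carries the monitoring target identification information and the sensitive area information:

```python
import json

# Hypothetical payload layout for the setting information sent in step 301.
setting_info = {
    "monitoring_target_id": "target-001",           # identification information
    "sensitive_area": {"x_min": 120, "y_min": 80,   # area info the server uses
                       "x_max": 400, "y_max": 260}  # to locate the sensitive area
}
payload = json.dumps(setting_info)
```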
  • step 302 when the alarm information returned by the server is received, an alarm is made based on the alarm information.
  • the terminal sends the setting information to the server, where the setting information carries the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server obtains the monitoring video and returns alarm information when the monitoring target exists in the sensitive area of the monitoring video.
  • when the terminal receives the alarm information, the terminal can perform an alarm to prevent an unsafe event from occurring.
  • the method before sending the setting information to the server, the method further includes:
  • the monitoring target identification information and the sensitive area information corresponding to the monitoring target are determined.
  • since the server needs to determine whether a monitoring target exists in the sensitive area, it must first determine the sensitive area and the monitoring target. The terminal determines the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on the video image of the historical video sent by the server, so that when the server receives the setting information, it can quickly determine the sensitive area and the monitoring target based on that information.
  • the monitoring target identification information and the sensitive area information corresponding to the monitoring target are determined based on the video image of the historical video, including:
  • when a first selection instruction is received based on the video image of the historical video, determining an object selected by the first selection instruction as the monitoring target;
  • the area selected by the second selection instruction is determined as the sensitive area corresponding to the monitoring target.
  • the user corresponding to the terminal needs to select the monitoring target and the sensitive area based on the video image of the historical video, so that the server can monitor the monitoring target and the sensitive area.
  • the monitoring target identification information and the sensitive area information corresponding to the monitoring target are determined based on the video image of the historical video, including:
  • the terminal acquires the first area drawn in the video image of the historical video and the target object selected in the video image; when a preset gesture operation is detected on at least one of the first area and the target object, the first area is determined as the sensitive area corresponding to the monitoring target, and the target object is determined as the monitoring target.
  • the sensitive area and the target object can be determined simply and intuitively, the operation is simple, and the efficiency of the terminal to determine the monitoring target identification information and the sensitive area information is improved.
  • FIG. 4 is a flow chart showing an alarm method according to an exemplary embodiment. As shown in FIG. 4, the method includes the following steps.
  • step 401 the server obtains a surveillance video.
  • the server can obtain the monitoring video from the smart camera device.
  • the smart camera device can also send the monitoring video to other devices, so that the server can obtain the monitoring video from the other device.
  • the disclosed embodiments do not specifically limit this.
  • the smart camera device is configured to collect the monitoring video in the monitoring area, and the process of the smart camera device collecting the monitoring video in the monitoring area may refer to related technologies, and the embodiments of the present disclosure are not described in detail herein.
  • the smart camera device can communicate with the server or other devices through a wired network or a wireless network; when communicating through a wireless network, the smart camera device can use a built-in Wireless Fidelity (Wi-Fi), Bluetooth, or other wireless communication chip, which is not specifically limited in the embodiments of the present disclosure.
  • step 402 the server determines whether there is a monitoring target in the sensitive area of the surveillance video.
  • in order to prevent an unsafe event from occurring when the monitoring target is located in the sensitive area, the server needs to determine whether a monitoring target exists in the sensitive area of the monitoring video. The server can determine this in the following two ways:
  • the first mode: the server determines whether there is a moving target in the monitoring video; when there is a moving target in the monitoring video, determining whether the moving target is the monitoring target; when the moving target is the monitoring target, determining whether the monitoring target is located in the sensitive area; when the monitoring target is in the sensitive area, determining that the monitoring target exists in the sensitive area of the monitoring video.
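The first mode's chain of checks can be sketched as a single decision function. The three predicates are stand-ins for the detection steps described above; their implementations are not fixed by the text:

```python
def check_alarm(frame, detect_moving, is_target, in_area):
    """First mode as a chain of checks: moving target -> monitoring
    target -> inside sensitive area -> raise the alarm."""
    moving = detect_moving(frame)   # returns the moving target, or None
    if moving is None:              # no moving target in the video
        return False
    if not is_target(moving):       # moving target is not the monitoring target
        return False
    return in_area(moving)          # alarm only if inside the sensitive area
```

Each check short-circuits the next, so the more expensive steps (target identification, tracking) run only when the cheaper ones succeed.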
  • a background model may be established for the fixed monitoring area, such that each frame of video image in the surveillance video can be compared with the background model to determine the foreground image in the fixed monitoring area; assuming the background is stationary, the foreground image is the image of any meaningful moving object.
  • the server determines whether there is a moving target in the monitoring video as follows: for each video image in the monitoring video, the server acquires the pixel value of each pixel in the video image; based on the pixel value of each pixel and a specified background model, the server determines whether there is a foreground pixel in the video image; when there is a foreground pixel in the video image, the server determines that there is a moving target in the monitoring video; otherwise, the server determines that there is no moving target in the monitoring video.
  • the specified background model is configured to represent a distribution feature of the pixel values of each background pixel in the video image in the time domain, and the specified background model may be a mixed Gaussian model. Of course, the specified background model may also be other models. The embodiment does not specifically limit this.
  • the specified background model may be pre-established; for example, the specified background model may be established in advance according to the distribution of the pixel values of each pixel in a specified video image of the surveillance video; of course, the specified background model may also be established in other manners, which is not specifically limited in this embodiment.
  • the color feature is one of the essential features of an image; the color feature can be expressed as the pixel values of the pixels of the image, and a pixel value refers to the position, color, brightness, and the like of a pixel point of the image. Therefore, the server can determine, based on the pixel value of each pixel in the video image and the specified background model, whether there is a foreground pixel in the video image; when there is a foreground pixel in the video image, it indicates that there is a meaningful moving object in the video image, that is, there is a moving target in the monitoring video.
  • when the server determines whether there is a foreground pixel in the video image, the server can match the pixel value of each pixel with the specified background model; a pixel whose pixel value successfully matches the specified background model is a background pixel, and a pixel whose pixel value fails to match is a foreground pixel.
  • the process of the server matching the specified background model based on the pixel value of each pixel may refer to the related art, which is not elaborated in this embodiment of the present disclosure.
  • the server may further update the specified background model based on the pixel value of each pixel in the video image.
  • since the specified background model is pre-established by the server, and since uncontrollable factors such as illumination changes and camera shake cause the background to change, the accumulation of such changes could make the specified background model produce errors in moving-target detection. Therefore, the server may update the specified background model in real time based on the pixel value of each pixel in the video image, so that the specified background model remains adaptive and continuously keeps close to the distribution characteristics of the pixel values of the current background pixels in the time domain, thereby improving the accuracy of moving target detection.
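  • the background-subtraction logic described above can be sketched in Python; this is a deliberately simplified single-Gaussian-per-pixel variant of the mixed Gaussian model (one Gaussian per pixel instead of a mixture), and the match threshold `k`, learning rate, and initial variance are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

class GaussianBackgroundModel:
    """Simplified per-pixel Gaussian background model (illustrative)."""

    def __init__(self, first_frame, k=2.5, learning_rate=0.05):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # assumed initial variance
        self.k = k
        self.alpha = learning_rate

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # A pixel is foreground when it deviates more than k standard deviations
        # from the per-pixel background mean (i.e., it fails to match the model).
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update the model only at background pixels, so the moving target does
        # not corrupt the background statistics (the real-time update above).
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground

model = GaussianBackgroundModel(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[1, 2] = 220.0  # a bright moving object enters one pixel
mask = model.apply(frame)
```

  • a moving target then appears as a cluster of foreground pixels in the returned mask, while slow changes such as illumination drift are absorbed by the online update.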
  • the server determining whether the moving target is a monitoring target may include steps (1)-(3):
  • the server determines the characteristics of the moving target.
  • in order to determine whether the moving target is the monitoring target, the server needs to determine the features of the moving target. The server may determine the features of the moving target as follows: in the video image of the monitoring video in which the moving target is located, the server crops the region where the moving target is located to obtain a target image, and performs feature extraction on the target image to obtain the features of the moving target.
  • when the server crops the area where the moving target is located in the video image of the monitoring video to obtain the target image, the server may intercept the circumscribed rectangle of the moving target from the video image in which the moving target is located, and determine the circumscribed rectangle as the image region in which the moving target is located in the surveillance video, that is, the target image.
  • the server may further acquire foreground pixel points from the video image in which the moving target is located, and combine the acquired foreground pixel points to obtain an image region in which the moving target is located in the monitoring video, that is, the target image.
  • the server may also clear the background pixels in the video image in which the moving target is located to obtain the image region in which the moving target is located in the monitoring video, that is, the target image, where a background pixel is a pixel whose pixel value successfully matches the specified background model.
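  • the circumscribed-rectangle crop described above can be sketched as follows, assuming the foreground-pixel mask has already been obtained from the background model; the function name is illustrative:

```python
import numpy as np

def crop_target_image(video_image, foreground_mask):
    """Crop the circumscribed rectangle of the foreground pixels to
    obtain the target image of the moving target."""
    ys, xs = np.nonzero(foreground_mask)
    if ys.size == 0:
        return None  # no moving target in this frame
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    return video_image[top:bottom + 1, left:right + 1]

image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True  # foreground pixels of the moving target
target = crop_target_image(image, mask)
```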
  • the server may perform feature extraction on the target image by using a specified feature extraction algorithm; the specified feature extraction algorithm may be a wavelet transform method, a least squares method, a boundary direction histogram method, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the process of performing feature extraction on the target image by the specified feature extraction algorithm may be referred to the related art, and the embodiments of the present disclosure are not described in detail herein.
  • the moving target may have one or more features, and a feature may be a color feature, a texture feature, a shape feature, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the server determines the degree of matching between the feature of the moving target and the feature of the monitoring target.
  • in order to determine whether the moving target is the monitoring target, the server needs to match the features of the moving target with the features of the monitoring target to determine the matching degree between the features of the moving target and the features of the monitoring target. The server may determine the matching degree in either of the following ways: in one way, the server respectively matches the features of the moving target with the features of the monitoring target to determine the number of features that are successfully matched, calculates the ratio of the number of successfully matched features to the number of features of the monitoring target, and determines the ratio as the matching degree between the features of the moving target and the features of the monitoring target; in another way, when the moving target has a single feature, the server determines the similarity between the feature of the moving target and the feature of the monitoring target to obtain a feature similarity, and determines the feature similarity as the matching degree, and when the moving target has a plurality of features, the server determines the similarities between the plurality of features of the moving target and the corresponding features of the monitoring target to obtain a plurality of feature similarities, calculates the weighted value of the plurality of feature similarities, and determines the weighted value as the matching degree between the features of the moving target and the features of the monitoring target.
  • the feature of the monitoring target may be one or more, which is not specifically limited in the embodiment of the present disclosure.
  • the process of the server matching the features of the moving target with the features of the monitoring target may refer to the related art, and is not elaborated in the embodiments of the present disclosure.
  • when calculating the weighted value of the plurality of feature similarities, the server may multiply each feature similarity by the weight corresponding to that feature to obtain a plurality of values, and add the plurality of values to obtain the weighted value of the plurality of feature similarities.
  • the weight corresponding to each of the multiple features refers to the amount of reference value that the feature can provide when determining whether the moving target is the monitoring target, and the weights corresponding to the multiple features can be preset.
  • for example, the moving target has one feature, a color feature; the server matches the color feature of the moving target with the color feature of the monitoring target to obtain a color feature similarity of 0.8, and the server can determine 0.8 as the matching degree between the feature of the moving target and the feature of the monitoring target.
  • for another example, the moving target has a plurality of features, namely a color feature and a texture feature; the server matches the color feature of the moving target with the color feature of the monitoring target to obtain a color feature similarity of 0.8, and matches the texture feature of the moving target with the texture feature of the monitoring target to obtain a texture feature similarity of 0.6; if the weight corresponding to the color feature is 1/2 and the weight corresponding to the texture feature is 1/2, the weighted value of the color feature similarity and the texture feature similarity is 0.8 × 1/2 + 0.6 × 1/2 = 0.7, and the server determines 0.7 as the matching degree between the features of the moving target and the features of the monitoring target.
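  • the weighted matching degree worked through in this example can be sketched as a short computation; the function name and the assumption that the weights sum to 1 are illustrative:

```python
def matching_degree(similarities, weights):
    """Weighted sum of per-feature similarities; each weight reflects how
    much reference value that feature provides and the weights are assumed
    to sum to 1."""
    return sum(s * w for s, w in zip(similarities, weights))

# Color similarity 0.8 with weight 1/2, texture similarity 0.6 with weight 1/2:
# 0.8 * 0.5 + 0.6 * 0.5 = 0.7
degree = matching_degree([0.8, 0.6], [0.5, 0.5])
```

  • comparing the resulting degree with the specified value (0.7 in the later example) then decides whether the moving target is the monitoring target.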
  • the server may further receive setting information sent by the terminal, where the setting information carries the monitoring target identification information; based on the monitoring target identification information, the server obtains the tracking video of the monitoring target from the stored historical video; from each frame of video image of the tracking video, the server acquires a tracking image of the monitoring target; and the server performs feature extraction on the tracking image of the monitoring target to obtain the features of the monitoring target.
  • the setting information also carries the sensitive area information corresponding to the monitoring target, where the sensitive area information is used to obtain the sensitive area.
  • by the monitoring target identification information, different monitoring targets can be distinguished.
  • when the monitoring target is a person, the monitoring target identification information may be a facial feature of the person, etc.; when the monitoring target is an object with a fixed shape, the monitoring target identification information may be the shape of the object; when the monitoring target is a pet, the monitoring target identification information can be obtained by scanning a two-dimensional code carried by the pet. Of course, the monitoring target identification information can also be image information of the monitoring target, and the like, which is not specifically limited in the embodiment of the present disclosure.
  • the sensitive area information can be used to distinguish different sensitive areas; the sensitive area information can be the edge information of the sensitive area, and the edge information can be the coordinates of the pixels on the edge of the sensitive area in the video image; of course, the sensitive area information may also be other forms of information, which is not specifically limited in the embodiment of the present disclosure.
  • when acquiring the tracking video of the monitoring target from the stored historical video based on the monitoring target identification information, the server obtains the tracking video of the monitoring target from the stored historical video by using a specified tracking algorithm according to the monitoring target identification information; the specified tracking algorithm may be a particle swarm optimization algorithm, a Continuously Adaptive Mean Shift (English: Continuously Adaptive Mean-SHIFT, abbreviated as: CamShift) algorithm, or the like.
  • the process of obtaining the tracking video of the monitoring target from the stored historical video by using the specified tracking algorithm may refer to related technologies, and the embodiments of the present disclosure are not described in detail herein.
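  • the core iteration behind CamShift-style tracking can be sketched as a plain mean-shift step over a 2-D target-probability map: the tracking window repeatedly moves to the weighted centroid of the probabilities inside it until it stops moving. This is a minimal illustrative sketch, not the full CamShift algorithm (which additionally derives the probability map from a color histogram and adapts the window size and orientation):

```python
import numpy as np

def mean_shift(weights, cx, cy, half, iters=20):
    """Shift a square window of half-width `half` over a 2-D weight map
    until its center coincides with the weighted centroid inside it."""
    h, w = weights.shape
    for _ in range(iters):
        y0, y1 = max(0, cy - half), min(h, cy + half + 1)
        x0, x1 = max(0, cx - half), min(w, cx + half + 1)
        window = weights[y0:y1, x0:x1]
        total = window.sum()
        if total == 0:
            break  # no target evidence inside the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * window).sum() / total))
        nx = int(round((xs * window).sum() / total))
        if (ny, nx) == (cy, cx):
            break  # converged on the target
        cy, cx = ny, nx
    return cx, cy

# A blob of target probability centered at (x=12, y=7); start the window off-target.
weights = np.zeros((20, 20))
weights[6:9, 11:14] = 1.0
x, y = mean_shift(weights, cx=8, cy=5, half=4)
```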
  • the server can perform feature extraction on the tracking image of the monitoring target by using the specified feature extraction algorithm to obtain the features of the monitoring target.
  • the terminal may also obtain the historical video, for example, by sending a historical video acquisition request to the server so that the server returns the historical video; the terminal then plays the historical video, and during the playing of the historical video, determines the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on the video images of the historical video.
  • the operation of the terminal determining, based on the video image of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target may include: when the terminal receives a first selection instruction based on the video image of the historical video, the terminal determines the object selected by the first selection instruction as the monitoring target; when the terminal receives a second selection instruction based on the video image of the historical video, the terminal determines the area selected by the second selection instruction as the sensitive area corresponding to the monitoring target; the terminal then acquires the monitoring target identification information of the monitoring target and acquires the sensitive area information of the sensitive area.
  • alternatively, the terminal acquires a first area drawn in the video image of the historical video and a target object selected in the video image, where the target object is an object included in a second area drawn in the video image, or the target object is an object selected by a selection operation detected in the video image; when a preset gesture operation is detected on at least one of the first area and the target object, the terminal determines the first area as the sensitive area corresponding to the monitoring target and determines the target object as the monitoring target, and then the terminal acquires the monitoring target identification information of the monitoring target and acquires the sensitive area information of the sensitive area.
  • the first selection instruction is used to select the monitoring target from the objects included in the video image of the historical video; the first selection instruction may be triggered by the user through a first specified operation, and the first specified operation may be a click operation, a double-click operation, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the second selection instruction is used to select the sensitive area corresponding to the monitoring target from the areas included in the video image of the historical video; the second selection instruction may be triggered by the user through a second specified operation, and the second specified operation may be a sliding operation or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the first area and the second area are both closed areas or nearly closed areas; the first area may include one or more areas, and the second area may also include one or more areas; the first area may or may not include the second area, which is not specifically limited in the embodiment of the present disclosure.
  • the selection operation is used to select the target object from the objects included in the video image; the selection operation may be triggered by the user and may be a click operation, a double-click operation, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the preset gesture operation is used to determine the monitoring target and the sensitive area; the preset gesture operation may be triggered by the user and may be a fork operation, a tick operation, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • for example, the preset gesture operation is a fork operation: the user draws a first area on the video image of the historical video and draws a second area, which includes the target object, on the video image; the user then draws a fork on the first area, or draws a fork on the second area, or draws a fork on both the first area and the second area; after that, the terminal can determine the first area as the sensitive area and determine the target object included in the second area as the monitoring target, and then the terminal can obtain the monitoring target identification information of the monitoring target and obtain the sensitive area information of the sensitive area.
  • for another example, the selection operation is a click operation and the preset gesture operation is a fork operation: the user draws the first area in the video image of the historical video and clicks the target object in the video image; the user then draws a fork on the first area or on the target object, after which the terminal may determine the first area as the sensitive area and determine the target object as the monitoring target, and then the terminal can obtain the monitoring target identification information of the monitoring target and obtain the sensitive area information of the sensitive area.
  • in the embodiment of the present disclosure, the user may manually draw the first area on the video image and manually select the target object, and, by a preset gesture operation, determine the first area as the sensitive area and determine the target object as the monitoring target; the monitoring target and the sensitive area are thus determined simply and intuitively, which improves the efficiency with which the terminal determines the monitoring target and the sensitive area.
  • when the matching degree is greater than a specified value, the server determines that the moving target is the monitoring target; otherwise, the server determines that the moving target is not the monitoring target. When the matching degree between the features of the moving target and the features of the monitoring target is greater than the specified value, it indicates that the features of the moving target are highly similar to the features of the monitoring target, so the server determines that the moving target is the monitoring target; when the matching degree is less than or equal to the specified value, it indicates that the features of the moving target differ significantly from the features of the monitoring target, that is, the moving target is unlikely to be the monitoring target, and the server can determine that the moving target is not the monitoring target.
  • the specified value may be set in advance, and the specified value may be 0.7, 0.8, 0.9, etc., which is not specifically limited in the embodiment of the present disclosure.
  • for example, the specified value is 0.7; when the matching degree between the features of the moving target and the features of the monitoring target is 0.8, since 0.8 > 0.7, the server can determine that the moving target is the monitoring target; when the matching degree is 0.6, since 0.6 < 0.7, the server can determine that the moving target is not the monitoring target.
  • the server determining whether the monitoring target is located in the sensitive area may be as follows: the server performs target tracking on the monitoring target to obtain the current location of the monitoring target, and then determines, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area.
  • when the server performs target tracking on the monitoring target to obtain the current location of the monitoring target, the server can perform target tracking on the monitoring target to obtain a tracking image of the monitoring target, and determine the current location of the monitoring target based on the tracking image and a specified coordinate system.
  • the specified coordinate system may be established in advance, for example, based on the monitoring area of the smart camera device; of course, the specified coordinate system may also be established in other manners, such as based on the lens of the smart camera device, which is not specifically limited in the embodiment of the present disclosure.
  • the server may determine the current location of the monitoring target by using the specified positioning algorithm based on the tracking image and the specified coordinate system.
  • the specified positioning algorithm may be preset, and the specified positioning algorithm may be a region growing algorithm, a region expanding algorithm, or the like, which is not specifically limited in the embodiment of the present disclosure.
  • the process of the server determining the current location of the monitoring target by using the specified positioning algorithm based on the tracking image and the specified coordinate system may refer to the related art, and is not described in detail in the embodiments of the present disclosure.
  • when the server determines, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area, the server determines the target area in which the monitoring target is currently located based on the current location of the monitoring target, and determines whether there is an overlapping area between the current target area of the monitoring target and the sensitive area; when there is an overlapping area between the target area and the sensitive area, the server determines that the monitoring target is located in the sensitive area; otherwise, the server determines that the monitoring target is not located in the sensitive area.
  • the process of the server determining whether the target area and the sensitive area of the monitoring target are currently in the overlapping area may refer to related technologies, and the embodiments of the present disclosure are not described in detail herein.
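  • the overlap test between the monitoring target's current target area and the sensitive area can be sketched as follows, under the simplifying assumption that both areas are axis-aligned rectangles in the specified coordinate system (the disclosure does not fix the shape of either area):

```python
def rectangles_overlap(a, b):
    """Each rectangle is (left, top, right, bottom) in the specified
    coordinate system; returns True when the two areas share any region."""
    a_left, a_top, a_right, a_bottom = a
    b_left, b_top, b_right, b_bottom = b
    return not (a_right <= b_left or b_right <= a_left or
                a_bottom <= b_top or b_bottom <= a_top)

# The target area partially enters the sensitive area, so the alarm condition holds.
target_area = (5, 5, 15, 15)
sensitive_area = (10, 0, 30, 10)
in_sensitive = rectangles_overlap(target_area, sensitive_area)
```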
  • in the second mode, the server determines whether there is a moving target in the sensitive area of the monitoring video; when there is a moving target in the sensitive area, the server determines whether the moving target is the monitoring target; when the moving target is the monitoring target, the server determines that there is a monitoring target in the sensitive area of the monitoring video.
  • an area background model may be established for the background in the sensitive area, so that the area image of the sensitive area can be compared with the area background model to determine the foreground image in the sensitive area; the foreground image refers to the image of any meaningful moving object, assuming the background is stationary.
  • the server determines whether there is a moving target in the sensitive area of the monitoring video as follows: for each video image in the monitoring video, the sensitive area is cropped to obtain the area image of the sensitive area; the server acquires the pixel value of each pixel in the area image; based on the pixel value of each pixel and a specified area background model, the server determines whether there is a foreground pixel in the area image; when there is a foreground pixel in the area image, the server determines that there is a moving target in the sensitive area of the monitoring video; otherwise, the server determines that there is no moving target in the sensitive area of the monitoring video.
  • the specified area background model is used to represent the distribution feature, in the time domain, of the pixel values of each background pixel in the area image of the sensitive area; the specified area background model may be a mixed Gaussian model, and of course it may also be another model, which is not specifically limited in the embodiment of the present disclosure.
  • the specified area background model may be pre-established; for example, the specified area background model may be established in advance according to the distribution of the pixel values of each pixel in a specified area image of the sensitive area; of course, the specified area background model may also be established in other manners, which is also not specifically limited in the embodiment of the present disclosure.
  • the server may further update the specified area background model based on the pixel value of each pixel in the area image.
  • the process of determining whether the target is a monitoring target in the sensitive area is similar to the determining process in the first mode of step 402 in the first embodiment.
  • in the second mode, only the area image of the sensitive area is detected to determine whether there is a monitoring target in the sensitive area, thereby effectively avoiding the interference that images in areas of the video image other than the sensitive area cause to the detection, which improves detection efficiency and detection accuracy.
  • in step 403, when there is a monitoring target in the sensitive area of the monitoring video, the server sends alarm information to the terminal so that the terminal performs an alarm.
  • when there is a monitoring target in the sensitive area of the monitoring video, the server may send alarm information to the terminal; the alarm information is used to remind the user that the monitoring target is located in the sensitive area.
  • the terminal can be connected to the server through a wired network or a wireless network.
  • after receiving the alarm information, the terminal can perform an alarm, for example, by directly playing the alarm information through a speaker set on the terminal; the terminal can also perform the alarm by other means, which is not specifically limited in the embodiment of the present disclosure.
  • in the embodiment of the present disclosure, the server acquires the monitoring video and determines whether there is a monitoring target in the sensitive area of the monitoring video; when there is a monitoring target in the sensitive area, the server sends alarm information to the terminal so that the terminal performs an alarm, thereby preventing the occurrence of an unsafe event.
  • FIG. 5 is a block diagram of an alarm device, according to an exemplary embodiment.
  • the apparatus includes an acquisition module 501, a determination module 502, and a transmission module 503.
  • the obtaining module 501 is configured to acquire a monitoring video.
  • the determining module 502 is configured to determine whether a monitoring target exists in a sensitive area of the monitoring video
  • the sending module 503 is configured to send an alarm message to the terminal when the monitoring target exists in the sensitive area, so that the terminal performs an alarm.
  • the determining module 502 includes a first determining unit 5021, a monitoring target identifying unit 5022, a second determining unit 5023, and a first determining unit 5024.
  • the first determining unit 5021 is configured to determine whether there is a moving target in the monitoring video
  • the monitoring target identification unit 5022 is configured to determine whether the moving target is a monitoring target when there is a moving target in the monitoring video;
  • the second determining unit 5023 is configured to determine whether the monitoring target is located in the sensitive area when the moving target is the monitoring target;
  • the first determining unit 5024 is configured to determine that there is a monitoring target in the sensitive area of the monitoring video when the monitoring target is located in the sensitive area.
  • the determining module 502 includes a third determining unit 5025, a monitoring target identifying unit 5022, and a second determining unit 5026.
  • the third determining unit 5025 is configured to determine whether there is a moving target in the sensitive area of the monitoring video
  • the monitoring target identifying unit 5022 is configured to determine whether the moving target is a monitoring target when there is a moving target in the sensitive area;
  • the second determining unit 5026 is configured to determine that there is a monitoring target in the sensitive area of the monitoring video when the moving target is the monitoring target.
  • the monitoring target identifying unit 5022 includes a first determining subunit 50221, a second determining subunit 50222, and a third determining subunit 50223.
  • a first determining subunit 50221 configured to determine a feature of the moving target
  • a second determining subunit 50222 configured to determine a degree of matching between a feature of the moving target and a feature of the monitoring target
  • the third determining subunit 50223 is configured to determine that the moving target is a monitoring target when the matching degree is greater than the specified value.
  • the first determining subunit 50221 is configured to:
  • the area where the moving target is located is cropped to obtain a target image
  • Feature extraction is performed on the target image to obtain features of the moving target.
  • the monitoring target identification unit 5022 further includes a receiving subunit 50224, a first obtaining subunit 50225, a second obtaining subunit 50226, and an extracting subunit 50227.
  • the receiving subunit 50224 is configured to receive setting information sent by the terminal, where the setting information carries monitoring target identification information;
  • the first obtaining sub-unit 50225 is configured to acquire, according to the monitoring target identification information, a tracking video of the monitoring target from the stored historical video;
  • the second acquisition subunit 50226 is configured to acquire a tracking image of the monitoring target from each frame of video image of the tracking video;
  • the extracting sub-unit 50227 is configured to perform feature extraction on the tracking image of the monitoring target to obtain a feature of the monitoring target.
  • the setting information further carries sensitive area information corresponding to the monitoring target, where the sensitive area information is used to acquire the sensitive area.
  • the second determining unit 5023 includes a tracking subunit 50231, and a determining subunit 50232.
  • the tracking subunit 50231 is configured to perform target tracking on the monitoring target when the moving target is a monitoring target, and obtain a current location of the monitoring target;
  • the determining sub-unit 50232 is configured to determine whether the monitoring target is located in the sensitive area based on the current location of the monitoring target.
  • in the embodiment of the present disclosure, the server acquires the monitoring video and determines whether there is a monitoring target in the sensitive area of the monitoring video; when there is a monitoring target in the sensitive area, the server sends alarm information to the terminal so that the terminal performs an alarm, thereby preventing the occurrence of an unsafe event.
  • FIG. 11 is a block diagram of an alarm device, according to an exemplary embodiment.
  • the apparatus includes a first transmitting module 1101 and an alarm module 1102.
  • the first sending module 1101 is configured to send setting information to the server, where the setting information carries the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires the monitoring video and returns alarm information when there is a monitoring target in the sensitive area of the monitoring video;
  • the alarm module 1102 is configured to perform an alarm based on the alarm information when receiving the alarm information returned by the server.
  • the apparatus further includes an obtaining module 1103, a playing module 1104, and a determining module 1105.
  • the obtaining module 1103 is configured to acquire a historical video.
  • the playing module 1104 is configured to play a historical video
  • the determining module 1105 is configured to determine, according to the video image of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target during the playing of the historical video.
  • the determining module 1105 includes a first determining unit 11051, a second determining unit 11052, and a first obtaining unit 11053.
  • the first determining unit 11051 is configured to, during playback of the historical video, determine the object selected by a first selection instruction as the monitoring target when the first selection instruction is received based on a video image of the historical video;
  • the second determining unit 11052 is configured to determine the area selected by a second selection instruction as the sensitive area corresponding to the monitoring target when the second selection instruction is received based on a video image of the historical video;
  • the first obtaining unit 11053 is configured to acquire monitoring target identification information of the monitoring target, and acquire sensitive area information of the sensitive area.
  • the determining module 1105 includes a second obtaining unit 11054, a third determining unit 11055, and a third obtaining unit 11056.
  • the second obtaining unit 11054 is configured to acquire a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image;
  • the third determining unit 11055 is configured to, when a preset gesture operation is detected on at least one of the first area and the target object, determine the first area as the sensitive area corresponding to the monitoring target, and determine the target object as the monitoring target;
  • the third obtaining unit 11056 is configured to acquire monitoring target identification information of the monitoring target, and acquire sensitive area information of the sensitive area.
  • the terminal sends the setting information to the server, the setting information carrying the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video; when the terminal receives the alarm information, the terminal can raise an alarm, thereby preventing the occurrence of unsafe events.
  • FIG. 15 is a block diagram of an apparatus 1500 for alerting, according to an exemplary embodiment.
  • device 1500 can be provided as a server.
  • apparatus 1500 includes a processing component 1522 that further includes one or more processors, and memory resources represented by memory 1532, configured to store instructions executable by processing component 1522, such as an application.
  • An application stored in memory 1532 can include one or more modules each corresponding to a set of instructions.
  • Apparatus 1500 can also include a power supply component 1526 configured to perform power management of apparatus 1500, a wired or wireless network interface 1550 configured to connect apparatus 1500 to the network, and an input/output (I/O) interface 1558.
  • Device 1500 can operate based on an operating system stored in memory 1532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • processing component 1522 is configured to execute instructions to perform an alarm method, the method comprising: acquiring a surveillance video; determining whether a monitoring target is present in a sensitive area of the surveillance video; and when the monitoring target is present in the sensitive area, sending alarm information to a terminal so that the terminal raises an alarm.
  • determining whether a monitoring target is present in a sensitive area of the surveillance video includes: determining whether a moving target is present in the surveillance video; when the moving target is present, determining whether the moving target is the monitoring target; when the moving target is the monitoring target, determining whether the monitoring target is located in the sensitive area; and when the monitoring target is located in the sensitive area, determining that the monitoring target is present in the sensitive area of the surveillance video.
  • alternatively, determining whether a monitoring target is present in a sensitive area of the surveillance video includes: determining whether a moving target is present in the sensitive area of the surveillance video; when the moving target is present in the sensitive area, determining whether the moving target is the monitoring target; and when the moving target is the monitoring target, determining that the monitoring target is present in the sensitive area of the surveillance video.
  • determining whether the moving target is the monitoring target includes: determining features of the moving target; determining a matching degree between the features of the moving target and features of the monitoring target; and when the matching degree is greater than a specified value, determining that the moving target is the monitoring target.
  • determining the features of the moving target includes: cropping, in a video image of the surveillance video, the region where the moving target is located to obtain a target image; and performing feature extraction on the target image to obtain the features of the moving target.
  • before determining the matching degree between the features of the moving target and the features of the monitoring target, the method further includes: receiving setting information sent by the terminal, the setting information carrying monitoring target identification information; acquiring, based on the monitoring target identification information, a tracking video of the monitoring target from stored historical video; acquiring a tracking image of the monitoring target from each frame of the tracking video; and performing feature extraction on the tracking images of the monitoring target to obtain the features of the monitoring target.
  • the setting information further carries sensitive area information corresponding to the monitoring target, the sensitive area information being used to acquire the sensitive area.
  • determining whether the monitoring target is located in the sensitive area includes: performing target tracking on the monitoring target to obtain the current location of the monitoring target; and determining, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area.
  • the server acquires the surveillance video and determines whether a monitoring target is present in a sensitive area of the surveillance video. When the monitoring target is present in the sensitive area, the server sends alarm information to the terminal so that the terminal raises an alarm, thereby preventing the occurrence of unsafe events.
  • FIG. 16 is a block diagram of an apparatus 1600 for alerting, according to an exemplary embodiment.
  • device 1600 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • apparatus 1600 can include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, and Communication component 1616.
  • Processing component 1602 typically controls the overall operation of device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 1602 can include one or more processors 1620 to execute instructions to perform all or part of the steps of the above described alarm method.
  • processing component 1602 can include one or more modules to facilitate interaction between component 1602 and other components.
  • the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
  • Memory 1604 is configured to store various types of data to support operation at device 1600. Examples of such data include instructions for any application or method operating on device 1600, contact data, phone book data, messages, pictures, videos, and the like. Memory 1604 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 1606 provides power to various components of device 1600.
  • Power component 1606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power to device 1600.
  • Multimedia component 1608 includes a screen that provides an output interface between the device 1600 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1608 includes a front camera and/or a rear camera. When the device 1600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focus and optical zoom capability.
  • the audio component 1610 is configured to output and/or input an audio signal.
  • audio component 1610 includes a microphone (MIC) that is configured to receive an external audio signal when device 1600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 1604 or transmitted via communication component 1616.
  • the audio component 1610 also includes a speaker configured to output an audio signal.
  • the I/O interface 1612 provides an interface between the processing component 1602 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 1614 includes one or more sensors configured to provide a status assessment of various aspects to device 1600.
  • sensor component 1614 can detect the open/closed state of device 1600 and the relative positioning of components, such as the display and keypad of device 1600; sensor component 1614 can also detect a change in position of device 1600 or a component of device 1600, the presence or absence of user contact with device 1600, the orientation or acceleration/deceleration of device 1600, and temperature changes of device 1600.
  • Sensor assembly 1614 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, configured for use in imaging applications.
  • the sensor assembly 1614 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1616 is configured to facilitate wired or wireless communication between device 1600 and other devices.
  • the device 1600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 1616 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 1616 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • device 1600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, configured to perform the above alarm method.
  • non-transitory computer readable storage medium comprising instructions, such as a memory 1604 comprising instructions executable by processor 1620 of apparatus 1600 to perform the above method.
  • the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • setting information is sent to a server, the setting information carrying the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video;
  • when the alarm information returned by the server is received, an alarm is raised based on the alarm information.
  • before sending the setting information to the server, the method further includes: acquiring a historical video, playing the historical video, and determining, during playback and based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target.
  • determining the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on the video images of the historical video includes: when a first selection instruction is received based on a video image of the historical video, determining the object selected by the first selection instruction as the monitoring target; when a second selection instruction is received based on a video image of the historical video, determining the area selected by the second selection instruction as the sensitive area corresponding to the monitoring target; and acquiring the monitoring target identification information of the monitoring target and the sensitive area information of the sensitive area.
  • alternatively, determining the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on the video images of the historical video includes: acquiring a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image; when a preset gesture operation is detected on at least one of the first area and the target object, determining the first area as the sensitive area corresponding to the monitoring target and the target object as the monitoring target; and acquiring the monitoring target identification information of the monitoring target and the sensitive area information of the sensitive area.
  • the terminal sends the setting information to the server, the setting information carrying the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video; when the terminal receives the alarm information, the terminal can raise an alarm, thereby preventing the occurrence of unsafe events.
  • the server acquires the surveillance video and determines whether a monitoring target is present in a sensitive area of the surveillance video. When the monitoring target is present in the sensitive area, the server sends alarm information to the terminal so that the terminal raises an alarm, thereby preventing the occurrence of unsafe events.


Abstract

An alarm method and device, belonging to the field of Internet technology. The method includes: acquiring a surveillance video; determining whether a monitoring target is present in a sensitive area of the surveillance video; and when the monitoring target is present in the sensitive area, sending alarm information to a terminal so that the terminal raises an alarm.

Description

Alarm Method and Device
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese Patent Application No. 201510713143.1, filed on October 28, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of Internet technology, and in particular to an alarm method and device.
Background
With the popularity of cameras, real-time monitoring with cameras has become increasingly common. During real-time monitoring, a camera captures images of a monitored area. The monitored area, however, often contains sensitive areas, such as the vicinity of a power socket, a door, or a window, and an unsafe event may occur when a particular object is located in such a sensitive area; for example, a child near a power socket may be in danger. An alarm method is therefore urgently needed to prevent the occurrence of unsafe events.
Summary
To overcome the problems existing in the related art, embodiments of the present disclosure provide an alarm method and device.
According to a first aspect of the embodiments of the present disclosure, an alarm method is provided, the method including:
acquiring a surveillance video;
determining whether a monitoring target is present in a sensitive area of the surveillance video;
when the monitoring target is present in the sensitive area, sending alarm information to a terminal so that the terminal raises an alarm.
With reference to the first aspect, in a first possible implementation of the first aspect, the determining whether a monitoring target is present in a sensitive area of the surveillance video includes:
determining whether a moving target is present in the surveillance video;
when the moving target is present in the surveillance video, determining whether the moving target is the monitoring target;
when the moving target is the monitoring target, determining whether the monitoring target is located in the sensitive area;
when the monitoring target is located in the sensitive area, determining that the monitoring target is present in the sensitive area of the surveillance video.
With reference to the first aspect, in a second possible implementation of the first aspect, the determining whether a monitoring target is present in a sensitive area of the surveillance video includes:
determining whether a moving target is present in the sensitive area of the surveillance video;
when the moving target is present in the sensitive area, determining whether the moving target is the monitoring target;
when the moving target is the monitoring target, determining that the monitoring target is present in the sensitive area of the surveillance video.
With reference to the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, the determining whether the moving target is the monitoring target includes:
determining features of the moving target;
determining a matching degree between the features of the moving target and features of the monitoring target;
when the matching degree is greater than a specified value, determining that the moving target is the monitoring target.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the determining features of the moving target includes:
cropping, in a video image of the surveillance video, the region where the moving target is located to obtain a target image;
performing feature extraction on the target image to obtain the features of the moving target.
With reference to the third possible implementation of the first aspect, in a fifth possible implementation of the first aspect, before the determining a matching degree between the features of the moving target and features of the monitoring target, the method further includes:
receiving setting information sent by the terminal, the setting information carrying monitoring target identification information;
acquiring, based on the monitoring target identification information, a tracking video of the monitoring target from stored historical video;
acquiring a tracking image of the monitoring target from each frame of video image of the tracking video;
performing feature extraction on the tracking images of the monitoring target to obtain the features of the monitoring target.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the setting information further carries sensitive area information corresponding to the monitoring target, the sensitive area information being used to acquire the sensitive area.
With reference to the first possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the determining whether the monitoring target is located in the sensitive area includes:
performing target tracking on the monitoring target to obtain the current location of the monitoring target;
determining, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area.
According to a second aspect of the embodiments of the present disclosure, an alarm method is provided, the method including:
sending setting information to a server, the setting information carrying monitoring target identification information and sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video;
when the alarm information returned by the server is received, raising an alarm based on the alarm information.
With reference to the second aspect, in a first possible implementation of the second aspect, before the sending setting information to a server, the method further includes:
acquiring a historical video and playing the historical video;
during playback of the historical video, determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
when a first selection instruction is received based on a video image of the historical video, determining the object selected by the first selection instruction as the monitoring target;
when a second selection instruction is received based on a video image of the historical video, determining the area selected by the second selection instruction as the sensitive area corresponding to the monitoring target;
acquiring the monitoring target identification information of the monitoring target, and acquiring the sensitive area information of the sensitive area.
With reference to the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
acquiring a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image;
when a preset gesture operation is detected on at least one of the first area and the target object, determining the first area as the sensitive area corresponding to the monitoring target, and determining the target object as the monitoring target;
acquiring the monitoring target identification information of the monitoring target, and acquiring the sensitive area information of the sensitive area.
According to a third aspect of the embodiments of the present disclosure, an alarm device is provided, the device including:
an acquisition module configured to acquire a surveillance video;
a judgment module configured to determine whether a monitoring target is present in a sensitive area of the surveillance video;
a sending module configured to, when the monitoring target is present in the sensitive area, send alarm information to a terminal so that the terminal raises an alarm.
With reference to the third aspect, in a first possible implementation of the third aspect, the judgment module includes:
a first judgment unit configured to determine whether a moving target is present in the surveillance video;
a monitoring target identification unit configured to, when the moving target is present in the surveillance video, determine whether the moving target is the monitoring target;
a second judgment unit configured to, when the moving target is the monitoring target, determine whether the monitoring target is located in the sensitive area;
a first determining unit configured to, when the monitoring target is located in the sensitive area, determine that the monitoring target is present in the sensitive area of the surveillance video.
With reference to the third aspect, in a second possible implementation of the third aspect, the judgment module includes:
a third judgment unit configured to determine whether a moving target is present in the sensitive area of the surveillance video;
a monitoring target identification unit configured to, when the moving target is present in the sensitive area, determine whether the moving target is the monitoring target;
a second determining unit configured to, when the moving target is the monitoring target, determine that the monitoring target is present in the sensitive area of the surveillance video.
With reference to the first or second possible implementation of the third aspect, in a third possible implementation of the third aspect, the monitoring target identification unit includes:
a first determining subunit configured to determine features of the moving target;
a second determining subunit configured to determine a matching degree between the features of the moving target and features of the monitoring target;
a third determining subunit configured to, when the matching degree is greater than a specified value, determine that the moving target is the monitoring target.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation of the third aspect, the first determining subunit is configured to:
when the moving target is present in the surveillance video, crop, in a video image of the surveillance video, the region where the moving target is located to obtain a target image;
perform feature extraction on the target image to obtain the features of the moving target.
With reference to the third possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the monitoring target identification unit further includes:
a receiving subunit configured to receive setting information sent by the terminal, the setting information carrying monitoring target identification information;
a first obtaining subunit configured to acquire, based on the monitoring target identification information, a tracking video of the monitoring target from stored historical video;
a second obtaining subunit configured to acquire a tracking image of the monitoring target from each frame of video image of the tracking video;
an extraction subunit configured to perform feature extraction on the tracking images of the monitoring target to obtain the features of the monitoring target.
With reference to the fifth possible implementation of the third aspect, in a sixth possible implementation of the third aspect, the setting information further carries sensitive area information corresponding to the monitoring target, the sensitive area information being used to acquire the sensitive area.
With reference to the first possible implementation of the third aspect, in a seventh possible implementation of the third aspect, the second judgment unit includes:
a tracking subunit configured to, when the moving target is the monitoring target, perform target tracking on the monitoring target to obtain the current location of the monitoring target;
a judgment subunit configured to determine, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area.
According to a fourth aspect of the embodiments of the present disclosure, an alarm device is provided, the device including:
a first sending module configured to send setting information to a server, the setting information carrying monitoring target identification information and sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video;
an alarm module configured to, when the alarm information returned by the server is received, raise an alarm based on the alarm information.
With reference to the fourth aspect, in a first possible implementation of the fourth aspect, the device further includes:
an acquisition module configured to acquire a historical video;
a playing module configured to play the historical video;
a determining module configured to, during playback of the historical video, determine, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the determining module includes:
a first determining unit configured to, during playback of the historical video, determine the object selected by a first selection instruction as the monitoring target when the first selection instruction is received based on a video image of the historical video;
a second determining unit configured to determine the area selected by a second selection instruction as the sensitive area corresponding to the monitoring target when the second selection instruction is received based on a video image of the historical video;
a first obtaining unit configured to acquire the monitoring target identification information of the monitoring target, and acquire the sensitive area information of the sensitive area.
With reference to the first possible implementation of the fourth aspect, in a third possible implementation of the fourth aspect, the determining module includes:
a second obtaining unit configured to acquire a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image;
a third determining unit configured to, when a preset gesture operation is detected on at least one of the first area and the target object, determine the first area as the sensitive area corresponding to the monitoring target, and determine the target object as the monitoring target;
a third obtaining unit configured to acquire the monitoring target identification information of the monitoring target, and acquire the sensitive area information of the sensitive area.
According to a fifth aspect of the embodiments of the present disclosure, an alarm device is provided, the device including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquire a surveillance video;
determine whether a monitoring target is present in a sensitive area of the surveillance video;
when the monitoring target is present in the sensitive area, send alarm information to a terminal so that the terminal raises an alarm.
According to a sixth aspect of the embodiments of the present disclosure, an alarm device is provided, the device including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
send setting information to a server, the setting information carrying monitoring target identification information and sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video;
when the alarm information returned by the server is received, raise an alarm based on the alarm information.
In the embodiments of the present disclosure, the server acquires a surveillance video and determines whether a monitoring target is present in a sensitive area of the surveillance video; when the monitoring target is present in the sensitive area, the server sends alarm information to the terminal so that the terminal raises an alarm, thereby preventing the occurrence of unsafe events.
It should be understood that the foregoing general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of an implementation environment involved in an alarm method according to an exemplary embodiment;
Fig. 2 is a flowchart of an alarm method according to an exemplary embodiment;
Fig. 3 is a flowchart of another alarm method according to an exemplary embodiment;
Fig. 4 is a flowchart of yet another alarm method according to an exemplary embodiment;
Fig. 5 is a block diagram of a first alarm device according to an exemplary embodiment;
Fig. 6 is a block diagram of a judgment module according to an exemplary embodiment;
Fig. 7 is a block diagram of another judgment module according to an exemplary embodiment;
Fig. 8 is a block diagram of a monitoring target identification unit according to an exemplary embodiment;
Fig. 9 is a block diagram of another monitoring target identification unit according to an exemplary embodiment;
Fig. 10 is a block diagram of a second judgment unit according to an exemplary embodiment;
Fig. 11 is a block diagram of a second alarm device according to an exemplary embodiment;
Fig. 12 is a block diagram of a third alarm device according to an exemplary embodiment;
Fig. 13 is a block diagram of a determining module according to an exemplary embodiment;
Fig. 14 is a block diagram of another determining module according to an exemplary embodiment;
Fig. 15 is a block diagram of a fourth alarm device according to an exemplary embodiment;
Fig. 16 is a block diagram of a fifth alarm device according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the present invention as detailed in the appended claims.
Fig. 1 is a schematic diagram of an implementation environment involved in an alarm method according to an exemplary embodiment. As shown in Fig. 1, the implementation environment may include a server 101, a smart camera device 102, and a terminal 103. The server 101 may be a single server, a server cluster composed of several servers, or a cloud computing service center; the smart camera device 102 may be a smart camera; and the terminal 103 may be a mobile phone, a computer, a tablet device, or the like. The server 101 and the smart camera device 102 may be connected via a network, and the server 101 and the terminal 103 may also be connected via a network. The server 101 is configured to receive the surveillance video sent by the smart camera device and to send alarm information to the terminal. The smart camera device 102 is configured to capture the surveillance video of the monitored area and send the surveillance video to the server. The terminal 103 is configured to receive the alarm information sent by the server and to raise an alarm.
Fig. 2 is a flowchart of an alarm method according to an exemplary embodiment. As shown in Fig. 2, the method is used in a server and includes the following steps.
In step 201, a surveillance video is acquired.
In step 202, it is determined whether a monitoring target is present in a sensitive area of the surveillance video.
In step 203, when the monitoring target is present in the sensitive area, alarm information is sent to a terminal so that the terminal raises an alarm.
In the embodiments of the present disclosure, the server acquires the surveillance video and determines whether a monitoring target is present in a sensitive area of the surveillance video; when the monitoring target is present in the sensitive area, the server sends alarm information to the terminal so that the terminal raises an alarm, thereby preventing the occurrence of unsafe events.
In another embodiment of the present disclosure, determining whether a monitoring target is present in a sensitive area of the surveillance video includes:
determining whether a moving target is present in the surveillance video;
when the moving target is present in the surveillance video, determining whether the moving target is the monitoring target;
when the moving target is the monitoring target, determining whether the monitoring target is located in the sensitive area;
when the monitoring target is located in the sensitive area, determining that the monitoring target is present in the sensitive area of the surveillance video.
Here, the server determines whether a moving target is present in the surveillance video, and only when a moving target is present does the server determine whether the moving target is the monitoring target. This effectively determines whether the monitoring target is present in the surveillance video and, in turn, whether the monitoring target is located in the sensitive area.
In another embodiment of the present disclosure, determining whether a monitoring target is present in a sensitive area of the surveillance video includes:
determining whether a moving target is present in the sensitive area of the surveillance video;
when the moving target is present in the sensitive area, determining whether the moving target is the monitoring target;
when the moving target is the monitoring target, determining that the monitoring target is present in the sensitive area of the surveillance video.
Here, the server determines whether a moving target is present in the sensitive area of the surveillance video, and only when a moving target is present in the sensitive area does the server determine whether the moving target is the monitoring target. This effectively determines whether the monitoring target is present in the sensitive area; moreover, the server does not need to examine regions other than the sensitive area, which effectively avoids interference from those other regions on the detection result and improves detection efficiency and accuracy.
In another embodiment of the present disclosure, determining whether the moving target is the monitoring target includes:
determining features of the moving target;
determining a matching degree between the features of the moving target and features of the monitoring target;
when the matching degree is greater than a specified value, determining that the moving target is the monitoring target.
When the matching degree between the features of the moving target and the features of the monitoring target is greater than the specified value, the difference between the two sets of features is small, that is, the moving target is very likely to be the monitoring target. Therefore, based on this matching degree, it can be effectively determined whether the moving target is the monitoring target, improving the accuracy of identifying the monitoring target.
In another embodiment of the present disclosure, determining features of the moving target includes:
cropping, in a video image of the surveillance video, the region where the moving target is located to obtain a target image;
performing feature extraction on the target image to obtain the features of the moving target.
Cropping the region where the moving target is located to obtain a target image makes it easier for the server to perform feature extraction on the target image to obtain the features of the moving target, improving the efficiency of feature extraction.
In another embodiment of the present disclosure, before determining the matching degree between the features of the moving target and the features of the monitoring target, the method further includes:
receiving setting information sent by the terminal, the setting information carrying monitoring target identification information;
acquiring, based on the monitoring target identification information, a tracking video of the monitoring target from stored historical video;
acquiring a tracking image of the monitoring target from each frame of video image of the tracking video;
performing feature extraction on the tracking images of the monitoring target to obtain the features of the monitoring target.
Here, the server acquires a tracking image of the monitoring target from each frame of video image of the tracking video and performs feature extraction on the tracking images, improving the accuracy of feature extraction.
In another embodiment of the present disclosure, the setting information further carries sensitive area information corresponding to the monitoring target, the sensitive area information being used to acquire the sensitive area.
Carrying the sensitive area corresponding to the monitoring target in the setting information makes it easy for the server to determine, based on that sensitive area, whether the monitoring target is present in the sensitive area.
In another embodiment of the present disclosure, determining whether the monitoring target is located in the sensitive area includes:
performing target tracking on the monitoring target to obtain the current location of the monitoring target;
determining, based on the current location of the monitoring target, whether the monitoring target is located in the sensitive area.
To determine whether the monitoring target is located in the sensitive area, the server needs to perform target tracking on the monitoring target to obtain its current location; based on the current location, whether the monitoring target is located in the sensitive area can be effectively determined.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
Fig. 3 is a flowchart of an alarm method according to an exemplary embodiment. As shown in Fig. 3, the method includes the following steps.
In step 301, setting information is sent to a server, the setting information carrying monitoring target identification information and sensitive area information corresponding to the monitoring target, so that the server acquires a surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video.
In step 302, when the alarm information returned by the server is received, an alarm is raised based on the alarm information.
In the embodiments of the present disclosure, the terminal sends the setting information to the server, the setting information carrying the monitoring target identification information and the sensitive area information corresponding to the monitoring target, so that the server acquires the surveillance video and returns alarm information when the monitoring target is present in the sensitive area of the surveillance video; when the terminal receives the alarm information, the terminal can raise an alarm, thereby preventing the occurrence of unsafe events.
In another embodiment of the present disclosure, before sending the setting information to the server, the method further includes:
acquiring a historical video and playing the historical video;
during playback of the historical video, determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target.
Since the server needs to determine whether the monitoring target is present in the sensitive area, the server needs to identify the sensitive area and the monitoring target. The terminal determines the monitoring target identification information and the sensitive area corresponding to the monitoring target based on video images of the historical video sent by the server, so that upon receiving the setting information the server can quickly identify the sensitive area and the monitoring target based on that information.
In another embodiment of the present disclosure, determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
when a first selection instruction is received based on a video image of the historical video, determining the object selected by the first selection instruction as the monitoring target;
when a second selection instruction is received based on a video image of the historical video, determining the area selected by the second selection instruction as the sensitive area corresponding to the monitoring target;
acquiring the monitoring target identification information of the monitoring target, and acquiring the sensitive area information of the sensitive area.
To prevent unsafe events from occurring when the monitoring target is located in the sensitive area, the user of the terminal needs to select the monitoring target and the sensitive area based on video images of the historical video, so that the server can monitor the monitoring target and the sensitive area.
In another embodiment of the present disclosure, determining, based on video images of the historical video, the monitoring target identification information and the sensitive area information corresponding to the monitoring target includes:
acquiring a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image;
when a preset gesture operation is detected on at least one of the first area and the target object, determining the first area as the sensitive area corresponding to the monitoring target, and determining the target object as the monitoring target;
acquiring the monitoring target identification information of the monitoring target, and acquiring the sensitive area information of the sensitive area.
Here, the terminal acquires the first area drawn in a video image of the historical video and the target object selected in the video image, and, when a preset gesture operation is detected on at least one of the first area and the target object, determines the first area as the sensitive area corresponding to the monitoring target and the target object as the monitoring target. The sensitive area and the target object can thus be determined simply and intuitively, the operation is straightforward, and the efficiency with which the terminal determines the monitoring target identification information and the sensitive area information is improved.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
Fig. 4 is a flowchart of an alarm method according to an exemplary embodiment. As shown in Fig. 4, the method includes the following steps.
In step 401, the server acquires a surveillance video.
It should be noted that the server may acquire the surveillance video from a smart camera device; of course, the smart camera device may also send the surveillance video to another device so that the server can acquire it from that device, which is not specifically limited in the embodiments of the present disclosure.
The smart camera device is configured to capture the surveillance video of the monitored area; for the capture process, reference may be made to the related art, and it is not elaborated here.
In addition, the smart camera device may communicate with the server or other devices through a wired or wireless network. When communicating through a wireless network, the smart camera device may do so through a built-in Wireless Fidelity (WiFi) chip, Bluetooth chip, or other wireless communication chip, which is not specifically limited in the embodiments of the present disclosure.
In step 402, the server determines whether a monitoring target is present in a sensitive area of the surveillance video.
To prevent unsafe events from occurring when the monitoring target is located in the sensitive area, the server needs to determine whether a monitoring target is present in the sensitive area of the surveillance video, which can be done in the following two ways.
First way: the server determines whether a moving target is present in the surveillance video; when a moving target is present, determines whether the moving target is the monitoring target; when the moving target is the monitoring target, determines whether the monitoring target is located in the sensitive area; and when the monitoring target is located in the sensitive area, determines that the monitoring target is present in the sensitive area of the surveillance video.
Since the smart camera device is generally fixed, that is, it captures the surveillance video of a fixed monitored area, a background model can be built for the background of the fixed monitored area in order to determine whether a moving target is present in the surveillance video. Each frame of video image in the surveillance video can then be compared against the background model to determine the foreground image of the fixed monitored area, where the foreground image is the image of any meaningful moving object under the assumption that the background is stationary.
Therefore, the server may determine whether a moving target is present in the surveillance video as follows: for each frame of video image in the surveillance video, the server obtains the pixel value of each pixel in the video image; based on the pixel value of each pixel and a specified background model, the server determines whether foreground pixels exist in the video image; when foreground pixels exist, the server determines that a moving target is present in the surveillance video; otherwise, it determines that no moving target is present.
The specified background model is configured to characterize the temporal distribution of the pixel value of each background pixel in a video image, and it may be a Gaussian mixture model; of course, other models may also be used, which is not specifically limited in the embodiments of the present disclosure.
In addition, the specified background model may be built in advance, for example based on the temporal distribution of the pixel value of each pixel in specified video images of the surveillance video; it may, of course, also be built in other ways, which is likewise not specifically limited in the embodiments of the present disclosure.
Since color is one of the essential features of an image and can be expressed through the pixel values of the image (a pixel value refers to values such as the position, color, and brightness of a pixel), the server can determine, based on the pixel value of each pixel in a video image and the specified background model, whether foreground pixels exist in the video image. When foreground pixels exist, a meaningful moving object exists in the video image, that is, a moving target is present in the surveillance video.
When making this determination, the server may match the pixel value of each pixel against the specified background model; when every pixel value matches the specified background model successfully, the server determines that no foreground pixel exists in the video image; otherwise, it determines that foreground pixels exist, the foreground pixels being those whose pixel values failed to match the specified background model.
For the process of matching pixel values against the specified background model, reference may be made to the related art; it is not elaborated in the embodiments of the present disclosure.
Further, after determining that no moving target is present in the surveillance video, the server may update the specified background model based on the pixel value of each pixel in the video image.
Since the specified background model is built in advance by the server, and unpredictable factors such as illumination changes and camera shake can change the background, the accumulated changes caused by these factors could make the specified background model misdetect moving targets. Therefore, when the server determines that no moving target is present in the surveillance video, the server may update the specified background model in real time based on the pixel value of each pixel in the video image, making the specified background model adaptive so that it keeps approaching the temporal distribution of the pixel values of the current background pixels, thereby improving the accuracy of moving target detection.
It should be noted that, for the process of updating the specified background model based on the pixel values of the video image, reference may be made to the related art; it is not elaborated here.
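The per-pixel foreground test and the background-model update just described can be sketched as follows. This is a minimal illustration rather than the patent's method: a simple running-average background model with a fixed difference threshold stands in for the Gaussian mixture model mentioned in the text, and all parameter values are assumptions.

```python
import numpy as np

def detect_foreground(frame, background, threshold=30.0):
    """Return a boolean mask marking pixels whose value deviates from the
    background model by more than the threshold (the foreground pixels)."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

def has_moving_target(frame, background, threshold=30.0):
    """A moving target is present when any foreground pixel exists."""
    return bool(detect_foreground(frame, background, threshold).any())

def update_background(background, frame, alpha=0.05):
    """When no moving target was found, blend the current frame into the
    background model so it adapts to gradual changes such as illumination."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float64)
```

In use, each incoming frame would be tested with `has_moving_target`, and `update_background` would be applied only to frames in which no moving target was detected, mirroring the update condition described in the text.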
When a moving target is present in the surveillance video, the server may determine whether the moving target is the monitoring target through steps (1) to (3):
(1) The server determines features of the moving target.
When a moving target is present in the surveillance video, in order to determine whether the moving target is the monitoring target, the server needs to determine the features of the moving target. The server may do so as follows: in a video image of the surveillance video, the server crops the region where the moving target is located to obtain a target image, and performs feature extraction on the target image to obtain the features of the moving target.
When cropping the region where the moving target is located to obtain the target image, the server may extract, from the video image containing the moving target, the bounding rectangle of the moving target and determine the bounding rectangle as the image region occupied by the moving target in the surveillance video, that is, the target image. Alternatively, the server may obtain the foreground pixels from the video image containing the moving target and combine them to obtain the target image. Alternatively again, the server may clear the background pixels from the video image containing the moving target to obtain the target image, where the background pixels are those whose pixel values matched the specified background model successfully.
In addition, when performing feature extraction on the target image, the server may use a specified feature extraction algorithm, which may be a wavelet transform method, a least squares method, a boundary histogram method, or the like; this is not specifically limited in the embodiments of the present disclosure. For the process of feature extraction using a specified feature extraction algorithm, reference may be made to the related art; it is not elaborated here.
It should be noted that the moving target may have one or more features, and a feature may be a color feature, a texture feature, a shape feature, or the like, which is not specifically limited in the embodiments of the present disclosure.
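As an illustration of the cropping and feature-extraction step, the sketch below cuts the bounding rectangle of a foreground mask out of a frame and computes a normalized grayscale histogram as a stand-in color feature; the specific feature extraction algorithms named in the text (wavelet transform, least squares, boundary histogram) are not reproduced here.

```python
import numpy as np

def crop_target(image, mask):
    """Crop the bounding rectangle of the foreground mask (the region where
    the moving target is located) to obtain the target image."""
    ys, xs = np.nonzero(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def color_feature(image, bins=16):
    """A simple stand-in color feature: the normalized grayscale histogram
    of the target image (values assumed in the 0-255 range)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

Cropping first, then extracting features only from the target image, reflects the efficiency point made above: the extractor never touches pixels outside the moving target's region.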
(2) The server determines the matching degree between the features of the moving target and the features of the monitoring target.
To determine whether the moving target is the monitoring target, the server needs to match the features of the moving target against the features of the monitoring target to determine the matching degree between them. When doing so, the server may match the features of the moving target against the features of the monitoring target one by one, count the number of successfully matched features, compute the ratio of that number to the number of features of the monitoring target, and determine the ratio as the matching degree. Alternatively, when the moving target has a single feature, the server determines the similarity between the feature of the moving target and the feature of the monitoring target, obtaining a feature similarity, and determines the feature similarity as the matching degree; when the moving target has multiple features, the server determines the similarities between the multiple features of the moving target and the multiple features of the monitoring target, obtaining multiple feature similarities, computes a weighted value of these feature similarities, and determines the weighted value as the matching degree.
It should be noted that the monitoring target may have one or more features, which is not specifically limited in the embodiments of the present disclosure.
In addition, for the process of matching the features of the moving target against the features of the monitoring target, reference may be made to the related art; it is not elaborated here.
For example, if the number of successfully matched features is 4 and the monitoring target has 5 features, the ratio of the two is 0.8, and the server may determine 0.8 as the matching degree between the features of the moving target and the features of the monitoring target.
When computing the weighted value of the multiple feature similarities, the server may multiply each feature similarity by the weight corresponding to that feature to obtain multiple values, and add these values to obtain the weighted value.
It should be noted that the weight corresponding to each feature reflects how much reference value that feature provides when determining whether the moving target is the monitoring target, and the weights may be set in advance.
For example, if the moving target has a single feature, a color feature, and the server matches the color feature of the moving target against the color feature of the monitoring target and obtains a color feature similarity of 0.8, the server may determine 0.8 as the matching degree between the features of the moving target and the features of the monitoring target.
As another example, if the moving target has multiple features, namely a color feature and a texture feature, and the server obtains a color feature similarity of 0.8 and a texture feature similarity of 0.6, then assuming the color feature has a weight of 1/2 and the texture feature has a weight of 1/2, the weighted value of the similarities is 1/2 × 0.8 + 1/2 × 0.6 = 0.7, and the server may determine 0.7 as the matching degree between the features of the moving target and the features of the monitoring target.
It should be noted that, for the process of determining the similarity between the features of the moving target and the features of the monitoring target, reference may be made to the related art; it is not elaborated here.
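The weighted matching degree just described, together with the specified-value comparison, can be sketched as follows; the default specified value of 0.7 and the equal weights are taken from the worked examples in the text and are otherwise arbitrary.

```python
def matching_degree(similarities, weights):
    """Multiply each feature similarity by the weight of that feature and
    sum the products, yielding the matching degree between the features of
    the moving target and the features of the monitoring target."""
    if len(similarities) != len(weights):
        raise ValueError("one weight is required per feature similarity")
    return sum(s * w for s, w in zip(similarities, weights))

def is_monitoring_target(similarities, weights, specified_value=0.7):
    """The moving target is determined to be the monitoring target when the
    matching degree is greater than the specified value."""
    return matching_degree(similarities, weights) > specified_value
```

With a color similarity of 0.8 and a texture similarity of 0.6, each weighted 1/2, the matching degree comes out to 0.7, reproducing the worked example above.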
Further, before determining the matching degree between the features of the moving target and the features of the monitoring target, the server may receive setting information sent by the terminal, the setting information carrying monitoring target identification information; acquire, based on the monitoring target identification information, a tracking video of the monitoring target from stored historical video; acquire a tracking image of the monitoring target from each frame of video image of the tracking video; and perform feature extraction on the tracking images of the monitoring target to obtain the features of the monitoring target.
The setting information further carries sensitive area information corresponding to the monitoring target, the sensitive area information being used to acquire the sensitive area.
It should be noted that different monitoring targets can be distinguished by their monitoring target identification information. For example, when the monitoring target is a person, the monitoring target identification information may be the person's facial features; when the monitoring target is an object with a fixed shape, the monitoring target identification information may be the shape of the object; when the monitoring target is a pet, the monitoring target identification information may be obtained by scanning a two-dimensional code carried by the pet. Of course, the monitoring target identification information may also be image information of the monitoring target, or the like; this is not specifically limited in the embodiments of the present disclosure.
It should also be noted that different sensitive areas can be distinguished by their sensitive area information, which may be edge information of the sensitive area; the edge information may be the coordinates of the pixels that the edge of the sensitive area passes through in the video image. Of course, the sensitive area information may also take other forms, which is not specifically limited in the embodiments of the present disclosure.
In addition, when acquiring the tracking video of the monitoring target from stored historical video based on the monitoring target identification information, the server may use a specified tracking algorithm, which may be a particle swarm optimization algorithm, a Continuously Adaptive Mean-Shift (CamShift) algorithm, or the like; this is not specifically limited in the embodiments of the present disclosure. For the process of acquiring the tracking video through a specified tracking algorithm, reference may be made to the related art; it is not elaborated here.
Furthermore, when performing feature extraction on the tracking images of the monitoring target, the server may use a specified feature extraction algorithm; for the process, reference may be made to the related art, and it is not elaborated here.
It should be noted that, before sending the setting information, the terminal may acquire the historical video, for example by sending a historical video acquisition request to the server so that the server returns the historical video. The terminal then plays the historical video and, during playback, determines the monitoring target identification information and the sensitive area information corresponding to the monitoring target based on video images of the historical video.
The terminal may determine the monitoring target identification information and the sensitive area information corresponding to the monitoring target as follows: when the terminal receives a first selection instruction based on a video image of the historical video, it determines the object selected by the first selection instruction as the monitoring target; when the terminal receives a second selection instruction based on a video image of the historical video, it determines the area selected by the second selection instruction as the sensitive area corresponding to the monitoring target; the terminal then acquires the monitoring target identification information of the monitoring target and the sensitive area information of the sensitive area. Alternatively, the terminal acquires a first area drawn in a video image of the historical video and a target object selected in the video image, the target object being an object included in a second area drawn in the video image, or an object selected by a selection operation detected in the video image; when a preset gesture operation is detected on at least one of the first area and the target object, the terminal determines the first area as the sensitive area corresponding to the monitoring target and the target object as the monitoring target; the terminal then acquires the monitoring target identification information of the monitoring target and the sensitive area information of the sensitive area.
The first selection instruction is used to select the monitoring target from the objects included in a video image of the historical video, and may be triggered by the user through a first specified operation, which may be a single-click operation, a double-click operation, or the like; this is not specifically limited in the embodiments of the present disclosure.
The second selection instruction is used to select the sensitive area corresponding to the monitoring target from the regions included in a video image of the historical video, and may be triggered by the user through a second specified operation, which may be a sliding operation or the like; this is not specifically limited in the embodiments of the present disclosure.
The first area and the second area are both closed or nearly closed regions; each may include one or more regions, and the first area may or may not include the second area; this is not specifically limited in the embodiments of the present disclosure.
The selection operation is used to select the target object from the objects included in the video image, may be triggered by the user, and may be a single-click operation, a double-click operation, or the like; this is not specifically limited in the embodiments of the present disclosure.
The preset gesture operation is used to confirm the monitoring target and the sensitive area, may be triggered by the user, and may be a cross-drawing operation, a check-mark operation, or the like; this is not specifically limited in the embodiments of the present disclosure.
For example, if the preset gesture operation is a cross-drawing operation, the user draws a first area on a video image of the historical video and draws a second area containing the target object on the video image; the user then draws a cross on the first area, on the second area, or on both. At that point the terminal may determine the first area as the sensitive area and the target object included in the second area as the monitoring target, after which the monitoring target identification of the monitoring target and the sensitive area identification of the sensitive area can be acquired.
As another example, if the selection operation is a single-click operation and the preset gesture operation is a cross-drawing operation, the user draws a first area on a video image of the historical video and clicks the target object in the video image; the user then draws a cross on the first area, on the target object, or on both. At that point the terminal may determine the first area as the sensitive area and the target object as the monitoring target, after which the monitoring target identification of the monitoring target and the sensitive area identification of the sensitive area can be acquired.
It should be noted that, in the embodiments of the present disclosure, the user can manually draw the first area on the video image, manually select the target object, and confirm the first area as the sensitive area and the target object as the monitoring target through the preset gesture operation, which determines the monitoring target and the sensitive area simply and intuitively and improves the efficiency with which the terminal determines them.
(3) When the matching degree between the features of the moving target and the features of the monitoring target is greater than a specified value, the server determines that the moving target is the monitoring target; otherwise, the server determines that the moving target is not the monitoring target.
When the matching degree between the features of the moving target and the features of the monitoring target is greater than the specified value, the difference between the two sets of features is small, that is, the moving target is very likely to be the monitoring target, so the server may determine that the moving target is the monitoring target. When the matching degree is less than or equal to the specified value, the difference between the two sets of features is large, that is, the moving target is unlikely to be the monitoring target, so the server may determine that the moving target is not the monitoring target.
It should be noted that the specified value may be set in advance; for example, it may be 0.7, 0.8, 0.9, or the like, which is not specifically limited in the embodiments of the present disclosure.
For example, if the specified value is 0.7 and the matching degree between the features of the moving target and the features of the monitoring target is 0.8, then since 0.8 > 0.7, the server may determine that the moving target is the monitoring target.
As another example, if the matching degree between the features of the moving target and the features of the monitoring target is 0.6, then since 0.6 < 0.7, the server may determine that the moving target is not the monitoring target.
其中,当该运动目标为监控目标时,服务器判断监控目标是否位于敏感区域的操作可以为:服务器对监控目标进行目标跟踪,得到监控目标当前所处的位置,进而基于监控目标当前所处的位置,判断监控目标是否位于敏感区域。。
其中,服务器对监控目标进行目标跟踪,得到监控目标当前所处的位置时,服务器可以对监控目标进行目标跟踪,得到监控目标的跟踪图像,并基于该跟踪图像和指定坐标系,确定监控目标当前所处的位置。
另外,指定坐标系可以预先建立,且指定坐标系可以基于智能摄像设备的监控区域建立,当然,指定坐标系也可以按照其他方式建立,如可以基于智能摄像设备的镜头建立,本公开实施例对此不做具体限定。
需要说明的是,服务器基于该跟踪图像和指定坐标系,确定监控目标当前所处的位置时,服务器可以基于该跟踪图像和指定坐标系,利用指定定位算法确定监控目标当前所处的位置。该指定定位算法可以预先设置,且该指定定位算法可以为区域生长算法、区域扩展算法等等,本公开实施例对此不做具体限定。而服务器基于该跟踪图像和指定坐标系,利用指定定位算法确定监控目标当前所处的位置的过程可以参考相关技术,本公开 实施例在此不进行详细阐述。
其中,服务器基于监控目标当前所处的位置,判断监控目标是否位于敏感区域时,服务器基于监控目标当前所处的位置,确定监控目标当前所处的目标区域,并判断监控目标当前所处的目标区域与敏感区域是否存在重叠区域,当监控目标当前所处的目标区域与敏感区域存在重叠区域时,服务器确定监控目标位于敏感区域,否则,服务器确定监控目标没有位于敏感区域。
需要说明的是,服务器判断监控目标当前所处的目标区域与敏感区域是否存在重叠区域的过程可以参考相关技术,本公开实施例在此不进行详细阐述。
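若将目标区域与敏感区域近似为轴对齐矩形,重叠区域的判断可示意如下(矩形以 (x1, y1, x2, y2) 表示,为示例性假设,实际区域形状可参考相关技术处理):

```python
def rects_overlap(a, b):
    """矩形以 (x1, y1, x2, y2) 表示, 判断两个轴对齐矩形是否存在重叠区域。"""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # 两矩形在 x 方向和 y 方向的投影均相交时, 才存在重叠区域
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

print(rects_overlap((0, 0, 2, 2), (1, 1, 3, 3)))  # 输出 True, 监控目标位于敏感区域
print(rects_overlap((0, 0, 1, 1), (2, 2, 3, 3)))  # 输出 False, 监控目标没有位于敏感区域
```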
第二种方式:服务器判断监控视频的敏感区域中是否存在运动目标;当该敏感区域中存在运动目标时,判断该运动目标是否为监控目标;当该运动目标为监控目标时,确定该监控视频的敏感区域中存在监控目标。
由于敏感区域一般是固定的,则此时为了判断该敏感区域中是否存在运动目标,可以对该敏感区域中的背景建立区域背景模型,从而可以将该敏感区域的区域图像和该区域背景模型进行比较,来确定该敏感区域中的前景图像,前景图像是指在假设背景为静止的情况下的任何有意义的运动物体的图像。
因此,服务器判断监控视频的敏感区域中是否存在运动目标的操作可以为:对于监控视频中的每帧视频图像,对敏感区域进行裁剪,得到敏感区域的区域图像,服务器获取该区域图像中每个像素点的像素值;基于该每个像素点的像素值和指定区域背景模型,判断该区域图像中是否存在前景像素点;当该区域图像中存在前景像素点时,确定该监控视频的敏感区域中存在运动目标,否则,确定该监控视频的敏感区域中不存在运动目标。
其中,指定区域背景模型用于表征敏感区域的区域图像中每个背景像素点的像素值在时域上的分布特征,且指定区域背景模型可以为混合高斯模型,当然,指定区域背景模型也可以为其他模型,本公开实施例对此不做具体限定。
另外,指定区域背景模型可以预先建立,如可以预先根据敏感区域的指定区域图像中每个像素点的像素值在时域上的分布情况,建立指定区域背景模型,当然,也可以以其它方式建立指定区域背景模型,本公开实施例同样对此不做具体限定。
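区域背景模型的前景判断可简化示意如下(此处以单高斯模型代替混合高斯模型,像素按一维列表表示,均为示例性假设):

```python
def foreground_pixels(region_pixels, bg_mean, bg_std, k=2.5):
    """像素值偏离背景均值超过 k 倍标准差时, 判为前景像素点, 返回其下标列表。"""
    return [i for i, (p, m, s) in enumerate(zip(region_pixels, bg_mean, bg_std))
            if abs(p - m) > k * s]

def has_moving_target(region_pixels, bg_mean, bg_std):
    """区域图像中存在前景像素点, 即认为敏感区域中存在运动目标。"""
    return len(foreground_pixels(region_pixels, bg_mean, bg_std)) > 0

print(has_moving_target([100, 101, 200], [100, 100, 100], [2, 2, 2]))  # 输出 True
```

混合高斯模型则为每个像素点维护多个高斯分量,并在每帧后按像素值更新各分量的参数与权重。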
需要说明的是,服务器基于区域图像中每个像素点的像素值和指定区域背景模型,判断该区域图像中是否存在前景像素点的过程与步骤402第一种方式的判断过程类似,本公开实施例对此不再赘述。
进一步地,服务器确定监控视频的敏感区域中不存在运动目标之后,服务器还可以基于该区域图像中每个像素点的像素值,更新指定区域背景模型。
其中,当该敏感区域中存在运动目标时,服务器判断该运动目标是否为监控目标的过程与步骤402第一种方式中判断过程类似,本公开实施例对此不再赘述。
需要说明的是,第二种方式中只需对敏感区域的区域图像进行检测,以确定该敏感区域中是否存在监控目标,从而有效避免了视频图像中除敏感区域外的其它区域的图像对检测结果的干扰,提高了检测效率和检测精度。
在步骤403中,当监控视频的敏感区域中存在监控目标时,服务器向终端发送报警信息,使终端进行报警。
当监控视频的敏感区域中存在监控目标时,为了预防不安全事件的发生,服务器可以向终端发送报警信息,该报警信息用于提醒用户监控目标位于敏感区域。
需要说明的是,终端可以通过有线网络或者无线网络与服务器进行连接。
另外,终端进行报警时,可以通过终端上设置的扬声器直接播放报警信息,当然,终端也可以通过其它方式进行报警,本公开实施例对此不做具体限定。
在本公开实施例中,服务器获取监控视频,并判断监控视频的敏感区域中是否存在监控目标,当该敏感区域中存在监控目标时,服务器向终端发送报警信息,使终端进行报警,从而预防不安全事件的发生。
图5是根据一示例性实施例示出的一种报警装置的框图。参照图5,该装置包括获取模块501,判断模块502,发送模块503。
获取模块501,配置为获取监控视频;
判断模块502,配置为判断监控视频的敏感区域中是否存在监控目标;
发送模块503,配置为当敏感区域中存在监控目标时,向终端发送报警信息,使终端进行报警。
在本公开的另一实施例中,参照图6,该判断模块502包括第一判断单元5021,监控目标识别单元5022,第二判断单元5023,第一确定单元5024。
第一判断单元5021,配置为判断监控视频中是否存在运动目标;
监控目标识别单元5022,配置为当监控视频中存在运动目标时,判断运动目标是否为监控目标;
第二判断单元5023,配置为当运动目标为监控目标时,判断监控目标是否位于敏感区域;
第一确定单元5024,配置为当监控目标位于敏感区域时,确定监控视频的敏感区域中存在监控目标。
在本公开的另一实施例中,参照图7,该判断模块502包括第三判断单元5025,监控目标识别单元5022,第二确定单元5026。
第三判断单元5025,配置为判断监控视频的敏感区域中是否存在运动目标;
监控目标识别单元5022,配置为当敏感区域中存在运动目标时,判断运动目标是否为监控目标;
第二确定单元5026,配置为当运动目标为监控目标时,确定监控视频的敏感区域中存在监控目标。
在本公开的另一实施例中,参照图8,该监控目标识别单元5022包括第一确定子单元50221,第二确定子单元50222,第三确定子单元50223。
第一确定子单元50221,配置为确定运动目标的特征;
第二确定子单元50222,配置为确定运动目标的特征与监控目标的特征之间的匹配度;
第三确定子单元50223,配置为当匹配度大于指定数值时,确定运动目标为监控目标。
在本公开的另一实施例中,该第一确定子单元50221,配置为:
当监控视频中存在运动目标时,在监控视频的视频图像中,对运动目标所在的区域进行裁剪,得到目标图像;
对目标图像进行特征提取,得到运动目标的特征。
在本公开的另一实施例中,参照图9,该监控目标识别单元5022还包括接收子单元50224,第一获取子单元50225,第二获取子单元50226,提取子单元50227。
接收子单元50224,配置为接收终端发送的设置信息,设置信息中携带监控目标标识信息;
第一获取子单元50225,配置为基于监控目标标识信息,从存储的历史视频中,获取监控目标的跟踪视频;
第二获取子单元50226,配置为从跟踪视频的每帧视频图像中,获取监控目标的跟踪图像;
提取子单元50227,配置为对监控目标的跟踪图像进行特征提取,得到监控目标的特征。
在本公开的另一实施例中,该设置信息中还携带监控目标对应的敏感区域信息,该敏感区域信息用于获取敏感区域。
在本公开的另一实施例中,参照图10,该第二判断单元5023包括跟踪子单元50231,判断子单元50232。
跟踪子单元50231,配置为当运动目标为监控目标时,对监控目标进行目标跟踪,得到监控目标当前所处的位置;
判断子单元50232,配置为基于监控目标当前所处的位置,判断监控目标是否位于敏感区域。
在本公开实施例中,服务器获取监控视频,并判断监控视频的敏感区域中是否存在监控目标,当敏感区域中存在监控目标时,服务器向终端发送报警信息,使终端进行报警,从而预防不安全事件的发生。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图11是根据一示例性实施例示出的一种报警装置的框图。参照图11,该装置包括第一发送模块1101,报警模块1102。
第一发送模块1101,配置为向服务器发送设置信息,设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使服务器获取监控视频,并在监控视频的敏感区域中存在监控目标时返回报警信息;
报警模块1102,配置为当接收到服务器返回的报警信息时,基于报警信息进行报警。
在本公开的另一实施例中,参照图12,该装置还包括获取模块1103,播放模块1104,确定模块1105。
获取模块1103,配置为获取历史视频;
播放模块1104,配置为播放历史视频;
确定模块1105,配置为在播放历史视频的过程中,基于历史视频的视频图像,确定监控目标标识信息和监控目标对应的敏感区域信息。
在本公开的另一实施例中,参照图13,该确定模块1105包括第一确定单元11051,第二确定单元11052,第一获取单元11053。
第一确定单元11051,配置为在播放历史视频的过程中,当基于历史视频的视频图像接收到第一选择指令时,将第一选择指令所选择的对象确定为监控目标;
第二确定单元11052,配置为当基于历史视频的视频图像接收到第二选择指令时,将第二选择指令所选择的区域确定为所述监控目标对应的敏感区域;
第一获取单元11053,配置为获取监控目标的监控目标标识信息,以及获取敏感区域的敏感区域信息。
在本公开的另一实施例中,参照图14,该确定模块1105包括第二获取单元11054,第三确定单元11055,第三获取单元11056。
第二获取单元11054,配置为获取在历史视频的视频图像中画出的第一区域和在视频图像中选择的目标对象,目标对象为在视频图像中画出的第二区域包括的对象,或者目标对象为在视频图像中检测到的选择操作所选择的对象;
第三确定单元11055,配置为当在第一区域和目标对象中的至少一个上检测到预设手势操作时,将第一区域确定为监控目标对应的敏感区域,以及将目标对象确定为监控目标;
第三获取单元11056,配置为获取监控目标的监控目标标识信息,以及获取敏感区域的敏感区域信息。
在本公开实施例中,终端向服务器发送设置信息,设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使服务器获取监控视频,并在监控视频的敏感区域中存在监控目标时返回报警信息,当终端接收到该报警信息时,终端可以进行报警,从而预防不安全事件的发生。
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。
图15是根据一示例性实施例示出的一种用于报警的装置1500的框图。例如,装置1500可以被提供为一服务器。参照图15,装置1500包括处理组件1522,其进一步包括一个或多个处理器,以及由存储器1532所代表的存储器资源,配置为存储可由处理组件1522执行的指令,例如应用程序。存储器1532中存储的应用程序可以包括一个或一个以上的模块,每一个模块对应于一组指令。
装置1500还可以包括一个被配置为执行装置1500的电源管理的电源组件1526、一个被配置为将装置1500连接到网络的有线或无线网络接口1550,以及一个输入输出(I/O)接口1558。装置1500可以基于存储在存储器1532中的操作系统进行操作,例如Windows Server™、Mac OS X™、Unix™、Linux™、FreeBSD™或类似系统。
此外,处理组件1522被配置为执行指令,以执行下述报警方法,所述方法包括:
获取监控视频。
判断监控视频的敏感区域中是否存在监控目标。
当敏感区域中存在监控目标时,向终端发送报警信息,使终端进行报警。
在本公开的另一实施例中,判断监控视频的敏感区域中是否存在监控目标,包括:
判断监控视频中是否存在运动目标;
当监控视频中存在运动目标时,判断运动目标是否为监控目标;
当运动目标为监控目标时,判断监控目标是否位于敏感区域;
当监控目标位于敏感区域时,确定监控视频的敏感区域中存在监控目标。
在本公开的另一实施例中,判断监控视频的敏感区域中是否存在监控目标,包括:
判断监控视频的敏感区域中是否存在运动目标;
当敏感区域中存在运动目标时,判断运动目标是否为监控目标;
当运动目标为监控目标时,确定监控视频的敏感区域中存在监控目标。
在本公开的另一实施例中,判断运动目标是否为监控目标,包括:
确定运动目标的特征;
确定运动目标的特征与监控目标的特征之间的匹配度;
当匹配度大于指定数值时,确定运动目标为监控目标。
在本公开的另一实施例中,确定运动目标的特征,包括:
在监控视频的视频图像中,对运动目标所在的区域进行裁剪,得到目标图像;
对目标图像进行特征提取,得到运动目标的特征。
在本公开的另一实施例中,确定运动目标的特征与监控目标的特征之间的匹配度之前,还包括:
接收终端发送的设置信息,设置信息中携带监控目标标识信息;
基于监控目标标识信息,从存储的历史视频中,获取监控目标的跟踪视频;
从跟踪视频的每帧视频图像中,获取监控目标的跟踪图像;
对监控目标的跟踪图像进行特征提取,得到监控目标的特征。
在本公开的另一实施例中,设置信息中还携带监控目标对应的敏感区域信息,该敏感区域信息用于获取敏感区域。
在本公开的另一实施例中,判断监控目标是否位于敏感区域,包括:
对监控目标进行目标跟踪,得到监控目标当前所处的位置;
基于监控目标当前所处的位置,判断监控目标是否位于敏感区域。
在本公开实施例中,服务器获取监控视频,并判断监控视频的敏感区域中是否存在监控目标,当敏感区域中存在监控目标时,服务器向终端发送报警信息,使终端进行报警,从而预防不安全事件的发生。
图16是根据一示例性实施例示出的一种用于报警的装置1600的框图。例如,装置1600可以是移动电话,计算机,数字广播终端,消息收发设备,平板设备,医疗设备,健身设备,个人数字助理等。
参照图16,装置1600可以包括以下一个或多个组件:处理组件1602,存储器1604,电源组件1606,多媒体组件1608,音频组件1610,输入/输出(I/O)接口1612,传感器组件1614,以及通信组件1616。
处理组件1602通常控制装置1600的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件1602可以包括一个或多个处理器1620来执行指令,以完成上述报警方法的全部或部分步骤。此外,处理组件1602可以包括一个或多个模块,便于处理组件1602和其他组件之间的交互。例如,处理组件1602可以包括多媒体模块,以方便多媒体组件1608和处理组件1602之间的交互。
存储器1604被配置为存储各种类型的数据以支持在装置1600的操作。这些数据的示例包括用于在装置1600上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器1604可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件1606为装置1600的各种组件提供电力。电源组件1606可以包括电源管理系统,一个或多个电源,及其他与为装置1600生成、管理和分配电源相关联的组件。
多媒体组件1608包括一个在所述装置1600和用户之间提供输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件1608包括一个前置摄像头和/或后置摄像头。当装置1600处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统,或者具有焦距和光学变焦能力。
音频组件1610被配置为输出和/或输入音频信号。例如,音频组件1610包括一个麦克风(MIC),当装置1600处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器1604或经由通信组件1616发送。在一些实施例中,音频组件1610还包括一个扬声器,配置为输出音频信号。
I/O接口1612为处理组件1602和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件1614包括一个或多个传感器,配置为为装置1600提供各个方面的状态评估。例如,传感器组件1614可以检测到装置1600的打开/关闭状态,组件的相对定位,例如所述组件为装置1600的显示器和小键盘,传感器组件1614还可以检测装置1600或装置1600一个组件的位置改变,用户与装置1600接触的存在或不存在,装置1600方位或加速/减速和装置1600的温度变化。传感器组件1614可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件1614还可以包括光传感器,如CMOS或CCD图像传感器,配置为在成像应用中使用。在一些实施例中,该传感器组件1614还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件1616被配置为便于装置1600和其他设备之间有线或无线方式的通信。装置1600可以接入基于通信标准的无线网络,如WiFi、2G或3G、或它们的组合。在一个示例性实施例中,通信组件1616经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件1616还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。
在示例性实施例中,装置1600可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,配置为执行上述报警方法。
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括指令的存储器1604,上述指令可由装置1600的处理器1620执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当所述存储介质中的指令由移动终端的处理器执行时,使得移动终端能够执行一种报警方法,所述方法包括:
向服务器发送设置信息,设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使服务器获取监控视频,并在监控视频的敏感区域中存在监控目标时返回报警信息;
当接收到服务器返回的报警信息时,基于报警信息进行报警。
在本公开的另一实施例中,向服务器发送设置信息之前,还包括:
获取历史视频,播放历史视频;
在播放历史视频的过程中,基于历史视频的视频图像,确定监控目标标识信息和监控目标对应的敏感区域信息。
在本公开的另一实施例中,基于历史视频的视频图像,确定监控目标标识信息和监控目标对应的敏感区域信息,包括:
当基于历史视频的视频图像接收到第一选择指令时,将第一选择指令所选择的对象确定为监控目标;
当基于历史视频的视频图像接收到第二选择指令时,将第二选择指令所选择的区域确定为监控目标对应的敏感区域;
获取监控目标的监控目标标识信息,以及获取敏感区域的敏感区域信息。
在本公开的另一实施例中,基于历史视频的视频图像,确定监控目标标识信息和监控目标对应的敏感区域信息,包括:
获取在历史视频的视频图像中画出的第一区域和在视频图像中选择的目标对象,目标对象为在视频图像中画出的第二区域包括的对象,或者目标对象为在视频图像中检测到的选择操作所选择的对象;
当在第一区域和目标对象中的至少一个上检测到预设手势操作时,将第一区域确定为监控目标对应的敏感区域,以及将目标对象确定为监控目标;
获取监控目标的监控目标标识信息,以及获取敏感区域的敏感区域信息。
在本公开实施例中,终端向服务器发送设置信息,设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使服务器获取监控视频,并在监控视频的敏感区域中存在监控目标时返回报警信息,当终端接收到该报警信息时,终端可以进行报警,从而预防不安全事件的发生。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本发明的其它实施方案。本申请旨在涵盖本发明的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本发明的一般性原理并包括本公开实施例未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本发明的真正范围和精神由下面的权利要求指出。
应当理解的是,本发明并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本发明的范围仅由所附的权利要求来限制。
工业实用性
在本公开实施例中,服务器获取监控视频,并判断监控视频的敏感区域中是否存在监控目标,当敏感区域中存在监控目标时,服务器向终端发送报警信息,使终端进行报警,从而预防不安全事件的发生。

Claims (26)

  1. 一种报警方法,所述方法包括:
    获取监控视频;
    判断所述监控视频的敏感区域中是否存在监控目标;
    当所述敏感区域中存在所述监控目标时,向终端发送报警信息,使所述终端进行报警。
  2. 如权利要求1所述的方法,其中,所述判断所述监控视频的敏感区域中是否存在监控目标,包括:
    判断所述监控视频中是否存在运动目标;
    当所述监控视频中存在所述运动目标时,判断所述运动目标是否为监控目标;
    当所述运动目标为所述监控目标时,判断所述监控目标是否位于敏感区域;
    当所述监控目标位于所述敏感区域时,确定所述监控视频的敏感区域中存在所述监控目标。
  3. 如权利要求1所述的方法,其中,所述判断所述监控视频的敏感区域中是否存在监控目标,包括:
    判断所述监控视频的敏感区域中是否存在运动目标;
    当所述敏感区域中存在所述运动目标时,判断所述运动目标是否为监控目标;
    当所述运动目标为所述监控目标时,确定所述监控视频的敏感区域中存在所述监控目标。
  4. 如权利要求2或3所述的方法,其中,所述判断所述运动目标是否为监控目标,包括:
    确定所述运动目标的特征;
    确定所述运动目标的特征与所述监控目标的特征之间的匹配度;
    当所述匹配度大于指定数值时,确定所述运动目标为所述监控目标。
  5. 如权利要求4所述的方法,其中,所述确定所述运动目标的特征,包括:
    在所述监控视频的视频图像中,对所述运动目标所在的区域进行裁剪,得到目标图像;
    对所述目标图像进行特征提取,得到所述运动目标的特征。
  6. 如权利要求4所述的方法,其中,所述确定所述运动目标的特征与所述监控目标的特征之间的匹配度之前,还包括:
    接收所述终端发送的设置信息,所述设置信息中携带监控目标标识信息;
    基于所述监控目标标识信息,从存储的历史视频中,获取所述监控目标的跟踪视频;
    从所述跟踪视频的每帧视频图像中,获取所述监控目标的跟踪图像;
    对所述监控目标的跟踪图像进行特征提取,得到所述监控目标的特征。
  7. 如权利要求6所述的方法,其中,所述设置信息中还携带所述监控目标对应的敏感区域信息,所述敏感区域信息用于获取敏感区域。
  8. 如权利要求2所述的方法,其中,所述判断所述监控目标是否位于敏感区域,包括:
    对所述监控目标进行目标跟踪,得到所述监控目标当前所处的位置;
    基于所述监控目标当前所处的位置,判断所述监控目标是否位于敏感区域。
  9. 一种报警方法,所述方法包括:
    向服务器发送设置信息,所述设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使所述服务器获取监控视频,并在所述监控视频的敏感区域中存在所述监控目标时返回报警信息;
    当接收到所述服务器返回的报警信息时,基于所述报警信息进行报警。
  10. 如权利要求9所述的方法,其中,所述向服务器发送设置信息之前,还包括:
    获取历史视频,播放所述历史视频;
    在播放所述历史视频的过程中,基于所述历史视频的视频图像,确定所述监控目标标识信息和所述监控目标对应的敏感区域信息。
  11. 如权利要求10所述的方法,其中,所述基于所述历史视频的视频图像,确定所述监控目标标识信息和所述监控目标对应的敏感区域信息,包括:
    当基于所述历史视频的视频图像接收到第一选择指令时,将所述第一选择指令所选择的对象确定为所述监控目标;
    当基于所述历史视频的视频图像接收到第二选择指令时,将所述第二选择指令所选择的区域确定为所述监控目标对应的敏感区域;
    获取所述监控目标的监控目标标识信息,以及获取所述敏感区域的敏感区域信息。
  12. 如权利要求10所述的方法,其中,所述基于所述历史视频的视频图像,确定所述监控目标标识信息和所述监控目标对应的敏感区域信息,包括:
    获取在所述历史视频的视频图像中画出的第一区域和在所述视频图像中选择的目标对象,所述目标对象为在所述视频图像中画出的第二区域包括的对象,或者所述目标对象为在所述视频图像中检测到的选择操作所选择的对象;
    当在所述第一区域和所述目标对象中的至少一个上检测到预设手势操作时,将所述第一区域确定为所述监控目标对应的敏感区域,以及将所述目标对象确定为所述监控目标;
    获取所述监控目标的监控目标标识信息,以及获取所述敏感区域的敏感区域信息。
  13. 一种报警装置,所述装置包括:
    获取模块,配置为获取监控视频;
    判断模块,配置为判断所述监控视频的敏感区域中是否存在监控目标;
    发送模块,配置为当所述敏感区域中存在所述监控目标时,向终端发送报警信息,使所述终端进行报警。
  14. 如权利要求13所述的装置,其中,所述判断模块包括:
    第一判断单元,配置为判断所述监控视频中是否存在运动目标;
    监控目标识别单元,配置为当所述监控视频中存在所述运动目标时,判断所述运动目标是否为监控目标;
    第二判断单元,配置为当所述运动目标为所述监控目标时,判断所述监控目标是否位于敏感区域;
    第一确定单元,配置为当所述监控目标位于所述敏感区域时,确定所述监控视频的敏感区域中存在所述监控目标。
  15. 如权利要求13所述的装置,其中,所述判断模块包括:
    第三判断单元,配置为判断所述监控视频的敏感区域中是否存在运动目标;
    监控目标识别单元,配置为当所述敏感区域中存在所述运动目标时,判断所述运动目标是否为监控目标;
    第二确定单元,配置为当所述运动目标为所述监控目标时,确定所述监控视频的敏感区域中存在所述监控目标。
  16. 如权利要求14或15所述的装置,其中,所述监控目标识别单元包括:
    第一确定子单元,配置为确定所述运动目标的特征;
    第二确定子单元,配置为确定所述运动目标的特征与所述监控目标的特征之间的匹配度;
    第三确定子单元,配置为当所述匹配度大于指定数值时,确定所述运动目标为所述监控目标。
  17. 如权利要求16所述的装置,其中,
    所述第一确定子单元,配置为:
    当所述监控视频中存在所述运动目标时,在所述监控视频的视频图像中,对所述运动目标所在的区域进行裁剪,得到目标图像;
    对所述目标图像进行特征提取,得到所述运动目标的特征。
  18. 如权利要求16所述的装置,其中,所述监控目标识别单元还包括:
    接收子单元,配置为接收所述终端发送的设置信息,所述设置信息中携带监控目标标识信息;
    第一获取子单元,配置为基于所述监控目标标识信息,从存储的历史视频中,获取所述监控目标的跟踪视频;
    第二获取子单元,配置为从所述跟踪视频的每帧视频图像中,获取所述监控目标的跟踪图像;
    提取子单元,配置为对所述监控目标的跟踪图像进行特征提取,得到所述监控目标的特征。
  19. 如权利要求18所述的装置,其中,所述设置信息中还携带所述监控目标对应的敏感区域信息,所述敏感区域信息用于获取敏感区域。
  20. 如权利要求14所述的装置,其中,所述第二判断单元包括:
    跟踪子单元,配置为当所述运动目标为所述监控目标时,对所述监控目标进行目标跟踪,得到所述监控目标当前所处的位置;
    判断子单元,配置为基于所述监控目标当前所处的位置,判断所述监控目标是否位于敏感区域。
  21. 一种报警装置,所述装置包括:
    第一发送模块,配置为向服务器发送设置信息,所述设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使所述服务器获取监控视频,并在所述监控视频的敏感区域中存在所述监控目标时返回报警信息;
    报警模块,配置为当接收到所述服务器返回的报警信息时,基于所述报警信息进行报警。
  22. 如权利要求21所述的装置,其中,所述装置还包括:
    获取模块,配置为获取历史视频;
    播放模块,配置为播放所述历史视频;
    确定模块,配置为在播放所述历史视频的过程中,基于所述历史视频的视频图像,确定所述监控目标标识信息和所述监控目标对应的敏感区域信息。
  23. 如权利要求22所述的装置,其中,所述确定模块包括:
    第一确定单元,配置为在播放所述历史视频的过程中,当基于所述历史视频的视频图像接收到第一选择指令时,将所述第一选择指令所选择的对象确定为所述监控目标;
    第二确定单元,配置为当基于所述历史视频的视频图像接收到第二选择指令时,将所述第二选择指令所选择的区域确定为所述监控目标对应的敏感区域;
    第一获取单元,配置为获取所述监控目标的监控目标标识信息,以及获取所述敏感区域的敏感区域信息。
  24. 如权利要求22所述的装置,其中,所述确定模块包括:
    第二获取单元,配置为获取在所述历史视频的视频图像中画出的第一区域和在所述视频图像中选择的目标对象,所述目标对象为在所述视频图像中画出的第二区域包括的对象,或者所述目标对象为在所述视频图像中检测到的选择操作所选择的对象;
    第三确定单元,配置为当在所述第一区域和所述目标对象中的至少一个上检测到预设手势操作时,将所述第一区域确定为所述监控目标对应的敏感区域,以及将所述目标对象确定为所述监控目标;
    第三获取单元,配置为获取所述监控目标的监控目标标识信息,以及获取所述敏感区域的敏感区域信息。
  25. 一种报警装置,所述装置包括:
    处理器;
    配置为存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    获取监控视频;
    判断所述监控视频的敏感区域中是否存在监控目标;
    当所述敏感区域中存在所述监控目标时,向终端发送报警信息,使所述终端进行报警。
  26. 一种报警装置,所述装置包括:
    处理器;
    配置为存储处理器可执行指令的存储器;
    其中,所述处理器被配置为:
    向服务器发送设置信息,所述设置信息中携带监控目标标识信息和监控目标对应的敏感区域信息,使所述服务器获取监控视频,并在所述监控视频的敏感区域中存在所述监控目标时返回报警信息;
    当接收到所述服务器返回的报警信息时,基于所述报警信息进行报警。
PCT/CN2015/099586 2015-10-28 2015-12-29 报警方法及装置 WO2017071085A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2016549719A JP2017538978A (ja) 2015-10-28 2015-12-29 警報方法および装置
KR1020167021748A KR101852284B1 (ko) 2015-10-28 2015-12-29 경보 방법 및 장치
MX2016005066A MX360586B (es) 2015-10-28 2015-12-29 Método y dispositivo de alarma.
RU2016117967A RU2648214C1 (ru) 2015-10-28 2015-12-29 Способ сигнализации и сигнализирующее устройство

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510713143.1 2015-10-28
CN201510713143.1A CN105279898A (zh) 2015-10-28 2015-10-28 报警方法及装置

Publications (1)

Publication Number Publication Date
WO2017071085A1 true WO2017071085A1 (zh) 2017-05-04

Family

ID=55148833

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099586 WO2017071085A1 (zh) 2015-10-28 2015-12-29 报警方法及装置

Country Status (8)

Country Link
US (1) US9953506B2 (zh)
EP (1) EP3163498B1 (zh)
JP (1) JP2017538978A (zh)
KR (1) KR101852284B1 (zh)
CN (1) CN105279898A (zh)
MX (1) MX360586B (zh)
RU (1) RU2648214C1 (zh)
WO (1) WO2017071085A1 (zh)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106027931B (zh) * 2016-04-14 2018-03-16 平安科技(深圳)有限公司 视频录制方法及服务器
CN105847763B (zh) * 2016-05-19 2019-04-16 北京小米移动软件有限公司 监控方法及装置
WO2018083738A1 (ja) 2016-11-01 2018-05-11 三菱電機株式会社 情報処理装置、報知システム、情報処理方法及びプログラム
CN107657687A (zh) * 2017-08-25 2018-02-02 深圳市盛路物联通讯技术有限公司 一种基于图像处理技术的门禁管理方法及门禁管理服务器
CN107644479A (zh) * 2017-08-25 2018-01-30 深圳市盛路物联通讯技术有限公司 一种基于智能摄像头的安全门禁方法及智能终端
CN107707872A (zh) * 2017-08-30 2018-02-16 深圳市盛路物联通讯技术有限公司 一种基于图像处理技术的监控方法及相关设备
CN107846578A (zh) * 2017-10-30 2018-03-27 北京小米移动软件有限公司 信息订阅方法及装置
CN108038872B (zh) * 2017-12-22 2021-08-31 中国海洋大学 一种基于动静态目标检测与实时压缩感知追踪研究方法
CN108540777A (zh) * 2018-04-28 2018-09-14 上海与德科技有限公司 一种智能监控方法、装置、设备和存储介质
CN108809990B (zh) * 2018-06-14 2021-06-29 北京中飞艾维航空科技有限公司 一种众包数据安全加密方法、服务器及存储介质
CN108834066A (zh) * 2018-06-27 2018-11-16 三星电子(中国)研发中心 用于生成信息的方法和装置
CN109040774B (zh) * 2018-07-24 2021-10-26 成都优地技术有限公司 一种节目信息提取方法、终端设备、服务器及存储介质
CN110874905A (zh) * 2018-08-31 2020-03-10 杭州海康威视数字技术股份有限公司 监控方法及装置
CN110895861B (zh) * 2018-09-13 2022-03-08 杭州海康威视数字技术股份有限公司 异常行为预警方法、装置、监控设备和存储介质
KR102093477B1 (ko) 2018-10-26 2020-03-25 오토아이티(주) 이종 카메라 기반의 위험지역 안전관리 방법 및 장치
KR102085168B1 (ko) 2018-10-26 2020-03-04 오토아이티(주) 인체추적 기반 위험지역 안전관리 방법 및 장치
CN111260885B (zh) * 2018-11-30 2021-08-06 百度在线网络技术(北京)有限公司 跟踪游泳的方法、装置、存储介质和终端设备
CN109839614B (zh) * 2018-12-29 2020-11-06 深圳市天彦通信股份有限公司 固定式采集设备的定位系统及方法
CN109816906B (zh) * 2019-01-03 2022-07-08 深圳壹账通智能科技有限公司 安保监控方法及装置、电子设备、存储介质
CN109922310B (zh) * 2019-01-24 2020-11-17 北京明略软件系统有限公司 目标对象的监控方法、装置及系统
CN109886999B (zh) * 2019-01-24 2020-10-02 北京明略软件系统有限公司 位置确定方法、装置、存储介质和处理器
CN109919966A (zh) * 2019-01-24 2019-06-21 北京明略软件系统有限公司 区域确定方法、装置、存储介质和处理器
US10964187B2 (en) 2019-01-29 2021-03-30 Pool Knight, Llc Smart surveillance system for swimming pools
CN110009900A (zh) * 2019-03-12 2019-07-12 浙江吉利汽车研究院有限公司 一种车辆监控方法及系统
CN110490037A (zh) * 2019-05-30 2019-11-22 福建知鱼科技有限公司 一种人像识别系统
CN112489338B (zh) * 2019-09-11 2023-03-14 杭州海康威视数字技术股份有限公司 一种报警方法、系统、装置、设备及存储介质
CN110889334A (zh) * 2019-11-06 2020-03-17 江河瑞通(北京)技术有限公司 人员闯入识别方法及装置
CN110969115B (zh) * 2019-11-28 2023-04-07 深圳市商汤科技有限公司 行人事件的检测方法及装置、电子设备和存储介质
CN110942578A (zh) * 2019-11-29 2020-03-31 韦达信息技术(深圳)有限公司 智能分析防盗报警系统
CN110991550B (zh) * 2019-12-13 2023-10-17 歌尔科技有限公司 一种视频监控方法、装置、电子设备及存储介质
CN113473076B (zh) * 2020-07-21 2023-03-14 青岛海信电子产业控股股份有限公司 社区报警方法及服务器
CN113473085A (zh) * 2021-07-01 2021-10-01 成都市达岸信息技术有限公司 一种基于人工智能技术的视频监控识别系统
CN114170710A (zh) * 2021-12-10 2022-03-11 杭州萤石软件有限公司 基于智能锁设备的人员检测方法、装置、系统及设备
CN115345907B (zh) * 2022-10-18 2022-12-30 广东电网有限责任公司中山供电局 一种基于边缘计算的目标动态跟踪装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6618074B1 (en) * 1997-08-01 2003-09-09 Wells Fargo Alarm Systems, Inc. Central alarm computer for video security system
CN101252680A (zh) * 2008-04-14 2008-08-27 中兴通讯股份有限公司 一种采用不同监控精度进行监控的方法及终端
CN102158689A (zh) * 2011-05-17 2011-08-17 无锡中星微电子有限公司 视频监控系统及方法
CN103714648A (zh) * 2013-12-06 2014-04-09 乐视致新电子科技(天津)有限公司 一种监控预警方法和设备
CN104933827A (zh) * 2015-06-11 2015-09-23 广东欧珀移动通信有限公司 一种基于旋转摄像头的报警方法及终端

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4511886A (en) * 1983-06-01 1985-04-16 Micron International, Ltd. Electronic security and surveillance system
JP2528789B2 (ja) * 1985-06-26 1996-08-28 中央電子 株式会社 映像情報管理装置
US4814869A (en) * 1987-04-27 1989-03-21 Oliver Jr Robert C Video surveillance system
US4992866A (en) * 1989-06-29 1991-02-12 Morgan Jack B Camera selection and positioning system and method
KR920010745B1 (ko) * 1989-11-21 1992-12-14 주식회사 금성사 부재중 비상사태 원격감시시스템 및 화상 송,수신 처리방법
US20060083305A1 (en) * 2004-10-15 2006-04-20 James Dougherty Distributed motion detection event processing
RU2271577C1 (ru) * 2005-04-18 2006-03-10 Общество с ограниченной ответственностью "АЛЬТОНИКА" (ООО "АЛЬТОНИКА") Устройство охранной сигнализации для противодействия угрозам личной безопасности
US20070237358A1 (en) * 2006-04-11 2007-10-11 Wei-Nan William Tseng Surveillance system with dynamic recording resolution and object tracking
JP4631806B2 (ja) * 2006-06-05 2011-02-16 日本電気株式会社 物体検出装置、物体検出方法および物体検出プログラム
GB0709329D0 (en) * 2007-05-15 2007-06-20 Ipsotek Ltd Data processing apparatus
JP4356774B2 (ja) * 2007-06-06 2009-11-04 ソニー株式会社 情報処理装置、映像再生方法、プログラム、および映像再生システム
JP2009055447A (ja) * 2007-08-28 2009-03-12 Toshiba Corp 映像検索システム及び映像検索装置
JP2010003177A (ja) * 2008-06-20 2010-01-07 Secom Co Ltd 画像処理装置
JP5371083B2 (ja) * 2008-09-16 2013-12-18 Kddi株式会社 顔識別特徴量登録装置、顔識別特徴量登録方法、顔識別特徴量登録プログラム及び記録媒体
US8749347B1 (en) * 2009-01-29 2014-06-10 Bank Of America Corporation Authorized custodian verification
US10282563B2 (en) * 2009-02-06 2019-05-07 Tobii Ab Video-based privacy supporting system
CN101872524B (zh) * 2009-08-14 2012-07-18 杭州海康威视数字技术股份有限公司 基于虚拟墙的视频监控方法、系统及装置
JP5388829B2 (ja) * 2009-12-11 2014-01-15 セコム株式会社 侵入物体検知装置
US20110275045A1 (en) * 2010-01-22 2011-11-10 Foerster Bhupathi International, L.L.C. Video Overlay Sports Motion Analysis
CN101840422A (zh) * 2010-04-09 2010-09-22 江苏东大金智建筑智能化系统工程有限公司 基于目标特征和报警行为的智能视频检索系统和方法
WO2012106075A1 (en) * 2011-02-05 2012-08-09 Wifislam, Inc. Method and apparatus for mobile location determination
KR101207197B1 (ko) * 2011-08-30 2012-12-03 주식회사 아이디스 가상 감시 영역 설정을 통한 디지털 영상 감시 장치 및 방법
KR101394242B1 (ko) * 2011-09-23 2014-05-27 광주과학기술원 영상 감시 장치 및 영상 감시 방법
US20130091213A1 (en) * 2011-10-08 2013-04-11 Broadcom Corporation Management of social device interaction with social network infrastructure
CN103188474A (zh) * 2011-12-30 2013-07-03 中兴通讯股份有限公司 一种视频智能分析系统及其监控录像的存储和播放方法
US9106789B1 (en) * 2012-01-20 2015-08-11 Tech Friends, Inc. Videoconference and video visitation security
US20130204408A1 (en) * 2012-02-06 2013-08-08 Honeywell International Inc. System for controlling home automation system using body movements
US20140258117A1 (en) * 2012-03-26 2014-09-11 Daniel Holland Methods and systems for handling currency
US8781293B2 (en) * 2012-08-20 2014-07-15 Gorilla Technology Inc. Correction method for object linking across video sequences in a multiple camera video surveillance system
CN102868875B (zh) 2012-09-24 2015-11-18 天津市亚安科技股份有限公司 多方向监控区域预警定位自动跟踪监控装置
JP6080501B2 (ja) * 2012-11-05 2017-02-15 大和ハウス工業株式会社 監視システム
CN104143078B (zh) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 活体人脸识别方法、装置和设备
KR101380628B1 (ko) * 2013-10-18 2014-04-02 브이씨에이 테크놀러지 엘티디 복수의 카메라를 사용한 객체 추적 방법 및 장치
US9519853B2 (en) * 2013-11-01 2016-12-13 James P Tolle Wearable, non-visible identification device for friendly force identification and intruder detection
CN104636709B (zh) 2013-11-12 2018-10-02 中国移动通信集团公司 一种定位监控目标的方法及装置
CN103607569B (zh) 2013-11-22 2017-05-17 广东威创视讯科技股份有限公司 视频监控中的监控目标跟踪方法和系统
KR102197098B1 (ko) * 2014-02-07 2020-12-30 삼성전자주식회사 콘텐츠 추천 방법 및 장치
JP2015176198A (ja) * 2014-03-13 2015-10-05 大和ハウス工業株式会社 監視システム
CN104980719A (zh) * 2014-04-03 2015-10-14 索尼公司 图像处理方法、装置以及电子设备
US9819910B2 (en) * 2014-06-20 2017-11-14 Bao Tran Smart system powered by light socket
US20160180239A1 (en) * 2014-12-17 2016-06-23 Cloudtalk Llc Motion detection and recognition employing contextual awareness
CN104767911A (zh) * 2015-04-28 2015-07-08 腾讯科技(深圳)有限公司 图像处理方法及装置
US10275672B2 (en) * 2015-04-29 2019-04-30 Beijing Kuangshi Technology Co., Ltd. Method and apparatus for authenticating liveness face, and computer program product thereof
US10146797B2 (en) * 2015-05-29 2018-12-04 Accenture Global Services Limited Face recognition image data cache

Also Published As

Publication number Publication date
EP3163498A2 (en) 2017-05-03
US20170124833A1 (en) 2017-05-04
EP3163498A3 (en) 2017-07-26
US9953506B2 (en) 2018-04-24
MX2016005066A (es) 2017-08-09
KR101852284B1 (ko) 2018-04-25
EP3163498B1 (en) 2020-02-05
CN105279898A (zh) 2016-01-27
RU2648214C1 (ru) 2018-03-22
MX360586B (es) 2018-11-08
JP2017538978A (ja) 2017-12-28
