CN113706807B - Method, device, equipment and storage medium for sending alarm information


Info

Publication number
CN113706807B
Authority
CN
China
Prior art keywords
moving object
video frame
time point
video
preset
Prior art date
Legal status
Active
Application number
CN202010432607.2A
Other languages
Chinese (zh)
Other versions
CN113706807A
Inventor
周明
庄志兵
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010432607.2A
Publication of CN113706807A
Application granted
Publication of CN113706807B
Legal status: Active

Classifications

    • G08B13/19602: Burglar, theft or intruder alarms using television cameras; image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19654: Burglar, theft or intruder alarms using television cameras; details concerning communication with a camera
    • G08B13/19665: Burglar, theft or intruder alarms using television cameras; details related to the storage of video surveillance data
    • G06T7/20: Image analysis; analysis of motion
    • G06T7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • H04N5/76: Television signal recording
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T2207/10016: Image acquisition modality; video; image sequence
    • G06T2207/20081: Special algorithmic details; training; learning
    • G06T2207/30232: Subject of image; surveillance

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for sending alarm information, belonging to the field of internet technologies. The method comprises the following steps: acquiring video frames in a target video based on a preset detection period; determining whether a moving object exists in the currently acquired video frame, and if a moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, wherein the moving object detection result is used for indicating whether the video frame includes a moving object of a preset object type; and if the moving object detection result is that a moving object of the preset object type exists, sending alarm information. With the method and apparatus, an alarm can be raised for a designated type of moving object.

Description

Method, device, equipment and storage medium for sending alarm information
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a storage medium for sending alarm information.
Background
With the development of image recognition technology, more and more monitoring devices include a motion detection function; that is, a monitoring device can analyze video frames in the captured video and detect whether a moving object exists in each frame. When a moving object is detected in the video, alarm information can be sent to a terminal connected to the monitoring device to remind the user to confirm whether the moving object in the video frame is dangerous or unlawful.
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
when motion is detected in a video frame, that is, when the picture of the current video frame differs from the picture of the previous video frame, it is determined that a moving object appears in the video, for example an animal, a person, or an automobile, and corresponding alarm information is generated, so the user's terminal receives a large amount of alarm information. When the user searches for footage of a moving object of interest, the user has to search the video according to the sending times of this large amount of alarm information, so video retrieval is inefficient.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for sending alarm information, which enable alarms for a designated type of moving object. The technical solution is as follows:
in a first aspect, a method for sending alarm information is provided, the method including:
acquiring video frames in a target video based on a preset detection period;
determining whether a moving object exists in a currently acquired video frame, and if the moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, wherein the moving object detection result is used for indicating whether a moving object of a preset object type is included in the video frame;
and if the moving object detection result is that a moving object of the preset object type exists, sending alarm information.
Optionally, the determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and the trained object detection model includes:
inputting a currently acquired video frame into a trained object detection model to obtain a model output value, wherein the model output value is used for indicating whether the input video frame comprises a moving object or not and indicating the type of the moving object when the input video frame comprises the moving object;
and determining whether the currently acquired video frame comprises a moving object of a preset object type or not based on the model output value, so as to obtain a moving object detection result of the video frame.
Optionally, the determining whether a moving object exists in the currently acquired video frame, and if a moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and the trained object detection model includes:
determining whether a moving object exists in an image of a currently acquired video frame in a preset area, and if the moving object exists in the image of the currently acquired video frame in the preset area, determining a moving object detection result of the image of the currently acquired video frame in the preset area based on the currently acquired video frame and a trained object detection model.
Optionally, the method further includes:
when the moving object detection result of a first video frame is that a moving object of a preset object type exists and the moving object detection result of a video frame acquired before the first video frame is that a moving object of the preset object type does not exist, determining a first starting time point according to a time point corresponding to the first video frame, and when the moving object detection result of a second video frame is that a moving object of the preset object type does not exist and the moving object detection result of a video frame acquired before the second video frame is that a moving object of the preset object type exists, determining a first ending time point according to a time point corresponding to the second video frame;
when a moving object segment playback instruction is received, playing a video segment between the first start time point and the first end time point in the target video.
Optionally, the determining a first start time point according to a time point corresponding to a first video frame and determining a first end time point according to a time point corresponding to a second video frame includes:
and determining a time point which is a first preset time length before the time point corresponding to the first video frame as the first starting time point, and determining a time point which is a second preset time length after the time point corresponding to the second video frame as the first ending time point.
Optionally, the method further includes:
determining object information of a moving object in each video frame acquired between a first starting time point and a first ending time point, wherein the object information comprises an object type and/or a movement type, and the movement type comprises entering a target area or leaving the target area;
and adding the first starting time point, the first ending time point and the object information of the moving object to the corresponding relation of the starting time point, the ending time point and the object information.
Optionally, the method further includes:
when a fragment query instruction is received, acquiring object information corresponding to the fragment query instruction;
determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the object information corresponding to the fragment query instruction;
playing the video segment between the at least one pair of start and end time points.
Optionally, when the moving object detection result of the second video frame is that no moving object of the preset object type exists and the moving object detection result of the video frame acquired before the second video frame is that a moving object of the preset object type exists, after determining the first end time point according to the time point corresponding to the second video frame, the method further includes:
when the moving object detection result of a third video frame within a third preset time length after the first end time point is that a moving object of the preset object type exists and the moving object detection result of the video frame acquired before the third video frame is that no moving object of the preset object type exists, determining a second start time point according to the time point corresponding to the third video frame; and when the moving object detection result of a fourth video frame is that no moving object of the preset object type exists and the moving object detection result of the video frame acquired before the fourth video frame is that a moving object of the preset object type exists, determining a second end time point according to the time point corresponding to the fourth video frame.
Optionally, after playing a video segment between the first start time point and the first end time point in the target video when receiving a moving object segment playback instruction, the method further includes:
playing the video clip between the first end time point and the second start time point in a double-speed playing mode;
playing the video segment between the second start time point and the second end time point.
Optionally, the method further includes:
when an associated video playback instruction for the moving object segment is received, playing the video segment located between the first start time point and the first end time point in the video associated with the target video.
In a second aspect, there is provided an apparatus for sending alarm information, the apparatus comprising:
the acquisition module is used for acquiring video frames in the target video based on a preset detection period;
the first determining module is used for determining whether a moving object exists in a currently acquired video frame, and if a moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, wherein the moving object detection result is used for indicating whether a moving object of a preset object type is included in the video frame;
and the alarm module is used for sending alarm information if the moving object detection result is that a moving object of the preset object type exists.
Optionally, the first determining module is configured to:
inputting a currently acquired video frame into a trained object detection model to obtain a model output value, wherein the model output value is used for indicating whether the input video frame comprises a moving object or not and indicating the object type of the moving object when the input video frame comprises the moving object;
and determining whether the currently acquired video frame comprises a moving object of a preset object type or not based on the model output value, so as to obtain a moving object detection result of the video frame.
Optionally, the first determining module is configured to:
determining whether a moving object exists in an image of a currently acquired video frame in a preset area, and if the moving object exists in the image of the currently acquired video frame in the preset area, determining a moving object detection result of the image of the currently acquired video frame in the preset area based on the currently acquired video frame and a trained object detection model.
Optionally, the apparatus further comprises a playback module, configured to:
when the moving object detection result of a first video frame is that a moving object of a preset object type exists and the moving object detection result of a video frame acquired before the first video frame is that a moving object of the preset object type does not exist, determining a first starting time point according to a time point corresponding to the first video frame, and when the moving object detection result of a second video frame is that a moving object of the preset object type does not exist and the moving object detection result of a video frame acquired before the second video frame is that a moving object of the preset object type exists, determining a first ending time point according to a time point corresponding to the second video frame;
when a moving object segment playback instruction is received, playing a video segment between the first start time point and the first end time point in the target video.
Optionally, the playback module is configured to:
and determining a time point which is a first preset time length before the time point corresponding to the first video frame as the first starting time point, and determining a time point which is a second preset time length after the time point corresponding to the second video frame as the first ending time point.
Optionally, the apparatus further includes a second determining module, configured to:
determining object information of a moving object in each video frame acquired between a first starting time point and a first ending time point, wherein the object information comprises an object type and/or a movement type, and the movement type comprises entering a target area or leaving the target area;
and adding the first starting time point, the first ending time point and the object information of the moving object to the corresponding relation of the starting time point, the ending time point and the object information.
Optionally, the apparatus further includes a query module, configured to:
when a fragment query instruction is received, acquiring object information corresponding to the fragment query instruction;
determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the object information corresponding to the fragment query instruction;
playing the video segment between the at least one pair of start and end time points.
Optionally, the apparatus further includes a third determining module, configured to:
when the moving object detection result of a third video frame within a third preset time length after the first end time point is that a moving object of the preset object type exists and the moving object detection result of the video frame acquired before the third video frame is that no moving object of the preset object type exists, determining a second start time point according to the time point corresponding to the third video frame; and when the moving object detection result of a fourth video frame is that no moving object of the preset object type exists and the moving object detection result of the video frame acquired before the fourth video frame is that a moving object of the preset object type exists, determining a second end time point according to the time point corresponding to the fourth video frame.
Optionally, the apparatus further comprises a second playback module, configured to:
playing a video clip between the first end time point and the second start time point in a double-speed playing mode;
playing the video segment between the second start time point and the second end time point.
Optionally, the apparatus further includes a playing module, configured to:
when an associated video playback instruction for the moving object segment is received, playing the video segment located between the first start time point and the first end time point in the video associated with the target video.
In a third aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded by the processor and executed to implement the operations performed by the method for sending alarm information as described above.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for sending alarm information as described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the method comprises the steps of detecting a moving object in a video frame in a target video, determining whether the video frame comprises the moving object of a preset object type, and sending out corresponding alarm information if the video frame comprises the moving object of the preset object type. By the method and the device, the designated mobile object can be alarmed, the alarm information received by the terminal can be reduced, a subsequent user can check the corresponding video clip through the alarm information of the designated mobile object, and the efficiency of playing the video record can be improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those of ordinary skill in the art based on these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for sending alarm information according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a method for sending alarm information according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an alarm-sending device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video storage device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The method for sending alarm information provided by the embodiments of the application can be applied to a video monitoring system and can be implemented jointly by a camera device and a terminal, or jointly by a camera device, a terminal, and a video storage device.
The camera device can capture images or videos, may be provided with a processor and storage to process and store the captured images, and can connect to a terminal or a video storage device over the internet, a local area network, or the like to exchange data with the terminal or the video storage device.
The terminal can run a video management application and can play and play back videos captured by the camera device. The terminal is provided with a display screen, a processor, storage, and other components; it can process and display images, has a communication function, can access the internet or a local area network, and can exchange data with the camera device and the video storage device. The terminal may be a smartphone, a tablet, a desktop computer, a notebook, or the like.
The video storage device is provided with a processor, can process and store images, can access the internet or a local area network, and can exchange data with the camera device and the terminal.
When the method for sending alarm information is implemented jointly by the camera device and the terminal, two cases can be distinguished according to whether the camera device has image processing capability. When the camera device does not have image processing capability, it can send the captured image data to the terminal, and the terminal processes the received image data to implement the method for sending alarm information. When the camera device has image processing capability, it can process the captured image data itself and send the processing result to the terminal, thereby implementing the method for sending alarm information.
When the method for sending alarm information provided by this embodiment is implemented jointly by the camera device, the terminal, and the video storage device, as shown in fig. 1, which is a schematic diagram of an implementation environment provided by an embodiment of the present application, the camera device can send the captured image data to the video storage device, and the video storage device can process the received image data to obtain a processing result. After obtaining the processing result, the video storage device can send it to the terminal, thereby implementing the method for sending alarm information.
Fig. 2 is a flowchart of a method for sending alarm information according to an embodiment of the present application. Referring to fig. 2, the embodiment includes:
step 201, acquiring a video frame in a target video based on a preset detection period.
In an implementation, the target video may be a video captured by the camera device. The camera device may transmit the captured video to the terminal, and the terminal performs moving object detection on video frames in the received video according to a preset detection period; for example, moving object detection may be performed every 10 frames (see the sketch after this paragraph). In addition, the user can select the executing device that performs moving object detection on video frames through the video management application running on the terminal. A device selection page is provided in the application, with a terminal option, a camera device option, a video storage device option, and the like, and the user can select the executing device according to the actual situation. For example, when the camera device has image processing capability, such as an IPC (IP Camera), the user may click the camera device option so that moving object detection on video frames is completed by the camera device; or, when a video storage device such as an NVR (Network Video Recorder) is provided in the user's video monitoring system, the camera device may send the captured video to the video storage device, and the user may click the storage device option so that moving object detection on video frames is completed by the video storage device.
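As a minimal, non-authoritative sketch of this periodic sampling (the frame source, the period of 10 frames, and the on_frame callback are all illustrative assumptions, not part of the patent):

```python
# Hypothetical sketch of step 201: sample frames from a video on a preset
# detection period (here, every 10th frame). All names are illustrative.
import cv2  # assumes OpenCV is available

DETECTION_PERIOD = 10  # run detection on every 10th frame (assumed value)

def sample_frames(video_path, on_frame):
    """Read a video and invoke on_frame(frame_index, frame) once per period."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        if index % DETECTION_PERIOD == 0:
            on_frame(index, frame)
        index += 1
    capture.release()
```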
Step 202, determining whether a moving object exists in the currently acquired video frame, and if the moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and the trained object detection model.
The moving object detection result is used for indicating whether the video frame comprises a moving object of a preset object type or not.
In an implementation, after the current video frame is acquired, whether it is a motion frame can be detected, that is, whether a moving object exists in it. For example, an image change ratio may be determined by comparing the current video frame with the previously acquired video frame: the area occupied by the non-overlapping portions of the two frames is determined, and the ratio of that area to the area of the current video frame is computed. If the image change ratio exceeds a preset threshold, the currently acquired video frame can be considered to contain a moving object; a sketch of this test follows. Alternatively, whether a moving object exists in the currently acquired video frame may be determined in another manner, which is not limited here. If a moving object exists in the currently acquired video frame, the frame can be input into the trained object detection model to determine its moving object detection result.
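A minimal sketch of such a change-ratio test, assuming OpenCV and illustrative threshold values (the patent does not fix any of these parameters):

```python
# Hypothetical sketch of the "image change ratio" test: compare the current
# frame with the previous one and measure how much of the picture differs.
import cv2
import numpy as np

CHANGE_THRESHOLD = 0.02   # fraction of changed pixels that counts as motion
PIXEL_DIFF_MIN = 25       # per-pixel intensity difference counted as changed

def has_moving_object(prev_frame, curr_frame):
    """Return True if the non-overlapping portion exceeds the threshold."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)
    changed = np.count_nonzero(diff > PIXEL_DIFF_MIN)
    ratio = changed / diff.size  # proportion of the picture that changed
    return ratio > CHANGE_THRESHOLD
```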
The process of determining the moving object detection result of the currently acquired video frame by the trained object detection model may be as follows: inputting a currently acquired video frame into a trained object detection model to obtain a model output value, wherein the model output value is used for indicating whether the input video frame comprises a moving object or not and indicating the object type of the moving object when the input video frame comprises the moving object; and determining whether the currently acquired video frame comprises a moving object of a preset object type or not based on the model output value to obtain a moving object detection result of the video frame.
In practice, the trained object detection model (hereinafter simply the object detection model) has object recognition capability and can recognize the object type to which a moving object in a video frame belongs; for example, the object type may be a person, an animal, a vehicle, and the like. After the currently acquired video frame is input to the object detection model, the model may output a numerical value corresponding to the object type of the moving object in the frame, for example 0 for unrecognizable, 1 for a person, 2 for a vehicle, and so on. If the value output by the object detection model corresponds to the preset object type, it can be determined that the currently acquired video frame includes a moving object of the preset object type. If the value does not correspond to the preset object type, it can be determined that the currently acquired video frame does not include a moving object of the preset object type. When the model outputs 0, the moving object may be unidentifiable because of factors such as the environment or the image resolution, or motion may have been falsely detected because of a moving object the user does not care about, for example leaves drifting in the frame, or because of factors such as lighting. A sketch of the value-to-result mapping follows.
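A minimal sketch of mapping the model output value to a detection result, using the numbering from the example above (0 for unrecognizable, 1 for a person, 2 for a vehicle); the value for animals and the function names are assumptions:

```python
# Hypothetical mapping from the model output value to a moving object
# detection result (3 for "animal" is an added assumption).
OUTPUT_TO_TYPE = {0: None, 1: "person", 2: "vehicle", 3: "animal"}

def detection_result(model_output_value, preset_types):
    """Return True if the frame contains a moving object of a preset type."""
    object_type = OUTPUT_TO_TYPE.get(model_output_value)
    return object_type is not None and object_type in preset_types

# e.g. detection_result(2, {"person", "vehicle"}) -> True
```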
The preset object type can be set by the user, and the object types can include people, vehicles, animals, and the like. A monitoring object selection page is provided in the video management application, and the user can select the object type to be monitored (i.e., the preset object type) on this page. If the device that performs moving object detection is the camera device, the terminal may send configuration information carrying the preset object type to the camera device, and the camera device detects moving objects of the preset object type.
Optionally, in order to improve the detection accuracy, the user may set a detection area for detecting a moving object in the video frame, and the corresponding process is as follows: determining whether a moving object exists in an image of a currently acquired video frame in a preset area, and if the moving object exists in the image of the currently acquired video frame in the preset area, determining a moving object detection result of the image of the currently acquired video frame in the preset area based on the currently acquired video frame and a trained object detection model.
In an implementation, a detection area setting page is provided in the video management application; the video picture currently captured by the camera device can be displayed on this page, and the user can mark out, on the displayed picture, at least one detection area (i.e., preset area) in which moving objects need to be detected, where each detection area may be an area enclosed by a polygon. The user can mark a number of points on the video picture; the video management application obtains the positions of these marked points in the picture and determines the detection area information of the polygon whose vertices are the marked points. As shown in fig. 3, the user can set the detection area so that moving object detection is performed only on the image inside it, which prevents moving objects in non-detection areas from influencing the detection. For example, a detection area may be set around a doorway or a road in the video picture, while non-detection areas may correspond to the sky, objects blocking the camera device, and so on. If the device currently performing moving object detection is the camera device or the video storage device, the terminal may send the detection area information to that device. If a moving object exists in the preset area of the currently acquired video frame, the image in the preset area can be input into the object detection model, which outputs the type of the moving object in that image, yielding the moving object detection result of the currently acquired video frame. A sketch of masking a frame to the detection area follows.
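A minimal sketch of restricting detection to such a polygonal preset area, assuming OpenCV; the marked points become the polygon vertices and everything outside is masked out:

```python
# Hypothetical sketch: zero out pixels outside the user-marked polygon so
# that motion elsewhere in the picture is ignored.
import cv2
import numpy as np

def mask_to_detection_area(frame, polygon_points):
    """polygon_points: [(x, y), ...] vertices marked by the user."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    vertices = np.array(polygon_points, dtype=np.int32)
    cv2.fillPoly(mask, [vertices], 255)  # fill the detection polygon
    return cv2.bitwise_and(frame, frame, mask=mask)
```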
And 203, if the detection result of the moving object is that the moving object of the preset object type exists, sending alarm information.
When it is detected that a moving object of the preset object type exists in the currently acquired video frame, alarm information can be sent to the terminal. If the device that performs moving object detection is the video storage device, the terminal may send configuration information carrying the preset object type to the video storage device; the video storage device receives the video frames sent by the camera device and detects moving objects of the preset object type. When it is detected that a moving object of the preset object type exists in the currently received video frame, alarm information may be sent to the terminal, where the alarm information may include the preset object type and the time point corresponding to the currently acquired video frame. The alarm information can be displayed on the terminal's screen as a notification, announced by voice broadcast, or signaled by a siren sound from the terminal; the presentation of the alarm information is not limited here. In addition, to avoid excessively frequent alarms when video frames containing the moving object are detected repeatedly, for example when people or vehicles keep crossing the scene, no further alarm information needs to be sent to the terminal if a video frame containing a moving object of the preset object type is detected again within a preset time length after the alarm information is sent; the sketch below illustrates this suppression rule.
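A minimal sketch of this suppression rule; the 60-second window and the print-based alarm channel are illustrative assumptions:

```python
# Hypothetical sketch of alarm suppression: after an alarm is sent, further
# detections within a preset time length do not trigger another alarm.
SUPPRESS_SECONDS = 60.0  # assumed "preset time length"

class AlarmSender:
    def __init__(self):
        self._last_alarm_time = None

    def on_detection(self, object_type, frame_time):
        """frame_time: time point of the current frame, in seconds."""
        if (self._last_alarm_time is not None
                and frame_time - self._last_alarm_time < SUPPRESS_SECONDS):
            return  # inside the suppression window: no new alarm
        self._last_alarm_time = frame_time
        self.send_alarm(object_type, frame_time)

    def send_alarm(self, object_type, frame_time):
        # placeholder: a real system would notify the terminal here
        print(f"ALARM: {object_type} detected at t={frame_time:.1f}s")
```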
Optionally, after detecting that a moving object of the preset object type exists in the video frame, a time period in which the moving object of the preset object type appears may also be determined, and the corresponding processing is as follows: when the moving object detection result of the first video frame is that a moving object of a preset object type exists and the moving object detection result of a video frame acquired before the first video frame is that a moving object of the preset object type does not exist, determining a first starting time point according to a time point corresponding to the first video frame, and when the moving object detection result of the second video frame is that a moving object of the preset object type does not exist and the moving object detection result of a video frame acquired before the second video frame is that a moving object of the preset object type exists, determining a first ending time point according to a time point corresponding to the second video frame.
In an implementation, during moving object detection on video frames of the target video, if a moving object of the preset object type exists in the currently detected video frame and no moving object of the preset object type exists in the previously detected video frame, it can be considered that a moving object of the preset object type appears in the target video at the time point corresponding to the currently detected frame. This frame may be called the first video frame, and its corresponding time point may be used to determine the first start time point. Moving object detection then continues on subsequent video frames; since the appearance of a moving object generally lasts for a period of time, when a detected video frame after the first start time point contains no moving object of the preset object type while the previously detected frame does, it can be considered that the moving object stops moving or leaves the picture at the time point corresponding to that frame. That frame may be called the second video frame, and its corresponding time point may be used to determine the first end time point. The first start time point and the first end time point delimit the video segment of the target video in which a moving object of the preset object type appears (the sketch after this paragraph derives such segments, with optional padding by the first and second preset time lengths). If the terminal or the video storage device stores the recordings of all time periods captured by the camera device (i.e., the full target video), the first start time point and the first end time point may be stored after they are determined, for subsequent playback of the video segment in which a moving object of the preset object type is present. If the terminal or the video storage device stores only the video segments in which a moving object of the preset object type exists, the video segment between the first start time point and the first end time point may be saved after the two time points are determined.
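A minimal sketch of deriving such start and end time points from per-frame detection results, including the optional padding by the first and second preset time lengths (the padding values are assumptions):

```python
# Hypothetical sketch: a segment opens on a no->yes transition of the
# detection result and closes on a yes->no transition.
PAD_BEFORE = 2.0  # first preset time length, seconds (assumed)
PAD_AFTER = 2.0   # second preset time length, seconds (assumed)

def find_segments(results):
    """results: list of (time_point, has_preset_object) pairs in time order."""
    segments, start, prev = [], None, False
    for time_point, present in results:
        if present and not prev:
            start = max(time_point - PAD_BEFORE, 0.0)  # first start time point
        elif not present and prev and start is not None:
            segments.append((start, time_point + PAD_AFTER))  # first end point
            start = None
        prev = present
    return segments
```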
Correspondingly, video segment information can be generated from the first start time point and the first end time point and stored in the terminal in correspondence with the alarm information, so that the user can later play back the video segment through the alarm information. The corresponding processing is as follows: when a moving object segment playback instruction is received, the video segment between the first start time point and the first end time point in the target video is played.
In an implementation, a video playback function is provided in the video management application. The user can view generated alarm information in the application; each piece of alarm information corresponds to a video playback option, and the user can play back the moving object video segment corresponding to the alarm information by clicking that option. When the moving object video segment is stored in the terminal, the terminal can obtain the video segment information corresponding to the alarm information and play, in the target video, the segment between the first start time point and the first end time point in the video segment information. When the video segment is stored in the video storage device, the terminal may send a video acquisition instruction carrying the first start time point and the first end time point to the video storage device, and the video storage device sends the corresponding moving object video segment to the terminal according to the received instruction, so that the terminal can play it back.
Optionally, after detecting that a moving object exists in the video, in addition to identifying the object type to which the moving object belongs, a moving type corresponding to the moving object may also be determined, and the corresponding processing is as follows: determining object information of a moving object in each video frame acquired between a first starting time point and a first ending time point, wherein the object information comprises an object type and/or a movement type, and the movement type comprises entering a target area or leaving the target area; and adding the first starting time point, the first ending time point and the object information of the moving object to the corresponding relation between the starting time point, the ending time point and the object information.
In an implementation, when a moving object is detected in the video, the video frame corresponding to the first start time point and the frames after it may be recognized to determine the object information of the moving object, which includes an object type and/or a movement type. The object type is the type to which the moving object belongs, for example a vehicle, a person, or an animal. When the object type is a person, face recognition can be performed on the person in the video frame to determine the person's identity; when the object type is a vehicle, license plate recognition can be performed to determine the vehicle's plate number, and so on. The movement type includes the moving object entering or leaving a target area, where the target area may be the area captured by the camera device or an area preset by the user in the video picture. After a moving object is detected in the video, its position change information may be determined based on the detection results of the video frames between the first start time point and the first end time point, and whether the moving object enters or leaves the target area may be determined from the position change information and the target area. When the position of the moving object moves from outside the target area to inside it, it may be determined that the moving object enters the target area; for example, the target area may be the gate of a warehouse. When the position moves from inside the target area to outside it, it may be determined that the moving object leaves the target area. The target area may also be set as a target boundary, and whether the moving object crosses the boundary may be determined from its position change information and the boundary. After the object information is determined, the first start time point, the first end time point, and the object information of the moving object may be added to the correspondence of start time points, end time points, and object information; that is, the first start time point and the first end time point are stored in correspondence with the object information of the moving object, generating target event information for subsequent queries on the moving object video segment (a sketch of such a correspondence follows). If the terminal or the video storage device stores only the video segments in which a moving object exists, the video segment between the first start time point and the first end time point may also be stored in correspondence with the target event information. It should be noted that if the camera device performs the moving object detection on video frames, it may send the target event information to the terminal or the video storage device after obtaining it.
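A minimal sketch of such a correspondence, modeling each (start time point, end time point, object information) entry as one target event record; the field names are assumptions:

```python
# Hypothetical sketch of the correspondence of start time points, end time
# points, and object information; each entry is one "target event" record.
from dataclasses import dataclass, field

@dataclass
class TargetEvent:
    start: float           # start time point, in seconds
    end: float             # end time point, in seconds
    object_type: str       # e.g. "person", "vehicle", "animal"
    movement_type: str     # e.g. "enter_target_area" or "leave_target_area"

@dataclass
class EventStore:
    events: list = field(default_factory=list)

    def add(self, start, end, object_type, movement_type):
        self.events.append(TargetEvent(start, end, object_type, movement_type))
```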
Correspondingly, a video query function is provided in the video management application, and the user can query videos according to the target event information: when a fragment query instruction is received, the object information corresponding to the fragment query instruction is obtained; at least one pair of start time point and end time point corresponding to the fragment query instruction is determined based on the correspondence and the object information corresponding to the fragment query instruction; and the video segments between the at least one pair of start and end time points are played.
In an implementation, the video management application is provided with a video query function. The user can select the object information to query, for example that the type of the queried moving object is a vehicle, or that the movement type is leaving the target area; in addition, the user can set a time range for the query. When the target video is stored in the terminal, the user can click the query option after setting the object information and the time range to be queried, triggering a fragment query instruction that includes the queried object information and time range. When the terminal detects the fragment query instruction, it can look up, according to the correspondence of start time points, end time points, and object information, at least one piece of target event information corresponding to the object information in the fragment query instruction, determine at least one pair of start and end time points from the at least one piece of target event information, and then play the corresponding video segments. When the video segments are stored in the video storage device, the terminal can send the fragment query instruction to the video storage device, which looks up at least one pair of start and end time points corresponding to the object information in the instruction according to the correspondence and then sends the corresponding video segments to the terminal, which plays them. When multiple pairs of start and end time points are found, that is, when multiple video segments match the query, the terminal may display an image for each video segment; the image may be any video frame of the queried segment, for example its first frame or a frame at its midpoint, which is not limited here. The user can play a video segment by clicking its image. A sketch of such a query follows.
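A minimal sketch of such a fragment query over the target event records from the EventStore sketch above; the filter parameters are assumptions:

```python
# Hypothetical sketch: filter stored target events by object information and
# an optional time range, returning matching (start, end) pairs.
def query_segments(store, object_type=None, movement_type=None,
                   time_range=None):
    """Return (start, end) pairs whose object information matches the query."""
    pairs = []
    for event in store.events:
        if object_type is not None and event.object_type != object_type:
            continue
        if movement_type is not None and event.movement_type != movement_type:
            continue
        if time_range is not None and not (
                time_range[0] <= event.start <= time_range[1]):
            continue
        pairs.append((event.start, event.end))
    return pairs

# e.g. query_segments(store, object_type="vehicle", time_range=(0.0, 3600.0))
```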
Optionally, to make it easier for the user to view moving object segments, a plurality of moving object segments that are close in time may be treated as one moving object segment, with the corresponding processing as follows: when the moving object detection result of a third video frame within a third preset time length after the first end time point is that a moving object of the preset object type exists and the moving object detection result of the video frame acquired before the third video frame is that no moving object of the preset object type exists, determining a second start time point according to the time point corresponding to the third video frame; and when the moving object detection result of a fourth video frame is that no moving object of the preset object type exists and the moving object detection result of the video frame acquired before the fourth video frame is that a moving object of the preset object type exists, determining a second end time point according to the time point corresponding to the fourth video frame.
In an implementation, during moving object detection on video frames of the target video, a video frame in which a moving object of the preset object type exists may be detected again within the third preset time length after the first end time point. That frame may be called the third video frame, and the time point corresponding to it may be called the second start time point. Moving object detection then continues on the video frames of the target video; when a detected video frame after the second start time point contains no moving object of the preset object type while the previously detected frame does, that frame may be called the fourth video frame, and the time point corresponding to it may be called the second end time point. Because the two segments are close in time, they can be handled as a single moving object segment; that is, video segment information can be generated from the first start time point, the first end time point, the second start time point, and the second end time point, and stored in correspondence with the alarm information generated at the first start time point (a sketch of this merging follows). In addition, the moving objects in the segment between the first start and end time points and in the segment between the second start and end time points can be recognized, so that the object information and target event information of the two segments are obtained for subsequent playback and queries. It should be noted that when a video frame containing the moving object is detected again within the third preset time length after the second end time point, a third start time point and a third end time point can be determined by the same process, and so on for a fourth start time point and a fourth end time point; details are not repeated here.
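A minimal sketch of this merging rule; the gap value standing in for the third preset time length is an assumption:

```python
# Hypothetical sketch: two moving-object segments are treated as one when the
# second starts within the merge gap after the first ends.
MERGE_GAP = 30.0  # third preset time length, seconds (assumed)

def merge_close_segments(segments):
    """segments: time-ordered list of (start, end); returns merged list."""
    merged = []
    for start, end in segments:
        if merged and start - merged[-1][1] <= MERGE_GAP:
            merged[-1] = (merged[-1][0], end)  # fold into previous segment
        else:
            merged.append((start, end))
    return merged
```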
Correspondingly, to make it easier for the user to view the moving object segments, after the moving object segment playback instruction is received, the portions in which no moving object of the preset object type exists can be played at double speed, with the corresponding processing as follows: playing the video segment between the first end time point and the second start time point in a double-speed playing mode; and playing the video segment between the second start time point and the second end time point.
In an implementation, since no moving object is present between the first end time point and the second start time point, the terminal may, during playback, play the video segment between the first start time point and the first end time point at the normal playing speed, play the video segment between the first end time point and the second start time point in the double-speed playing mode, and play the video segment between the second start time point and the second end time point at the normal playing speed. When further start and end time points exist in the target video, such as a third start time point, a third end time point, and a fourth start time point, the video segments between each adjacent start time point and end time point can be played at the normal playing speed in chronological order, and the video segments between each adjacent end time point and the next start time point can be played in the double-speed playing mode. The sketch below builds such a schedule.
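A minimal sketch of building such a playback schedule, where the gaps between object segments get a double-speed rate; the speed values are assumptions:

```python
# Hypothetical sketch: object segments play at normal speed (1.0) and the
# gaps between them play at an assumed double speed (2.0).
def playback_schedule(segments):
    """segments: time-ordered (start, end) pairs; returns (start, end, speed)."""
    plan, cursor = [], None
    for start, end in segments:
        if cursor is not None and start > cursor:
            plan.append((cursor, start, 2.0))  # gap: double-speed playback
        plan.append((start, end, 1.0))         # object segment: normal speed
        cursor = end
    return plan
```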
Optionally, the user may view other associated videos corresponding to the time period in which the moving object of the preset type appears, according to the target event information: when an associated video playback instruction for the moving object segment is received, the video segment located between the first start time point and the first end time point in the video associated with the target video is played.
In an implementation, a video monitoring system generally includes a plurality of image capturing devices, each monitoring a different area; within the same video monitoring system, the videos captured by different image capturing devices may be associated videos, so the user can view the videos of other image capturing devices through the target event information corresponding to one of them. The video management application program can provide an associated video playback function: at the terminal, the user searches the stored target event information by setting a time period and the object type and/or movement type of the moving object, and then clicks the associated video playback option corresponding to a piece of target event information to trigger an associated video playback instruction for the moving object segment. When the terminal detects this instruction, it can play the video segment between the first start time point and the first end time point in the video associated with the target video, according to the first start time point and the first end time point recorded in the target event information. In addition, the user can also specify a particular image capturing device and play the video segment between the first start time point and the first end time point captured by that device. A sketch of this lookup follows.
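The following sketch illustrates the event lookup just described. It assumes target event information is stored as dictionaries with start/end times and an object type, and that a play_clip(camera_id, start, end) helper exists elsewhere; both are illustrative assumptions rather than interfaces defined in this application.

    def find_events(events, window_start, window_end, object_type=None):
        """Return stored events overlapping the query window, optionally
        filtered by the object type carried in the query."""
        hits = []
        for ev in events:
            overlaps = ev["start"] < window_end and ev["end"] > window_start
            if overlaps and (object_type is None or ev["object_type"] == object_type):
                hits.append(ev)
        return hits

    def play_associated(event, associated_camera_ids, play_clip):
        # Replay the same time span on every camera associated with the
        # target video (or on a single user-specified camera).
        for cam in associated_camera_ids:
            play_clip(cam, event["start"], event["end"])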
According to the method and the device of the present application, moving object detection is performed on the video frames in the target video to determine whether a video frame includes a moving object of a preset object type, and if so, corresponding alarm information is sent. Alarms can thus be raised for specific moving objects, and the user can later locate the corresponding video segment directly through the alarm information for a specific moving object, instead of searching the video records around the sending times of a large number of alarms, which improves the efficiency of playing back video records.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 4 shows a device for sending alarm information according to an embodiment of the present application. As shown in fig. 4, the device includes:
an obtaining module 410, configured to obtain a video frame in a target video based on a preset detection period;
a first determining module 420, configured to determine whether a moving object exists in a currently acquired video frame, and if a moving object exists in the currently acquired video frame, determine a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, where the moving object detection result is used to indicate whether a moving object of a preset object type is included in the video frame;
an alarm module 430, configured to send alarm information if the moving object detection result indicates that a moving object of a preset object type exists.
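The following is a minimal sketch of how the three modules above might cooperate. get_frame, has_motion, run_model, and send_alarm are illustrative stand-ins for components this application leaves unspecified, and the one-second period is only an example.

    import time

    def monitor(get_frame, has_motion, run_model, send_alarm, period_s=1.0):
        prev = get_frame()
        while True:
            time.sleep(period_s)                     # preset detection period
            cur = get_frame()
            if has_motion(prev, cur):                # cheap motion pre-check first
                has_preset, types = run_model(cur)   # trained object detection model
                if has_preset:                       # moving object of a preset type
                    send_alarm(types)
            prev = cur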
Optionally, the first determining module 420 is configured to:
inputting a currently acquired video frame into a trained object detection model to obtain a model output value, wherein the model output value is used for indicating whether the input video frame comprises a moving object or not and indicating the object type of the moving object when the input video frame comprises the moving object;
and determining whether the currently acquired video frame comprises a moving object of a preset object type or not based on the model output value, so as to obtain a moving object detection result of the video frame.
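A minimal sketch of turning a model output value into the moving object detection result, assuming the output is a per-class confidence score; the class set and the 0.5 threshold are illustrative assumptions.

    PRESET_TYPES = {"person", "vehicle"}   # preset object types of interest
    THRESHOLD = 0.5

    def detection_result(class_scores):
        """class_scores: dict mapping object type -> confidence in [0, 1].
        Returns (preset-type moving object present, detected types)."""
        detected = {t for t, s in class_scores.items()
                    if s >= THRESHOLD and t in PRESET_TYPES}
        return bool(detected), detected

    # A frame scored {"person": 0.92, "cat": 0.40} yields (True, {"person"}).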
Optionally, the first determining module 420 is configured to:
determining whether a moving object exists in an image of a currently acquired video frame in a preset area, and if the moving object exists in the image of the currently acquired video frame in the preset area, determining a moving object detection result of the image of the currently acquired video frame in the preset area based on the currently acquired video frame and a trained object detection model.
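A sketch of restricting the motion pre-check to the preset area, assuming frames are H x W x 3 numpy arrays and the area is an axis-aligned rectangle; simple frame differencing stands in for the motion check, which this application does not pin down.

    import numpy as np

    def roi_has_motion(prev_frame, cur_frame, roi, diff_threshold=25, min_pixels=50):
        """roi: (x, y, w, h) rectangle of the preset area."""
        x, y, w, h = roi
        prev_patch = prev_frame[y:y + h, x:x + w].astype(np.int16)
        cur_patch = cur_frame[y:y + h, x:x + w].astype(np.int16)
        # A pixel counts as changed if any channel moved by more than the
        # threshold; enough changed pixels means motion in the preset area.
        changed = np.abs(cur_patch - prev_patch).max(axis=-1) > diff_threshold
        return int(changed.sum()) >= min_pixels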
Optionally, the apparatus further comprises a playback module, configured to:
when the moving object detection result of a first video frame is that a moving object of a preset object type exists and the moving object detection result of a previously acquired video frame of the first video frame is that a moving object of the preset object type does not exist, determining a first starting time point according to a time point corresponding to the first video frame, and when the moving object detection result of a second video frame is that a moving object of the preset object type does not exist and the moving object detection result of a previously acquired video frame of the second video frame is that a moving object of the preset object type exists, determining a first ending time point according to a time point corresponding to the second video frame;
when a moving object segment playback instruction is received, playing a video segment between the first start time point and the first end time point in the target video.
Optionally, the playback module is configured to:
and determining a time point which is a first preset time length before the time point corresponding to the first video frame as the first starting time point, and determining a time point which is a second preset time length after the time point corresponding to the second video frame as the first ending time point.
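A one-function sketch of this padding rule; the 5-second paddings stand in for the first and second preset durations and are illustrative.

    def clip_bounds(t_first_frame, t_second_frame, pre_pad_s=5.0, post_pad_s=5.0):
        # Start the clip a first preset duration before the first frame with
        # the object; end it a second preset duration after the first frame
        # without it. Clamp the start so it never precedes the video start.
        first_start = max(0.0, t_first_frame - pre_pad_s)
        first_end = t_second_frame + post_pad_s
        return first_start, first_end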
Optionally, the apparatus further includes a second determining module, configured to:
determining object information of a moving object in each video frame acquired between a first starting time point and a first ending time point, wherein the object information comprises an object type and/or a movement type, and the movement type comprises entering a target area or leaving the target area;
and adding the first starting time point, the first ending time point and the object information of the moving object to the corresponding relation between the starting time point, the ending time point and the object information.
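A minimal sketch of this correspondence, kept as an in-memory list; a real system would persist it alongside the video, and all names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class SegmentRecord:
        start: float                                       # start time point
        end: float                                         # end time point
        object_types: set = field(default_factory=set)     # e.g. {"person"}
        movement_types: set = field(default_factory=set)   # {"enter", "leave"}

    correspondence = []   # start/end time points mapped to object information

    def add_record(start, end, object_types, movement_types):
        correspondence.append(
            SegmentRecord(start, end, set(object_types), set(movement_types)))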
Optionally, the apparatus further includes a query module, configured to:
when a fragment query instruction is received, acquiring object information corresponding to the fragment query instruction;
determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the object information corresponding to the fragment query instruction;
playing the video segment between the at least one pair of start time points and end time points.
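A sketch of this query flow, reusing the SegmentRecord list from the previous sketch; play_clip is an illustrative playback callback.

    def query_segments(records, object_type=None, movement_type=None):
        pairs = []
        for r in records:
            if object_type and object_type not in r.object_types:
                continue
            if movement_type and movement_type not in r.movement_types:
                continue
            pairs.append((r.start, r.end))   # one (start, end) pair per match
        return pairs

    def play_query(records, play_clip, object_type=None, movement_type=None):
        for start, end in query_segments(records, object_type, movement_type):
            play_clip(start, end)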
Optionally, the apparatus further includes a third determining module, configured to:
and when the moving object detection result of a third video frame within a third preset duration after the first end time point indicates that a moving object of a preset object type exists and the moving object detection result of the previously acquired video frame of the third video frame indicates that no moving object of the preset object type exists, determining a second start time point according to the time point corresponding to the third video frame; and when the moving object detection result of a fourth video frame indicates that no moving object of the preset object type exists and the moving object detection result of the previously acquired video frame of the fourth video frame indicates that a moving object of the preset object type exists, determining a second end time point according to the time point corresponding to the fourth video frame.
Optionally, the apparatus further comprises a second playback module, configured to:
playing the video clip between the first end time point and the second start time point in a double-speed playing mode;
playing the video segment between the second start time point and the second end time point.
Optionally, the apparatus further includes a playing module, configured to:
when receiving a video playback instruction associated with a moving object segment, playing a video segment located between the first start time point and the first end time point in the video associated with the target video.
It should be noted that: in the above embodiment, when the device for sending alarm information sends alarm information, only the division of the above functional modules is used for illustration, and in practical applications, the above function allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the device for sending alarm information and the method for sending alarm information provided by the above embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which is not described herein again.
Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content required to be displayed on a display screen. In some embodiments, processor 501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the method of issuing alert information provided by method embodiments herein.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch screen display 505, camera 506, audio circuitry 507, positioning components 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, it also has the ability to capture touch signals on or over its surface; a touch signal may be input to the processor 501 as a control signal for processing. The display screen 505 may then also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, disposed on the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, respectively disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display screen 505 may be a flexible display disposed on a curved or folded surface of the terminal 500. The display screen 505 may even be arranged as a non-rectangular irregular figure, that is, an irregularly shaped screen. The display screen 505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, camera assembly 506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used for positioning the current geographic location of the terminal 500 for navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 509 is used to power the various components in the terminal 500. The power supply 509 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast charge technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the touch screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side bezel of the terminal 500 and/or an underlying layer of the touch display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a holding signal of the terminal 500 by the user can be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 514 may be provided on the front, back, or side of the terminal 500. When a physical button or a vendor Logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the vendor Logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 505 is turned down. In another embodiment, processor 501 may also dynamically adjust the shooting parameters of camera head assembly 506 based on the ambient light intensity collected by optical sensor 515.
A proximity sensor 516, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 500. The proximity sensor 516 is used to collect the distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the bright-screen state to the off-screen state; when the proximity sensor 516 detects that the distance gradually increases, the processor 501 controls the touch display screen 505 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 6 is a schematic structural diagram of a video storage device according to an embodiment of the present application. The video storage device 600 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where the memory 602 stores at least one instruction that is loaded and executed by the processor 601 to implement the methods provided by the foregoing method embodiments. Certainly, the video storage device may further include a wired or wireless network interface, a keyboard, an input/output interface, and other components to facilitate input and output, as well as other components for implementing the device functions, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, is also provided that includes instructions executable by a processor in a terminal to perform the method of issuing alert information in the above-described embodiments. The computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method for sending alarm information, the method comprising:
acquiring video frames in a target video based on a preset detection period;
determining whether a moving object exists in a currently acquired video frame, and if the moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, wherein the moving object detection result is used for indicating whether a moving object of a preset object type is included in the video frame;
if the moving object detection result indicates that a moving object of a preset object type exists, sending alarm information;
when the moving object detection result of a first video frame is that a moving object of a preset object type exists and the moving object detection result of a previously acquired video frame of the first video frame is that a moving object of the preset object type does not exist, determining a first starting time point according to a time point corresponding to the first video frame, and when the moving object detection result of a second video frame is that a moving object of the preset object type does not exist and the moving object detection result of a previously acquired video frame of the second video frame is that a moving object of the preset object type exists, determining a first ending time point according to a time point corresponding to the second video frame;
determining position change information of a moving object in each video frame acquired between the first starting time point and the first ending time point, and determining a movement type of the moving object in each video frame acquired between the first starting time point and the first ending time point according to the position change information and a target area, wherein the movement type comprises entering the target area or leaving the target area;
adding the first starting time point, the first ending time point and the movement type of the moving object into the corresponding relation of the starting time point, the ending time point and the movement type;
when a fragment query instruction is received, acquiring a movement type corresponding to the fragment query instruction;
determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the movement type corresponding to the fragment query instruction;
playing the video segment between the at least one pair of start and end time points.
2. The method of claim 1, wherein determining a moving object detection result for a currently acquired video frame based on the currently acquired video frame and a trained object detection model comprises:
inputting a currently acquired video frame into a trained object detection model to obtain a model output value, wherein the model output value is used for indicating whether the input video frame comprises a moving object or not and indicating the type of the moving object when the input video frame comprises the moving object;
and determining whether the currently acquired video frame comprises a moving object of a preset object type or not based on the model output value, so as to obtain a moving object detection result of the video frame.
3. The method of claim 1, wherein determining whether a moving object exists in the currently acquired video frame, and if a moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and the trained object detection model comprises:
determining whether a moving object exists in an image of a currently acquired video frame in a preset area, and if the moving object exists in the image of the currently acquired video frame in the preset area, determining a moving object detection result of the image of the currently acquired video frame in the preset area based on the currently acquired video frame and a trained object detection model.
4. The method of claim 1, wherein determining a first starting time point according to a time point corresponding to a first video frame and determining a first ending time point according to a time point corresponding to a second video frame comprises:
and determining a time point which is a first preset time length before the time point corresponding to the first video frame as the first starting time point, and determining a time point which is a second preset time length after the time point corresponding to the second video frame as the first ending time point.
5. The method of claim 1, further comprising:
determining the object type of a moving object in each video frame acquired between a first starting time point and a first ending time point;
and adding the first starting time point, the first ending time point and the object type of the moving object into the corresponding relation between the starting time point, the ending time point and the object type.
6. The method of claim 5, further comprising:
when a fragment query instruction is received, acquiring an object type corresponding to the fragment query instruction;
determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the object type corresponding to the fragment query instruction;
playing the video segment between the at least one pair of start time points and end time points.
7. The method of claim 1, wherein when the moving object detection result of the second video frame is that no moving object of a preset object type exists and the moving object detection result of the previously acquired video frame of the second video frame is that a moving object of the preset object type exists, after determining the first end time point according to the time point corresponding to the second video frame, the method further comprises:
and when the moving object detection result of a third video frame within a third preset duration after the first end time point indicates that a moving object of a preset object type exists and the moving object detection result of the video frame acquired before the third video frame indicates that no moving object of the preset object type exists, determining a second start time point according to the time point corresponding to the third video frame; and when the moving object detection result of a fourth video frame indicates that no moving object of the preset object type exists and the moving object detection result of the video frame acquired before the fourth video frame indicates that a moving object of the preset object type exists, determining a second end time point according to the time point corresponding to the fourth video frame.
8. The method according to claim 7, wherein after playing a video segment between the first start time point and the first end time point in the target video when receiving a moving object segment playback instruction, the method further comprises:
playing the video clip between the first end time point and the second start time point in a double-speed playing mode;
playing the video segment between the second start time point and the second end time point.
9. The method of claim 1, further comprising:
when receiving a video playback instruction associated with a moving object segment, playing a video segment located between the first start time point and the first end time point in the video associated with the target video.
10. An apparatus for issuing an alarm message, the apparatus comprising:
the acquisition module is used for acquiring video frames in the target video based on a preset detection period;
the device comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining whether a moving object exists in a currently acquired video frame, and if the moving object exists in the currently acquired video frame, determining a moving object detection result of the currently acquired video frame based on the currently acquired video frame and a trained object detection model, wherein the moving object detection result is used for indicating whether a moving object of a preset object type is included in the video frame;
the alarm module is configured to send alarm information if the moving object detection result indicates that a moving object of a preset object type exists;
a playback module, configured to determine a first start time point according to a time point corresponding to a first video frame when a moving object detection result of the first video frame is that a moving object of a preset object type exists and a moving object detection result of a previous acquired video frame of the first video frame is that a moving object of a preset object type does not exist, and determine a first end time point according to a time point corresponding to a second video frame when a moving object detection result of the second video frame is that a moving object of a preset object type does not exist and a moving object detection result of a previous acquired video frame of the second video frame is that a moving object of a preset object type exists;
a second determining module, configured to determine position change information of a moving object in each video frame acquired between the first starting time point and the first ending time point, and determine a movement type of the moving object in each video frame acquired between the first starting time point and the first ending time point according to the position change information and a target area, where the movement type includes entering the target area or leaving the target area; adding the first starting time point, the first ending time point and the movement type of the moving object into the corresponding relation of the starting time point, the ending time point and the movement type; when a fragment query instruction is received, acquiring a movement type corresponding to the fragment query instruction; determining at least one pair of starting time point and ending time point corresponding to the fragment query instruction based on the corresponding relation and the movement type corresponding to the fragment query instruction; playing the video segment between the at least one pair of start and end time points.
11. A computer device comprising a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for sending alarm information according to any one of claims 1 to 9.
12. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for sending alarm information according to any one of claims 1 to 9.
CN202010432607.2A 2020-05-20 2020-05-20 Method, device, equipment and storage medium for sending alarm information Active CN113706807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010432607.2A CN113706807B (en) 2020-05-20 2020-05-20 Method, device, equipment and storage medium for sending alarm information


Publications (2)

Publication Number Publication Date
CN113706807A CN113706807A (en) 2021-11-26
CN113706807B true CN113706807B (en) 2023-02-10

Family

ID=78645349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010432607.2A Active CN113706807B (en) 2020-05-20 2020-05-20 Method, device, equipment and storage medium for sending alarm information

Country Status (1)

Country Link
CN (1) CN113706807B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115361523B (en) * 2022-07-13 2024-02-13 重庆甲智甲创科技有限公司 Intelligent monitoring method and device and intelligent door

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507129A (en) * 2016-09-28 2017-03-15 浙江宇视科技有限公司 A kind of video intelligent back method and equipment
CN108322831A (en) * 2018-02-28 2018-07-24 广东美晨通讯有限公司 video playing control method, mobile terminal and computer readable storage medium
CN109922310A (en) * 2019-01-24 2019-06-21 北京明略软件系统有限公司 The monitoring method of target object, apparatus and system
CN110738687A (en) * 2019-10-18 2020-01-31 上海眼控科技股份有限公司 Object tracking method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681751A (en) * 2016-01-15 2016-06-15 上海小蚁科技有限公司 Method, device and system for presenting preview of video


Also Published As

Publication number Publication date
CN113706807A (en) 2021-11-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant