CN112948627B - Alarm video generation method, display method and device - Google Patents

Alarm video generation method, display method and device

Info

Publication number
CN112948627B
Authority
CN
China
Prior art keywords
image
image frame
area
image area
frame
Prior art date
Legal status
Active
Application number
CN201911266637.4A
Other languages
Chinese (zh)
Other versions
CN112948627A (en)
Inventor
师恩义
徐鹏
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201911266637.4A priority Critical patent/CN112948627B/en
Publication of CN112948627A publication Critical patent/CN112948627A/en
Application granted granted Critical
Publication of CN112948627B publication Critical patent/CN112948627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 - Querying
    • G06F16/738 - Presentation of query results
    • G06F16/739 - Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Embodiments of the present application provide an alarm video generation method, an alarm video display method, and an alarm video display device. An image frame at which a target object in a surveillance video triggers an alarm is determined as a first image frame; based on the first image frame, each image frame meeting preset conditions is selected from the surveillance video as a second image frame; for each second image frame in which the target object is displayed, an image area to be cropped, containing the image area occupied by the target object, is determined from the second image frame; each second image frame is cropped based on the image area to be cropped, to obtain a cropped image frame of each second image frame; and an alarm video is generated based on the cropped image frames. With this processing, the effectiveness of the alarm video can be improved.

Description

Alarm video generation method, display method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an alarm video generation method, an alarm video display method, and an alarm video display device.
Background
Perimeter precaution means that, in the security field, when an object such as a person, a vehicle, or an animal is detected entering a designated area or crossing an area boundary, an alarm message is sent to a terminal.
In one implementation, perimeter precaution can be achieved based on a video monitoring system: when a target object that triggers an alarm is detected in a surveillance video captured by a camera, an alarm message can be sent to a terminal. The alarm message can carry an alarm video, and the alarm video can include a plurality of image frames before and after the target object triggers the alarm.
However, since the proportion of the target object in the alarm video captured by the camera may be small, the user may not clearly observe the details of the target object when viewing the alarm video at the terminal, that is, the effectiveness of the alarm video in the related art is low.
Disclosure of Invention
The embodiment of the application provides an alarm video generation method, an alarm video display method and an alarm video display device, which can improve the effectiveness of an alarm video. The specific technical scheme is as follows:
the embodiment of the application discloses an alarm video generation method, which comprises the following steps:
determining an image frame when a target object in a monitoring video triggers an alarm as a first image frame;
selecting, from the surveillance video, each image frame satisfying a preset condition as a second image frame based on the first image frame, the preset condition being a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
for each second image frame displaying the target object, determining an image area to be cropped containing an image area occupied by the target object from the second image frame;
cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
and generating an alarm video based on each of the cropped image frames.
Optionally, for each second image frame on which the target object is displayed, determining an image area to be cropped that includes an image area occupied by the target object from the second image frame includes:
for each second image frame displaying the target object, determining a smallest rectangular area, which contains an image area occupied by the target object, in the second image frame as a first image area in the second image frame;
performing region expansion on a first image region in the second image frame according to a preset width-height ratio to obtain a second image region;
determining a minimum rectangular area containing an image area corresponding to a first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing region expansion on a third image region in the second image frame according to the preset width-height ratio to obtain a fourth image region;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
Optionally, before the cropping the second image frames based on the to-be-cropped image area to obtain the cropped image frames of the second image frames, the method further includes:
determining a second image frame, on which the target object is not displayed, as a third image frame;
and determining an image area to be cut in the third image frame as an image area corresponding to a fourth image area in the second image frame on which the target object is displayed.
Optionally, the method further includes:
if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, performing area expansion on a second image area in the second image frame according to the size of a preset multiple of the fifth image area, wherein the preset multiple is greater than or equal to 1;
and taking the image area after the area expansion as an image area to be cut in the second image frame.
Optionally, the method further includes:
determining a second image frame without the target object displayed as a third image frame when the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio;
performing interpolation calculation on an image area to be cut in a previous image frame and an image area to be cut in a next image frame of the third image frame;
and determining an image area to be cut in the third image frame based on the result of the interpolation calculation.
Optionally, the generating an alarm video based on each of the cropped image frames includes:
zooming each of the cropped image frames based on a preset resolution as a zoomed image frame, wherein the preset resolution is smaller than the resolution of the surveillance video;
and generating an alarm video based on the zooming image frames.
Optionally, the method further includes:
and sending an alarm message to a terminal, wherein the alarm message carries the alarm video, the first image frame and the cutting information, and the cutting information is used for indicating the size and the position of the image area to be cut in each second image frame.
In a second aspect, an embodiment of the present application discloses an alarm video display method, including:
receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
and displaying the alarm video.
Optionally, the alarm message further carries the first image frame and clipping information, where the clipping information is used to indicate the size and position of the to-be-clipped image area in each second image frame.
Optionally, after the displaying the alarm video, the method further includes:
and when an alarm picture display instruction is received, displaying the first image frame.
Optionally, the method further includes:
when a display instruction for simultaneously displaying an alarm picture and an alarm video is received, determining the size of the image area to be cut and the position of the image area to be cut in the first image frame as a target display position according to the cutting information;
and displaying the alarm video at the target display position according to the size of the image area to be cut.
In a third aspect, an embodiment of the present application discloses an alarm video generating device, where the device includes:
the first determination module is used for determining an image frame when a target object in the monitoring video triggers an alarm as a first image frame;
the selecting module is used for selecting, based on the first image frame, each image frame meeting preset conditions from the monitoring video as a second image frame, wherein the preset conditions are a first number of image frames located before and closest to the first image frame in the monitoring video, a second number of image frames located after and closest to the first image frame, and the first image frame;
a second determining module, configured to determine, for each second image frame in which the target object is displayed, an image area to be cropped that includes an image area occupied by the target object from the second image frame;
the cutting module is used for cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
and the generating module is used for generating an alarm video based on each cutting image frame.
Optionally, the second determining module is specifically configured to determine, for each second image frame that displays the target object, a smallest rectangular area in the second image frame, where the smallest rectangular area includes an image area occupied by the target object, as a first image area in the second image frame;
performing region expansion on a first image region in the second image frame according to a preset width-height ratio to obtain a second image region;
determining a minimum rectangular area containing an image area corresponding to the first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing area expansion on a third image area in the second image frame according to the preset width-height ratio to obtain a fourth image area;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
Optionally, the apparatus further comprises:
a first processing module, configured to determine a second image frame on which the target object is not displayed as a third image frame;
and determining an image area to be cut in the third image frame as an image area corresponding to a fourth image area in the second image frame on which the target object is displayed.
Optionally, the second determining module is further configured to perform, if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, area expansion on a second image area in the second image frame according to a preset multiple of the size of the fifth image area, where the preset multiple is greater than or equal to 1;
and taking the image area after the area expansion as an image area to be cut in the second image frame.
Optionally, the apparatus further comprises:
a second processing module, configured to determine, as a third image frame, a second image frame in which the target object is not displayed when a ratio of sizes of the fifth image area and the fourth image area is smaller than the preset ratio;
performing interpolation calculation on an image area to be cut in a previous image frame and an image area to be cut in a next image frame of the third image frame;
and determining an image area to be cut in the third image frame based on the result of the interpolation calculation.
Optionally, the generating module is specifically configured to scale each of the cropped image frames based on a preset resolution to serve as a scaled image frame, where the preset resolution is smaller than the resolution of the monitoring video;
and generating an alarm video based on the zooming image frames.
Optionally, the apparatus further comprises:
and the sending module is used for sending an alarm message to a terminal, wherein the alarm message carries the alarm video, the first image frame and the cutting information, and the cutting information is used for indicating the size and the position of an image area to be cut in each second image frame.
In a fourth aspect, an embodiment of the present application discloses an alarm video display device, including:
the receiving module is used for receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in a second image frame on which the target object is displayed is an image area that contains the image area occupied by the target object in the second image frame;
and the first display module is used for displaying the alarm video.
Optionally, the alarm message further carries the first image frame and cropping information, where the cropping information is used to indicate the size and position of the to-be-cropped image area in each second image frame.
Optionally, the apparatus further comprises:
and the second display module is used for displaying the first image frame when an alarm image display instruction is received.
Optionally, the second display module is further configured to, when a display instruction for displaying an alarm picture and an alarm video at the same time is received, determine, according to the clipping information, the size of the image area to be clipped and the position of the image area to be clipped in the first image frame as a target display position;
and displaying the alarm video at the target display position according to the size of the image area to be cut.
In another aspect, an embodiment of the present application further discloses an electronic device, where the electronic device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method for generating an alarm video according to the first aspect when executing the program stored in the memory.
In another aspect of this application, an embodiment of this application further discloses an electronic device, where the electronic device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the alarm video display method according to the second aspect when executing the program stored in the memory.
In yet another aspect of this embodiment, a computer-readable storage medium is further provided, which has instructions stored therein, and when the instructions are run on a computer, the method for generating an alarm video according to the first aspect is implemented.
In yet another aspect of this embodiment, there is also provided a computer-readable storage medium having stored therein instructions which, when run on a computer, implement the alarm video display method according to the second aspect described above.
In another aspect of this embodiment, a computer program product containing instructions is provided, which when executed on a computer, causes the computer to execute the method for generating an alarm video according to the first aspect.
In another aspect of this embodiment, a computer program product containing instructions is provided, which when executed on a computer causes the computer to execute the method for displaying an alarm video according to the second aspect.
The embodiment of the application provides a method for generating an alarm video, which can determine an image frame when a target object in a surveillance video triggers an alarm as a first image frame, select each image frame meeting preset conditions from the surveillance video as a second image frame based on the first image frame, determine an image area to be cut containing an image area occupied by the target object from each second image frame aiming at each second image frame displaying the target object, cut each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame, and generate the alarm video based on each cut image frame.
Accordingly, the image frames in the monitoring video are cut based on the image area occupied by the target object, the occupied proportion of the target object in the image frames can be improved, and correspondingly, the alarm video obtained through cutting is displayed on the terminal, so that a user can clearly observe the details of the target object, and further, the effectiveness of the alarm video can be improved.
Of course, it is not necessary for any product or method of the present application to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of an alarm video generation method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an alarm video display method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating display state switching of a terminal according to an embodiment of the present disclosure;
fig. 4 is a flowchart of an alarm message display provided in an embodiment of the present application;
FIG. 5 is a flowchart of a method for generating a linked list of matched images according to an embodiment of the present disclosure;
fig. 6 is a processing flow diagram of an alarm video generation module according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an alarm video generating apparatus according to an embodiment of the present application;
FIG. 8 is a block diagram of an alarm video display device according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In the prior art, the proportion of the target object in the alarm video shot by the camera may be small, so that when a user watches the alarm video at a terminal, the user cannot clearly observe the details of the target object, that is, in the related art, the effectiveness of the alarm video is low.
In order to solve the above problem, an embodiment of the present application provides an alarm video generation method, which may be applied to an electronic device, where the electronic device may acquire a monitoring video, process the monitoring video, and generate an alarm video. For example, the electronic device may be a front-end device of a video surveillance system.
The electronic device may determine an image frame (i.e., a first image frame in this embodiment) when a target object in the surveillance video triggers an alarm, and may select, based on the first image frame, each image frame (i.e., a second image frame in this embodiment) that satisfies a preset condition from the surveillance video.
For each second image frame on which a target object is displayed, the electronic device may determine an image area to be cropped from the second image frame, where the image area to be cropped includes an image area occupied by the target object, and then, the electronic device may crop each second image frame based on the image area to be cropped to obtain a cropped image frame (i.e., a cropped image frame in the embodiment of the present application) of each second image frame, and then, the electronic device generates an alarm video based on each cropped image frame.
Accordingly, the image frames in the monitoring video are cut based on the image area occupied by the target object, the occupied proportion of the target object in the image frames can be improved, and correspondingly, the alarm video obtained through cutting is displayed on the terminal, so that a user can clearly observe the details of the target object, and further, the effectiveness of the alarm video can be improved.
Referring to fig. 1, fig. 1 is a flowchart of an alarm video generation method provided in an embodiment of the present application, where the method may include the following steps:
s101: and determining an image frame when a target object in the monitoring video triggers an alarm as a first image frame.
The target object is an object displayed in the surveillance video, for example, the target object may be a person, an animal, or a vehicle displayed in the surveillance video.
In one implementation, the electronic device may perform image analysis on the collected surveillance video, and determine a target object for triggering an alarm according to a preset alarm policy. For example, when it is detected that a vehicle enters a designated area, the vehicle may be determined as a target object, and accordingly, an image frame of a surveillance video when the vehicle enters the designated area is a first image frame in the embodiment of the present application.
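As an illustration of the kind of alarm policy described above, the following minimal Python sketch checks whether a detected object's bounding box has entered a designated rectangular area; the box format (x, y, width, height), the center-point criterion, and all names are illustrative assumptions, not the patent's actual detection logic.

```python
# Illustrative sketch of a simple alarm policy: the alarm fires when the center
# of a detected object's bounding box lies inside a designated rectangular area.
# Box format (x, y, w, h) and all names are assumptions, not the patent's logic.

def box_center(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def triggers_alarm(object_box, designated_area):
    """Return True if the object's center lies inside the designated area."""
    cx, cy = box_center(object_box)
    ax, ay, aw, ah = designated_area
    return ax <= cx <= ax + aw and ay <= cy <= ay + ah

# Example: a vehicle box whose center (310, 220) falls inside a 400x300 area at (100, 100).
print(triggers_alarm((250, 180, 120, 80), (100, 100, 400, 300)))  # True
```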
S102: and based on the first image frame, selecting each image frame meeting preset conditions from the monitoring video as a second image frame.
The preset conditions are a first number of image frames which are located in front of and closest to a first image frame, a second number of image frames which are located behind and closest to the first image frame, and the first image frame in the monitoring video. The first number and the second number may be set by a skilled person based on experience.
In actual operation, in order to enable a user to observe an environmental scene monitored before and after a target object triggers an alarm, a plurality of image frames adjacent to a first image frame in a monitoring video may be acquired to generate an alarm video.
In one implementation, a first number of image frames located before and closest to a first image frame, a second number of image frames located after and closest to the first image frame in the surveillance video may be determined and combined with the first image frame as a second image frame. And each second image frame can embody the monitored environmental scene before and after the target object triggers the alarm.
It is understood that, since the target object is in a motion state, some of the second image frames may not display the target object.
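The selection of the second image frames can be pictured with the following minimal sketch; the list-slicing approach, the parameter names nf and nb, and the example numbers are illustrative assumptions rather than the embodiment's implementation.

```python
# Minimal sketch of selecting the second image frames: the Nf frames immediately
# before the alarm frame, the Nb frames immediately after it, and the alarm frame
# itself. Parameter names and example values are illustrative assumptions.

def select_second_frames(frames, alarm_index, nf, nb):
    start = max(0, alarm_index - nf)
    end = min(len(frames), alarm_index + nb + 1)
    return frames[start:end]

frames = list(range(100))          # stand-in for decoded image frames
second = select_second_frames(frames, alarm_index=50, nf=25, nb=25)
print(len(second))                 # 51 frames: 25 before + alarm frame + 25 after
```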
S103: for each second image frame on which a target object is displayed, an image area to be cropped containing an image area occupied by the target object is determined from the second image frame.
In the embodiment of the present application, after each second image frame is determined, a second image frame (which may be referred to as a fourth image frame) in which the target object is displayed may be further determined, and then, an image area to be cropped in the fourth image frame may be determined.
For each fourth image frame, the image area to be cut out can include an image area occupied by the target object in the fourth image frame, that is, the image area to be cut out in one fourth image frame can cover the image area occupied by the target object in the fourth image frame.
In addition, if a target object is not displayed in one second image frame, the region to be cropped in the second image frame may be determined according to the region to be cropped in the fourth image frame, and the determination method will be described in detail in the following embodiments.
S104: and cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame.
After the image area to be cropped is determined, for each second image frame, the second image frame may be cropped based on the image area to be cropped in the second image frame, so as to obtain the image portion corresponding to the image area to be cropped as a new image frame (i.e., the cropped image frame in the embodiment of the present application).
It is understood that the cropped image frame can display the target object, and since the cropped image frame is cropped from the second image frame, the proportion of the target object in the cropped image frame is greater than that in the second image frame, i.e., the cropped image frame can display the details of the target object more clearly.
S105: and generating an alarm video based on each cut image frame.
In the embodiment of the application, after each cropping image frame is obtained, an alarm video can be generated based on each cropping image frame.
Because the proportion of the target object in the clipping image frame is larger than that of the target object in the second image frame, namely, the clipping image frame can clearly display the details of the target object, a user can clearly observe the details of the target object after the alarm video is displayed on the terminal, and further, the effectiveness of the alarm video can be improved.
It can be understood that the storage space occupied by the cut image frame after cutting is smaller than that occupied by the second image frame, so that the data volume of the alarm video can be reduced, the bandwidth resource occupied when the alarm video is transmitted is reduced, and for the terminal, when the alarm video is decoded and played, the decoding calculation amount can be reduced, and the playing delay is reduced.
In addition, in order to further reduce the data volume of the alarm video, the clipping video frame can be scaled.
Optionally, S105 may include the following steps: and zooming each cut image frame based on the preset resolution to serve as a zooming image frame, and generating an alarm video based on each zooming image frame.
And the preset resolution is smaller than the resolution of the monitoring video.
The preset resolution may be set by a technician based on experience and business requirements. For example, if the resolution of the surveillance video is 1920 × 1080, the preset resolution may be 640 × 480, and the cropped video frame is scaled according to the preset resolution, so that the data amount of the alarm video can be further reduced.
The size of the target object in a surveillance video with a resolution of 1080P is generally no larger than 400 × 400 pixels. If a segment of alarm video is compressed in VGA (Video Graphics Array, a video transmission standard) format, that is, with a resolution of 640 × 480, a frame rate of 12.5 fps (frames per second), and a duration of 4 seconds, the resulting alarm video is only about 100 KB; an alarm video with the same duration and encoding mode but a frame rate of 25 fps and a resolution of 1080P is about 1 MB, roughly 10 times larger.
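The scaling step can be sketched as follows, assuming OpenCV (cv2) and NumPy are available; the 640 × 480 preset resolution is taken from the example above, and the function name is illustrative.

```python
# Sketch of scaling cropped frames to a preset resolution before encoding,
# assuming OpenCV (cv2) and NumPy are available; 640x480 is an assumed value.
import cv2
import numpy as np

PRESET_RESOLUTION = (640, 480)  # (width, height), smaller than the 1080P source

def scale_cropped_frames(cropped_frames, resolution=PRESET_RESOLUTION):
    return [cv2.resize(frame, resolution) for frame in cropped_frames]

# Example with a dummy 400x400 cropped frame (3-channel BGR image).
dummy = np.zeros((400, 400, 3), dtype=np.uint8)
scaled = scale_cropped_frames([dummy])
print(scaled[0].shape)  # (480, 640, 3)
```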
In addition, depending on the image area occupied by the target object in each second image frame, the second image frames can be cropped in different cropping modes, so that the proportion of the target object in the cropped image frames is increased as much as possible while the way this proportion changes across the cropped image frames stays consistent with the way it changes across the second image frames, improving the user's viewing experience.
Optionally, the image area to be cropped may be determined according to different modes, and accordingly, S103 may include the following steps:
step one, aiming at each second image frame with a target object displayed, determining a minimum rectangular area of the image area occupied by the target object in the second image frame as a first image area in the second image frame.
In an implementation manner, after the fourth image frames are determined from the second image frames, if a plurality of fourth image frames are determined, for each fourth image frame, a minimum rectangular region containing the image region occupied by the target object in the fourth image frame (i.e., the first image region in the embodiment of the present application) may be determined, where the first image region is the minimum bounding rectangle of the target object in the fourth image frame.
Further, a respective first image region in each fourth image frame may be determined. It is understood that the size of the first image area in each fourth image frame may be the same or different; the relative position of the first image area in each fourth image frame may be the same or different.
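A possible way to compute the first image region from a per-frame target mask is sketched below; the use of a binary mask (rather than detector output boxes) and the (x, y, w, h) rectangle format are assumptions made for illustration.

```python
# Minimal sketch of computing the first image region: the smallest axis-aligned
# rectangle containing the image area occupied by the target object, here derived
# from an assumed binary mask of target pixels (NumPy only).
import numpy as np

def min_bounding_rect(target_mask):
    """Return (x, y, w, h) of the tightest rectangle around non-zero mask pixels."""
    ys, xs = np.nonzero(target_mask)
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1)

mask = np.zeros((1080, 1920), dtype=np.uint8)
mask[300:500, 800:950] = 1                     # pretend these pixels belong to the target
print(min_bounding_rect(mask))                 # (800, 300, 150, 200)
```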
And step two, performing region expansion on the first image region in the second image frame according to the preset width-height ratio to obtain a second image region.
The preset aspect ratio may be an aspect ratio of the alarm video, and the specific aspect ratio may be set by a technician according to service requirements. For example, the preset aspect ratio may be 16:9.
Since the shape of the target object displayed in each fourth image frame may differ, the aspect ratios of the first image regions may also differ; for uniform processing, each first image region may be expanded according to a preset aspect ratio to obtain second image regions with the same aspect ratio.
In one implementation, in order to improve the user viewing experience of the obtained alarm video, the central point of the first image region may be used as an expansion center, and region expansion may be performed symmetrically to both sides.
For example, with a preset aspect ratio of 16:9, if a first image region is narrower than the preset ratio, its width can be extended symmetrically to both sides about the center point until the 16:9 ratio is reached; conversely, if the first image region is wider than the preset ratio, its height can be extended symmetrically about the center point.
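The region expansion described in step two can be sketched as follows; the 16:9 default, the omission of frame-boundary clipping, and the function name are assumptions for illustration only.

```python
# Sketch of the aspect-ratio expansion in step two: grow the first image region
# symmetrically about its center until it reaches a preset width:height ratio
# (16:9 is an assumed value). Clipping to the frame boundary is omitted for brevity.

def expand_to_aspect(rect, aspect_w=16, aspect_h=9):
    x, y, w, h = rect
    target = aspect_w / aspect_h
    if w / h < target:                 # too narrow: extend the width
        new_w, new_h = h * target, h
    else:                              # too flat: extend the height
        new_w, new_h = w, w / target
    cx, cy = x + w / 2.0, y + h / 2.0  # keep the original center as expansion center
    return cx - new_w / 2.0, cy - new_h / 2.0, new_w, new_h

print(expand_to_aspect((800, 300, 150, 200)))  # width grows to 200 * 16 / 9 ≈ 355.6
```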
And step three, determining a minimum rectangular area containing the image area corresponding to the first image area in each second image frame in the second image frame as a third image area in the second image frame.
In addition, after the first image region is determined, for each fourth image frame, the image regions corresponding to the first image regions in the other fourth image frames in the fourth image frame may also be determined, and further, the smallest rectangular region (i.e., the third image region in the embodiment of the present application) including the image regions corresponding to the first image regions in all the fourth image frames in the fourth image frame may be determined.
It will be appreciated that the relative positions of the third image areas in each fourth image frame are the same and the sizes are also the same.
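Step three can be pictured as taking the bounding rectangle of the union of all first image regions, as in the sketch below; the (x, y, w, h) rectangle format and the example coordinates are illustrative assumptions.

```python
# Sketch of step three: the third image region is the smallest rectangle that
# contains the first image regions of every frame in which the target appears,
# i.e. the bounding rectangle of their union (rectangles as (x, y, w, h)).

def union_bounding_rect(rects):
    x0 = min(x for x, y, w, h in rects)
    y0 = min(y for x, y, w, h in rects)
    x1 = max(x + w for x, y, w, h in rects)
    y1 = max(y + h for x, y, w, h in rects)
    return x0, y0, x1 - x0, y1 - y0

first_regions = [(800, 300, 150, 200), (850, 320, 150, 200), (900, 340, 150, 200)]
print(union_bounding_rect(first_regions))  # (800, 300, 250, 240)
```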
And step four, performing area expansion on a third image area in the second image frame according to the preset width-height ratio to obtain a fourth image area.
In one implementation, after the third image regions are determined, the third image regions may also be subjected to region expansion according to a preset aspect ratio to obtain fourth image regions with the same aspect ratio.
For the method of performing region extension on the third image region, reference may be made to the above method of performing region extension on the first image region, and details are not described here again.
And step five, determining the largest image area in the obtained second image areas as a fifth image area, and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the image frame as an image area to be cut in the image frame.
Wherein the preset ratio can be set by a skilled person according to experience.
After the second image area and the fourth image area are determined, a largest image area (i.e., a fifth image area in the embodiment of the present application) in the second image areas may be further determined, and then sizes of the fifth image area and the fourth image area may be compared, so as to determine different cropping modes according to a comparison result.
In one implementation, if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to the preset ratio, which indicates that the ratio of the fifth image area to the fourth image area is large, the fourth image frame may be cropped in the uniform cropping mode, that is, for each fourth image frame, the fourth image area in the fourth image frame may be used as the image area to be cropped in the fourth image frame.
Since the relative positions and sizes of the third image areas in the fourth image frames are the same, it can be known that the relative positions and sizes of the fourth image areas in the fourth image frames are the same. Namely, in the uniform cropping mode, the image areas to be cropped in the fourth image frames are the same.
Optionally, the method may further include: determining a second image frame, on which the target object is not displayed, as a third image frame; and determining an image area to be cut in the third image frame as an image area corresponding to a fourth image area in the second image frame on which the target object is displayed.
It is understood that, since there may be an image frame (i.e., the third image frame in the embodiment of the present application) in the second image frame, where the target object is not displayed, if the above-mentioned uniform cropping mode is adopted, in order to ensure the fluency of the image frames in the cropped image frames, the image area to be cropped in the third image frame may be determined to be an image area corresponding to the fourth image area in the fourth image frame, that is, the relative positions of the image areas to be cropped in all the second image frames are the same, and the sizes of the image areas to be cropped are also the same.
Optionally, the method may further include the steps of: if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, performing area expansion on the second image area in the image frame according to the preset multiple of the fifth image area; and taking the image area after the area expansion as an image area to be cut in the image frame.
The preset multiple may be greater than or equal to 1, for example, the preset multiple may be 1.1, or alternatively, the preset multiple may also be 1.2, but is not limited thereto.
In one implementation, if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, it indicates that the proportion of the fifth image area in the fourth image area is smaller, and at this time, if the uniform cropping mode is adopted, the proportion of the target object in the cropped image frame is still smaller.
Therefore, in order to further increase the proportion of the target object in the cropped image frame, the fourth image frame may be cropped in a moving cropping mode, that is, for each fourth image frame, the second image area in the fourth image frame may be subjected to area expansion according to the size of the preset multiple of the fifth image area, and the image area obtained after the area expansion is used as the image area to be cropped in the fourth image frame.
The method for expanding the second image region according to the preset multiple of the fifth image region may also refer to the method for expanding the first image region.
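The choice between the uniform and moving cropping modes can be sketched as a simple area-ratio comparison; the threshold value (0.25) and all names below are assumptions, not values from the embodiment.

```python
# Sketch of the mode decision: compare the area of the fifth image region (the
# largest per-frame expanded target region) with that of the fourth image region
# (the expanded union region). The preset ratio kh is an assumed value.

def rect_area(rect):
    _, _, w, h = rect
    return w * h

def choose_crop_mode(fifth_region, fourth_region, kh=0.25):
    ratio = rect_area(fifth_region) / rect_area(fourth_region)
    # Large ratio: the target already fills much of the union region, so one
    # fixed crop window (uniform mode) is enough; otherwise follow the target
    # frame by frame (moving mode) to keep it large in the cropped frames.
    return "uniform" if ratio >= kh else "moving"

print(choose_crop_mode((800, 300, 356, 200), (700, 250, 640, 360)))  # "uniform"
```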
Optionally, in order to ensure the fluency of the image frames in the cropped image frames, the method may further include the following steps: under the condition that the ratio of the sizes of the fifth image area and the fourth image area is smaller than a preset ratio, determining a second image frame without a target object displayed as a third image frame; performing interpolation calculation on an image area to be cut in a previous image frame and an image area to be cut in a next image frame of the third image frame; based on the result of the interpolation calculation, an image area to be cropped in the third image frame is determined.
It can be understood that, since the relative positions of the second image areas in the fourth image frames may not be the same, if the moving cropping mode is adopted, the central point of the second image area is taken as the extension center, and the second image area is symmetrically extended towards two sides to obtain the image areas to be cropped, so that the relative positions of the image areas to be cropped in the fourth image frames may also be different.
In order to ensure the smoothness of the image frames in the cut image frames, for the image frame (i.e., the third image frame) in which the target object is not displayed in the second image frame, interpolation calculation may be performed on the to-be-cut image area in the previous image frame and the to-be-cut image area in the next image frame of the third image frame, and then the to-be-cut image area in the third image frame is obtained.
For example, if the lower-left vertex of the image area to be cut in the image frame preceding the third image frame has coordinates (X1, Y1), with width A and height B, and the lower-left vertex of the image area to be cut in the image frame following the third image frame has coordinates (X2, Y2), with width A and height B, then the lower-left vertex of the image area to be cut in the third image frame can be obtained as ((X1 + X2)/2, (Y1 + Y2)/2), with the same width A and height B, so that the image area to be cut in the third image frame can be determined.
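The interpolation for a frame in which the target is not displayed can be sketched as follows, matching the midpoint formula above; names and example coordinates are illustrative.

```python
# Sketch of the interpolation used in moving-crop mode for a frame in which the
# target is not detected: take the midpoint of the crop regions of the previous
# and next frames, keeping the same width and height.

def interpolate_crop_region(prev_rect, next_rect):
    (x1, y1, w, h), (x2, y2, _, _) = prev_rect, next_rect
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0, w, h

prev_region = (700, 250, 356, 200)
next_region = (760, 270, 356, 200)
print(interpolate_crop_region(prev_region, next_region))  # (730.0, 260.0, 356, 200)
```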
Optionally, the method may further include the steps of: and sending an alarm message to the terminal.
The alarm message may carry an alarm video, a first image frame, and clipping information, where the clipping information is used to indicate the size and position of an image area to be clipped in each second image frame.
After the alarm video is generated, an alarm message carrying the alarm video, the first image frame and the cutting information can be sent to the terminal, and then the terminal can display the alarm video and the first image frame based on the cutting information.
In the related art, the alarm video and the alarm picture are generally stored separately; during playback, the alarm picture must first be captured, so the processing is complex and the playback delay is large. In the embodiment of the present application, the alarm video and the alarm picture can be transmitted and stored together; accordingly, the terminal can display the alarm video and the alarm picture simultaneously, and can switch between displaying them, which can improve the user experience.
Referring to fig. 2, fig. 2 is a flowchart of an alarm video display method provided in an embodiment of the present application, where the method may include the following steps:
s201: and receiving an alarm message.
The alarm message may carry an alarm video, and the alarm video may be obtained by cutting each second image frame based on the image area to be cut.
The second image frames include a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video.
The first image frame is an image frame when a target object in the monitoring video triggers an alarm. The image area to be cut out in each second image frame on which the target object is displayed is an image area including the image area of the target object in the second image frame.
For a method for generating an alarm video, reference may be made to the detailed description of the above embodiments, and details are not described here.
S202: and displaying an alarm video.
Accordingly, the image frames in the monitoring video are cut based on the image area occupied by the target object, the occupied proportion of the target object in the image frames can be improved, and correspondingly, the alarm video obtained through cutting is displayed on the terminal, so that a user can clearly observe the details of the target object, and further, the effectiveness of the alarm video can be improved.
Optionally, in order to improve the viewing experience of the user, the alarm message may further carry the first image frame and the cropping information, where the cropping information is used to indicate the size and the position of the image area to be cropped in each second image frame.
In the embodiment of the application, when the terminal displays the alarm video, the first image frame can be displayed, and in addition, the alarm video and the first image frame can be simultaneously displayed according to the cutting information.
Optionally, the method may further include the steps of: and when an alarm picture display instruction is received, displaying the first image frame.
In one implementation, when the terminal displays an alarm video in the current display interface, the user may select to display an alarm picture. When receiving an alarm picture display instruction, the terminal can display the first image frame in the current display interface.
Optionally, the method may further include the steps of: when a display instruction for displaying an alarm picture and an alarm video at the same time is received, determining the size of an image area to be cut and the position of the image area to be cut in a first image frame as a target display position according to cutting information; and displaying an alarm video at the target display position according to the size of the image area to be cut.
In one implementation mode, the user can also select to display the alarm picture and the alarm video at the same time, and the terminal can determine the size of the image area to be cut and the position (namely the target display position) of the image area to be cut according to the cutting information, so that the terminal can display the first image frame in the current display interface and display the alarm video in the image area corresponding to the target display position in the first image frame.
Referring to fig. 3, fig. 3 is a schematic view illustrating switching of a display state of a terminal according to an embodiment of the present application.
In one implementation, the video monitoring system may store an alarm message in a preset network storage space, where the alarm message may include an alarm video, a first image frame, and clipping information. Accordingly, the terminal may obtain a link to the network storage space (i.e., an alert message link) and display the link in an alert message list.
The terminal displaying the alarm message may include the following display states:
display state 1: a list of alert messages is displayed, which may include an alert message link.
Display state 2: in the case of the display state 1, after the user clicks the alarm message link, the terminal may decode the picture (i.e., the first image frame) and fill the entire display area, that is, the terminal may acquire the first image frame and fill and display the first image frame in the display interface.
Display state 3: under the condition of the display state 2, when the user clicks the alarm picture, that is, the user clicks the first image frame in the display interface of the terminal, the terminal may decode the video code stream and cover the picture area corresponding to the cropping frame, that is, the terminal may determine the size of the image area to be cropped and the position (that is, the target display position) of the image area to be cropped, and further, the terminal may display the alarm video in the image area corresponding to the target display position in the first image frame.
Display state 4: under the condition of the display state 3, when the user clicks the region to be clipped in the alarm picture, that is, the user clicks the alarm video in the display interface of the terminal, the terminal can decode the video and fill the whole display region, that is, the terminal can fill and display the alarm video in the display interface.
In the case of display state 4, when the user clicks on the alarm video, i.e. the user clicks on the alarm video that the terminal fills up with the display, the terminal may jump to display state 2.
In addition, a "return" button may also be displayed in the display interface of the terminal, and in any display state of the display state 2, the display state 3, and the display state 4, the user may click the "return" button, and accordingly, the terminal may jump to the display state 1.
In the related art, the alarm video and the alarm picture are generally stored separately; during playback, the alarm picture must first be captured, so the processing is complex and the playback delay is large. In the embodiment of the present application, the alarm video and the alarm picture can be transmitted and stored together; accordingly, the terminal can display the alarm video and the alarm picture simultaneously, and can switch between displaying them, which can improve the user experience.
Referring to fig. 4, fig. 4 is a flow chart of displaying alarm information according to an embodiment of the present disclosure.
The video monitoring system can be provided with a video input module, and the video input module is used for sensing and imaging light information, namely, a monitoring video in a YUV format can be obtained through the video input module.
The video input module can generate monitoring videos with two resolutions, the monitoring videos with one resolution (namely YUV1 image frame + timestamp) are sent to the intelligent analysis module, and the monitoring videos with the other resolution (namely YUV2 image frame + timestamp) are sent to the video cache module.
The intelligent analysis module can output a target linked list according to the received surveillance video, where the target linked list records the timestamp of an image frame, the identifiers of the objects in the image frame, and the position and size information of the object frames, and an object frame represents the minimum rectangular image area containing the corresponding object.
In addition, referring to fig. 5, fig. 5 is a flowchart for generating a matched image linked list according to an embodiment of the present application; two image linked lists may be maintained in the video cache module: an unmatched image linked list and a matched image linked list. The unmatched image linked list stores YUV image frame + timestamp, and the matched image linked list stores YUV image frame + timestamp + target linked list.
The video cache module stores the image frames sent by the video input module in the unmatched image linked list, and when the target linked list sent by the intelligent analysis module is received, the video cache module can find the image frames corresponding to the timestamps from the unmatched image linked list and store the image frames and the target linked list into the matched image linked list.
The length L1 of the unmatched image linked list and the length L2 of the matched image linked list can be preset; in either linked list, when a new image frame is stored while the list is full, the image frame with the longest storage time is deleted from that list.
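The behavior of the two fixed-length linked lists can be sketched with Python deques; the class name, the default lengths, and the timestamp-matching loop are assumptions for illustration, not the video cache module's actual implementation.

```python
# Sketch of the two fixed-length caches kept by the video cache module: raw
# frames wait in an "unmatched" deque until analysis results arrive with the
# same timestamp, then move to the "matched" deque. Lengths L1/L2 are assumptions.
from collections import deque

class FrameCache:
    def __init__(self, l1=200, l2=200):
        self.unmatched = deque(maxlen=l1)   # (timestamp, frame)
        self.matched = deque(maxlen=l2)     # (timestamp, frame, target_list)

    def add_frame(self, timestamp, frame):
        self.unmatched.append((timestamp, frame))

    def add_targets(self, timestamp, target_list):
        for ts, frame in self.unmatched:
            if ts == timestamp:
                self.matched.append((ts, frame, target_list))
                return True
        return False                         # no frame with that timestamp cached

cache = FrameCache()
cache.add_frame(1000, "frame-bytes")
print(cache.add_targets(1000, [{"id": 1, "box": (800, 300, 150, 200)}]))  # True
```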
In fig. 4, when the intelligent analysis module detects that a target object triggers an alarm, alarm information may be generated, where the alarm information may include an identifier of the target object, and a timestamp (which may be referred to as an alarm timestamp) of an image frame (i.e., a first image frame in the embodiment of the present application) when the alarm is triggered, and sends the alarm information to the alarm video generation module.
The alarm video generation module may determine a sequence of extracted image frames (i.e., the second image frames in the embodiment of the present application). Starting from the alarm timestamp, in the matched image linked list of the video cache module, Nf frames (i.e., the first number in the embodiment of the present application) are extracted forward (i.e., in the timestamp-increasing direction), and Nb frames (i.e., the second number in the embodiment of the present application) are extracted backward (i.e., in the timestamp-decreasing direction); the frame interval between extracted image frames is a preset frame interval, which may be 0 or 1.
In addition, if fewer than Nf frames are available in the forward direction in the matched image linked list of the video cache module, extraction can continue in the backward direction; conversely, if fewer than Nb frames are available in the backward direction, extraction can continue in the forward direction. In short, the number of extracted image frames is fixed at Nf + Nb + 1 (where Nf + Nb + 1 is smaller than the length L2 of the matched image linked list).
The alarm video generation module can cut the extracted image frames to generate an alarm video and send the alarm video to the video compression module.
Referring to fig. 6, fig. 6 is a flowchart illustrating a process of an alarm video generation module according to an embodiment of the present disclosure.
The alarm video generation module may calculate a minimum bounding rectangle (i.e., a fourth image region in the embodiment of the present application) and a maximum target frame (i.e., a fifth image region in the embodiment of the present application) after determining the extracted image frame sequence (i.e., the second image frame), and calculate an area ratio of the maximum target frame to the minimum bounding rectangle.
According to whether this area ratio reaches the threshold Kh (i.e., the preset ratio in the embodiments of the present application), a cropping mode is determined: cropping mode 1 or cropping mode 2, i.e., the uniform cropping mode and the flow cropping mode in the embodiments of the present application. The second image frames are then cropped accordingly, the cropped image frames obtained by cropping are scaled based on a preset resolution, and the cropping information and the image sequence (i.e., the scaled cropped image frames) are sent to the video compression module.
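Under these definitions, the mode decision itself reduces to a single area comparison. A minimal Python sketch (function and variable names are illustrative; boxes are assumed to be (x, y, w, h) tuples, and the per-frame behaviour of each mode is handled elsewhere):

```python
def choose_crop_mode(largest_target_box, min_bounding_rect, kh):
    """Decide between cropping mode 1 (uniform) and cropping mode 2 (flow).

    `min_bounding_rect` is the fourth image area (the aspect-ratio-expanded
    rectangle enclosing the target areas of all selected frames) and
    `largest_target_box` is the fifth image area (the largest per-frame
    expanded target area); `kh` is the preset ratio threshold.
    """
    area_ratio = (largest_target_box[2] * largest_target_box[3]) / (
        min_bounding_rect[2] * min_bounding_rect[3])
    # Ratio >= Kh: the target already fills most of the overall rectangle, so a
    # single fixed crop window (uniform cropping) is used for every frame.
    # Ratio < Kh: the target is small relative to its overall range of motion,
    # so a per-frame window that follows the target (flow cropping) is used.
    return "uniform" if area_ratio >= kh else "flow"
```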
The video compression module can perform video compression on the scaled cropped image frames.
In addition, as can be seen from fig. 4, when the intelligent analysis module detects an alarm, it may also send the alarm information to the alarm picture generation module. The alarm picture generation module can, according to the alarm information, acquire the image frame corresponding to the alarm timestamp (i.e., the first image frame in the embodiments of the present application) from the video cache module and send it to the picture compression module.
The picture compression module can generate an alarm picture in JPEG format corresponding to the first image frame and send the alarm picture to the alarm message synthesis module. In addition, the picture compression module may add the position information of the target object to the EXIF (Exchangeable Image File Format) data of the alarm picture.
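One possible way to embed the position information, sketched with the piexif library (the choice of library and of the ImageDescription tag are assumptions; the embodiment only states that the position is written into the EXIF data):

```python
import json
import piexif

def write_position_to_exif(jpeg_path, target_box):
    """Store the target object's position, e.g. an (x, y, w, h) box, in the
    JPEG alarm picture's ImageDescription EXIF tag as a small JSON payload."""
    description = json.dumps({"target_box": list(target_box)}).encode("ascii")
    exif_dict = {"0th": {piexif.ImageIFD.ImageDescription: description},
                 "Exif": {}, "GPS": {}, "1st": {}, "thumbnail": None}
    piexif.insert(piexif.dump(exif_dict), jpeg_path)  # rewrites the file's EXIF block in place
```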
The video compression module can also send the compressed alarm video to the alarm message synthesis module.
Correspondingly, the alarm message synthesis module can generate an alarm message carrying the alarm video, the first image frame and the cutting information, and send the alarm message to the alarm storage and uploading module.
The alarm storage and uploading module can send the alarm message to the alarm message display module.
In addition, the alarm storage and uploading module can also store the alarm video, the first image frame and the cutting information into an SD (Secure Digital) Card, an EMMC (Embedded Multi Media Card), a network cloud disk and the like.
The alarm message display module can be a module running on a computer client or a mobile phone client, and can receive the alarm message and an operation instruction of a user and display the alarm message in the form of an image.
Based on the same inventive concept, referring to fig. 7, fig. 7 is a structural diagram of an alarm video generating device provided in an embodiment of the present application, where the device includes:
a first determining module 701, configured to determine an image frame when a target object in a surveillance video triggers an alarm, as a first image frame;
a selecting module 702, configured to select, based on the first image frame, each image frame that meets preset conditions from the surveillance video as a second image frame, where the preset conditions are a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
a second determining module 703, configured to determine, for each second image frame in which the target object is displayed, an image area to be cropped that includes an image area occupied by the target object from the second image frame;
a cropping module 704, configured to crop each second image frame based on the image area to be cropped, to obtain a cropped image frame of each second image frame, where the cropped image frame is used as a cropped image frame;
a generating module 705, configured to generate an alarm video based on each of the cropped image frames.
Optionally, the second determining module 703 is specifically configured to determine, for each second image frame that displays the target object, a smallest rectangular area in the second image frame, where the smallest rectangular area includes an image area occupied by the target object, as a first image area in the second image frame;
performing area expansion on a first image area in the second image frame according to a preset width-to-height ratio to obtain a second image area;
determining a minimum rectangular area containing an image area corresponding to a first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing region expansion on a third image region in the second image frame according to the preset width-height ratio to obtain a fourth image region;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
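The aspect-ratio expansion and the enclosing-rectangle computation used by this module can be sketched as follows (a minimal Python example with illustrative names; the centring rule and the clamping to frame bounds are assumptions, and the expanded box is assumed to fit within the frame):

```python
def union_box(boxes):
    """Minimum rectangle containing all of the given (x, y, w, h) boxes
    (the third image area, before aspect-ratio expansion)."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[0] + b[2] for b in boxes)
    y2 = max(b[1] + b[3] for b in boxes)
    return (x1, y1, x2 - x1, y2 - y1)

def expand_to_aspect(box, aspect, frame_w, frame_h):
    """Expand an (x, y, w, h) box so that width / height equals `aspect`,
    keeping it centred on the original region and clamped to the frame."""
    x, y, w, h = box
    if w / h < aspect:
        new_w, new_h = h * aspect, h      # box too narrow: widen it
    else:
        new_w, new_h = w, w / aspect      # box too flat: heighten it
    cx, cy = x + w / 2, y + h / 2
    nx = min(max(cx - new_w / 2, 0), frame_w - new_w)
    ny = min(max(cy - new_h / 2, 0), frame_h - new_h)
    return (nx, ny, new_w, new_h)
```

In terms of the areas defined above, applying expand_to_aspect to a per-frame first image area yields the second image area; applying union_box to the first image areas of all second image frames and then expand_to_aspect yields the fourth image area, whose size is compared against the largest second image area (the fifth image area).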
Optionally, the apparatus further comprises:
the first processing module is used for determining a second image frame which does not display the target object as a third image frame;
and determining an image area to be cut in the third image frame as an image area corresponding to a fourth image area in the second image frame on which the target object is displayed.
Optionally, the second determining module 703 is further configured to perform, if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, area expansion on the second image area in the second image frame according to the size of a preset multiple of the fifth image area, where the preset multiple is greater than or equal to 1;
and taking the image area after the area expansion as an image area to be cut in the second image frame.
Optionally, the apparatus further comprises:
a second processing module, configured to determine, as a third image frame, a second image frame on which the target object is not displayed when a ratio of sizes of the fifth image area and the fourth image area is smaller than the preset ratio;
performing interpolation calculation on an image area to be cut in a previous image frame and an image area to be cut in a next image frame of the third image frame;
and determining an image area to be cut in the third image frame based on the result of the interpolation calculation.
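For frames in which the target was not detected, the interpolation can be as simple as a linear blend of the neighbouring crop rectangles; a sketch (assuming (x, y, w, h) boxes and that the missing frame lies at fraction t between its neighbours):

```python
def interpolate_crop(prev_box, next_box, t=0.5):
    """Linearly interpolate the crop region for a frame in which the target
    was not detected, from the crop regions of its neighbouring frames.
    `t` is the missing frame's position between the neighbours (0..1)."""
    return tuple(p + (n - p) * t for p, n in zip(prev_box, next_box))
```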
Optionally, the generating module 705 is specifically configured to scale each of the cropped image frames based on a preset resolution to serve as a scaled image frame, where the preset resolution is smaller than the resolution of the monitoring video;
and generating an alarm video based on the zooming image frames.
Optionally, the apparatus further comprises:
and the sending module is used for sending an alarm message to a terminal, wherein the alarm message carries the alarm video, the first image frame and the cutting information, and the cutting information is used for indicating the size and the position of the image area to be cut in each second image frame.
Based on the same inventive concept, referring to fig. 8, fig. 8 is a structural diagram of an alarm video display device provided in an embodiment of the present application, the device including:
a receiving module 801, configured to receive an alarm message, where the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
a first display module 802, configured to display the alarm video.
Optionally, the alarm message further carries the first image frame and clipping information, where the clipping information is used to indicate the size and position of the to-be-clipped image area in each second image frame.
Optionally, the apparatus further comprises:
and the second display module is used for displaying the first image frame when an alarm image display instruction is received.
Optionally, the second display module is further configured to, when a display instruction for displaying an alarm picture and an alarm video at the same time is received, determine, according to the clipping information, the size of the image area to be clipped and the position of the image area to be clipped in the first image frame as a target display position;
and displaying the alarm video at the target display position according to the size of the image area to be cut.
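A sketch of how a client might derive that target display position from the cropping information (names, and the assumption that the alarm picture is shown scaled to a known display area, are illustrative):

```python
def target_display_rect(crop_info, frame_w, frame_h, display_w, display_h):
    """Map the crop rectangle carried in the cropping information (given in
    source-frame coordinates) onto the area in which the full alarm picture
    is displayed, giving the position and size at which to play the alarm video."""
    x, y, w, h = crop_info
    sx, sy = display_w / frame_w, display_h / frame_h
    return (x * sx, y * sy, w * sx, h * sy)
```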
An electronic device is further provided in the embodiments of the present application, as shown in fig. 9, which includes a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 communicate with one another through the communication bus 904,
a memory 903 for storing computer programs;
the processor 901 is configured to implement the following steps when executing the program stored in the memory 903:
determining an image frame when a target object in a monitoring video triggers an alarm as a first image frame;
selecting, from the surveillance video, each image frame satisfying a preset condition as a second image frame based on the first image frame, the preset condition being a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
for each second image frame on which the target object is displayed, determining an image area to be cropped containing an image area occupied by the target object from the second image frame;
cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
and generating an alarm video based on each of the cropped image frames.
The embodiment of the present application further provides an electronic device, as shown in fig. 10, which includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with one another through the communication bus 1004,
a memory 1003 for storing a computer program;
the processor 1001 is configured to implement the following steps when executing the program stored in the memory 1003:
receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
and displaying the alarm video.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The electronic device provided by the embodiments of the present application crops the image frames in the surveillance video based on the image area occupied by the target object, which can increase the proportion of the image frame occupied by the target object. Correspondingly, the cropped alarm video is displayed on the terminal, so that the user can clearly observe the details of the target object, thereby improving the effectiveness of the alarm video.
The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is enabled to execute the method for generating an alarm video provided by the embodiment of the present application.
Specifically, the method for generating an alarm video includes:
determining an image frame when a target object in a monitoring video triggers an alarm as a first image frame;
selecting, from the surveillance video, each image frame satisfying a preset condition as a second image frame based on the first image frame, the preset condition being a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
for each second image frame displaying the target object, determining an image area to be cropped containing an image area occupied by the target object from the second image frame;
cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
and generating an alarm video based on each of the cropped image frames.
It should be noted that other implementation manners of the above alarm video generation method are the same as those of the foregoing method embodiment, and are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is enabled to execute the alarm video display method provided by the embodiment of the present application.
Specifically, the alarm video display method includes:
receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
and displaying the alarm video.
It should be noted that other implementation manners of the above-mentioned alarm video display method are the same as those of the foregoing method embodiment, and are not described herein again.
By running the instructions stored in the computer-readable storage medium provided by the embodiments of the present application, the image frames in the surveillance video are cropped based on the image area occupied by the target object, which can increase the proportion of the image frame occupied by the target object. Correspondingly, the cropped alarm video is displayed on the terminal, so that the user can clearly observe the details of the target object, thereby improving the effectiveness of the alarm video.
The embodiment of the present application further provides another computer program product containing instructions, which when run on a computer, causes the computer to execute the method for generating an alarm video provided by the embodiment of the present application.
Specifically, the method for generating an alarm video includes:
determining an image frame when a target object in a monitoring video triggers an alarm as a first image frame;
selecting, from the surveillance video, each image frame satisfying a preset condition as a second image frame based on the first image frame, the preset condition being a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
for each second image frame on which the target object is displayed, determining an image area to be cropped containing an image area occupied by the target object from the second image frame;
cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
and generating an alarm video based on each of the cropped image frames.
It should be noted that other implementation manners of the above alarm video generation method are the same as those of the foregoing method embodiment, and are not described here again.
The embodiment of the present application further provides another computer program product containing instructions, which when run on a computer, causes the computer to execute the alarm video display method provided by the embodiment of the present application.
Specifically, the alarm video display method includes:
receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in a second image frame on which the target object is displayed is an image area which contains the image area occupied by the target object in the second image frame;
and displaying the alarm video.
It should be noted that other implementation manners of the above-mentioned alarm video display method are the same as those of the foregoing method embodiment, and are not described herein again.
By running the computer program product provided by the embodiments of the present application, the image frames in the surveillance video are cropped based on the image area occupied by the target object, which can increase the proportion of the image frame occupied by the target object. Correspondingly, the cropped alarm video is displayed on the terminal, so that the user can clearly observe the details of the target object, thereby improving the effectiveness of the alarm video.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and in the relevant places, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the scope of protection of the present application.

Claims (9)

1. A method for generating an alarm video, the method comprising:
determining an image frame when a target object in a monitoring video triggers an alarm as a first image frame;
selecting, from the surveillance video, each image frame satisfying a preset condition as a second image frame based on the first image frame, the preset condition being a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
for each second image frame on which the target object is displayed, determining an image area to be cropped containing an image area occupied by the target object from the second image frame;
cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
generating an alarm video based on each of the clipped image frames;
the determining, for each second image frame on which the target object is displayed, an image area to be cropped including an image area occupied by the target object from the second image frame includes:
for each second image frame displaying the target object, determining a minimum rectangular area, which contains the image area occupied by the target object, in the second image frame as a first image area in the second image frame;
performing area expansion on a first image area in the second image frame according to a preset width-to-height ratio to obtain a second image area;
determining a minimum rectangular area containing an image area corresponding to a first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing area expansion on a third image area in the second image frame according to the preset width-height ratio to obtain a fourth image area;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
2. The method according to claim 1, wherein before the cropping each second image frame based on the image area to be cropped to obtain a cropped image frame of each second image frame as a cropped image frame, the method further comprises:
determining a second image frame, on which the target object is not displayed, as a third image frame;
and determining an image area to be cut in the third image frame as an image area corresponding to a fourth image area in the second image frame on which the target object is displayed.
3. The method of claim 1, further comprising:
if the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio, performing area expansion on a second image area in the second image frame according to the size of a preset multiple of the fifth image area, wherein the preset multiple is greater than or equal to 1;
and taking the image area after the area expansion as an image area to be cut in the second image frame.
4. The method of claim 3, further comprising:
determining a second image frame without the target object displayed as a third image frame when the ratio of the sizes of the fifth image area and the fourth image area is smaller than the preset ratio;
performing interpolation calculation on an image area to be cut in a previous image frame and an image area to be cut in a next image frame of the third image frame;
and determining an image area to be cut in the third image frame based on the result of the interpolation calculation.
5. A method for displaying an alarm video, the method comprising:
receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
displaying the alarm video;
wherein the image area to be cropped in the second image frame is determined based on the following modes:
for each second image frame displaying the target object, determining a minimum rectangular area, which contains the image area occupied by the target object, in the second image frame as a first image area in the second image frame;
performing area expansion on a first image area in the second image frame according to a preset width-to-height ratio to obtain a second image area;
determining a minimum rectangular area containing an image area corresponding to the first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing region expansion on a third image region in the second image frame according to the preset width-height ratio to obtain a fourth image region;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
6. An alert video generating apparatus, the apparatus comprising:
the first determination module is used for determining an image frame when a target object in the monitoring video triggers an alarm as a first image frame;
a selecting module, configured to select, based on the first image frame, each image frame that meets a preset condition from the surveillance video as a second image frame, where the preset condition is that a first number of image frames located before and closest to the first image frame, a second number of image frames located after and closest to the first image frame, and the first image frame in the surveillance video;
a second determining module, configured to determine, for each second image frame in which the target object is displayed, an image area to be cropped that includes an image area occupied by the target object from the second image frame;
the cutting module is used for cutting each second image frame based on the image area to be cut to obtain the cut image frame of each second image frame as a cut image frame;
the generating module is used for generating an alarm video based on each cutting image frame;
the second determining module is specifically configured to determine, for each second image frame in which the target object is displayed, a smallest rectangular area in the second image frame, where the smallest rectangular area includes an image area occupied by the target object, as a first image area in the second image frame;
performing area expansion on a first image area in the second image frame according to a preset width-to-height ratio to obtain a second image area;
determining a minimum rectangular area containing an image area corresponding to the first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing region expansion on a third image region in the second image frame according to the preset width-height ratio to obtain a fourth image region;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
7. An alarm video display apparatus, the apparatus comprising:
the receiving module is used for receiving an alarm message, wherein the alarm message carries an alarm video; the alarm video is obtained by cutting each second image frame based on the image area to be cut; the second image frames comprise a first number of image frames in the surveillance video that are located before and closest to a first image frame, a second number of image frames that are located after and closest to the first image frame, and the first image frame; the first image frame is an image frame when a target object in the monitoring video triggers an alarm; each image area to be cut in the second image frame with the target object displayed is an image area which contains the image area occupied by the target object in the second image frame;
the first display module is used for displaying the alarm video;
wherein the image area to be cropped in the second image frame is determined based on the following modes:
for each second image frame displaying the target object, determining a smallest rectangular area, which contains an image area occupied by the target object, in the second image frame as a first image area in the second image frame;
performing area expansion on a first image area in the second image frame according to a preset width-to-height ratio to obtain a second image area;
determining a minimum rectangular area containing an image area corresponding to a first image area in each second image frame in the second image frame as a third image area in the second image frame;
performing area expansion on a third image area in the second image frame according to the preset width-height ratio to obtain a fourth image area;
determining the largest image area in the obtained second image areas as a fifth image area;
and if the ratio of the sizes of the fifth image area and the fourth image area is greater than or equal to a preset ratio, taking the fourth image area in the second image frame as an image area to be cut in the image frame.
8. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, is configured to perform the method steps of any of claims 1-4 or 5.
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4 or claim 5.
CN201911266637.4A 2019-12-11 2019-12-11 Alarm video generation method, display method and device Active CN112948627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911266637.4A CN112948627B (en) 2019-12-11 2019-12-11 Alarm video generation method, display method and device

Publications (2)

Publication Number Publication Date
CN112948627A CN112948627A (en) 2021-06-11
CN112948627B true CN112948627B (en) 2023-02-03

Family

ID=76226443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911266637.4A Active CN112948627B (en) 2019-12-11 2019-12-11 Alarm video generation method, display method and device

Country Status (1)

Country Link
CN (1) CN112948627B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591651A (en) * 2021-07-22 2021-11-02 浙江大华技术股份有限公司 Image capturing method, image display method and device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867792A (en) * 2010-05-14 2010-10-20 成都基业长青科技有限责任公司 Real-time transmission method for video record buffering
CN102833511A (en) * 2012-09-18 2012-12-19 浙江红苹果电子有限公司 Method and device for storing high-definition videos based on serial digital interface (SDI)
CN104392573A (en) * 2014-10-11 2015-03-04 天津艾思科尔科技有限公司 Video-based intelligent theft detection method
CN109068099A (en) * 2018-09-05 2018-12-21 济南大学 Virtual electronic fence monitoring method and system based on video monitoring
CN109509190A (en) * 2018-12-19 2019-03-22 中国科学院重庆绿色智能技术研究院 Video monitoring image screening technique, device, system and storage medium
CN110348343A (en) * 2019-06-27 2019-10-18 深圳市中电数通智慧安全科技股份有限公司 A kind of act of violence monitoring method, device, storage medium and terminal device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007052100A2 (en) * 2005-02-15 2007-05-10 Dspv, Ltd. System and method of user interface and data entry from a video call
JP2012099876A (en) * 2010-10-29 2012-05-24 Sanyo Electric Co Ltd Image processing device, imaging device, image processing method, and program
WO2014133559A1 (en) * 2013-02-27 2014-09-04 Blendagram, Inc. System for and method of augmenting video and images
CN106803959B (en) * 2017-02-28 2019-12-27 腾讯科技(深圳)有限公司 Video image encoding method, video image decoding method, video image encoding apparatus, video image decoding apparatus, and readable storage medium
CN107635101B (en) * 2017-10-27 2020-07-03 Oppo广东移动通信有限公司 Shooting method, shooting device, storage medium and electronic equipment
CN110557678B (en) * 2018-05-31 2022-05-03 北京百度网讯科技有限公司 Video processing method, device and equipment
CN110347877B (en) * 2019-06-27 2022-02-11 北京奇艺世纪科技有限公司 Video processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112948627A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
KR101718373B1 (en) Video play method, terminal, and system
KR101813196B1 (en) Method, device, program, and recording medium for video communication
CN107682714B (en) Method and device for acquiring online video screenshot
JP2017538978A (en) Alarm method and device
US20150085114A1 (en) Method for Displaying Video Data on a Personal Device
KR20080058171A (en) Camera tampering detection
CN109429037B (en) Image processing method, device, equipment and system
JP2010009134A (en) Image processing system, image processing method, and program
KR20160043523A (en) Method, and device for video browsing
CN111787398A (en) Video compression method, device, equipment and storage device
CN112584083B (en) Video playing method, system, electronic equipment and storage medium
CN111314617A (en) Video data processing method and device, electronic equipment and storage medium
CN113766217A (en) Video delay test method and device, electronic equipment and storage medium
CN112948627B (en) Alarm video generation method, display method and device
KR102108246B1 (en) Method and apparatus for providing video in potable device
JP2019149785A (en) Video conversion device and program
CN110796012A (en) Image processing method and device, electronic equipment and readable storage medium
KR20170053714A (en) Systems and methods for subject-oriented compression
CN112954267B (en) Camera for generating alarm video
CN111953980A (en) Video processing method and device
CN112887515B (en) Video generation method and device
CN114140389A (en) Video detection method and device, electronic equipment and storage medium
JP2019009615A (en) Monitoring camera device, monitoring video distribution method, and monitoring system
CN113596582A (en) Video preview method and device and electronic equipment
CN112954374A (en) Video data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant