CN111104549A - Method and equipment for retrieving video - Google Patents

Method and equipment for retrieving video

Info

Publication number
CN111104549A
CN111104549A
Authority
CN
China
Prior art keywords
target
intelligent
video
preset
acquiring
Prior art date
Legal status
Pending
Application number
CN201911403186.4A
Other languages
Chinese (zh)
Inventor
金若梅
许海涛
何月朋
俞坚才
Current Assignee
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN201911403186.4A
Publication of CN111104549A

Classifications

    • G06F16/732 Query formulation (G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F16/00 Information retrieval; Database structures therefor; File system structures therefor › G06F16/70 of video data › G06F16/73 Querying)
    • G06F16/783 Retrieval using metadata automatically derived from the content (same hierarchy down to G06F16/70, then G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually)

Abstract

The application is applicable to the technical field of computers, and provides a method and a device for retrieving video. The method comprises the following steps: acquiring intelligent data to be retrieved; acquiring, from the intelligent data, each target data meeting a preset retrieval condition; acquiring the time period corresponding to each target data; and extracting the target video corresponding to each time period from the original video. In this manner, the intelligent data are searched to obtain the target data meeting the conditions, and the target video is then extracted from the original video according to the time periods corresponding to that target data. Because the retrieval is performed on the intelligent data rather than on the original video, the resource occupancy rate is reduced, the retrieval process is simple, and retrieval time is saved; and because the target video is extracted from the original video according to the time periods of the qualifying target data, the retrieved target video is more accurate.

Description

Method and equipment for retrieving video
Technical Field
The application belongs to the technical field of computers, and particularly relates to a method and equipment for retrieving videos.
Background
A Network Video Recorder (NVR) mainly provides preview, recording and playback functions. During playback, because recordings are long and the amount of data is large, it is difficult to find a specific person or event by replaying all the videos. The intelligent post-retrieval function of an NVR is therefore very important in practical applications: it helps users screen out the key videos they need.
However, the conventional retrieval method occupies a large amount of resources, involves a complex retrieval process, wastes a large amount of time, and cannot accurately retrieve the key video required by the user.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method and an apparatus for retrieving videos, so as to solve the problems that the conventional retrieval method occupies a large amount of resources, the retrieval process is complex, a large amount of time is wasted, and a key video required by a user cannot be accurately retrieved.
A first aspect of an embodiment of the present application provides a method for retrieving a video, including:
acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
acquiring each target data meeting preset retrieval conditions from the intelligent data;
acquiring a time period corresponding to each target data;
and extracting the target video corresponding to each time period from the original video.
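As an illustrative, non-authoritative sketch of these four steps, the retrieval can be viewed as filtering smart-data records by a preset condition and merging the matching timestamps into time periods, each of which maps to one target clip; the record type, field names and merging threshold below are assumptions, not part of the application:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SmartRecord:
    """One smart-data record derived from the original video (illustrative)."""
    timestamp: float                          # seconds into the original video
    object_id: int
    rect: Tuple[float, float, float, float]   # x, y, width, height

def retrieve_periods(records: List[SmartRecord],
                     condition: Callable[[SmartRecord], bool],
                     gap: float = 1.0) -> List[Tuple[float, float]]:
    """Filter smart data by a preset retrieval condition, then merge the
    matching timestamps into time periods; each period corresponds to one
    target video to extract from the original video."""
    hits = sorted(r.timestamp for r in records if condition(r))
    periods: List[Tuple[float, float]] = []
    for t in hits:
        if periods and t - periods[-1][1] <= gap:
            periods[-1] = (periods[-1][0], t)   # extend the current period
        else:
            periods.append((t, t))              # start a new period
    return periods
```

The returned (start, end) pairs would then drive clip extraction from the original recording.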
Further, the intelligent data comprises image information corresponding to each object identified from the original video; the image information includes a size and a position of a rectangle of the object.
Further, in order to increase the retrieval speed and accurately acquire target data meeting preset retrieval conditions, and further accurately retrieve a target video required by a user, when the preset retrieval conditions are used for out-of-range detection, acquiring each target data meeting the preset retrieval conditions from the intelligent data includes:
determining a warning line in a preset area; the warning line is used for judging whether each object is out of range;
determining a first object which crosses the boundary based on the warning line, and acquiring target data corresponding to the first object in the process of crossing the warning line; the first object is an object, among the objects, that crosses the warning line.
Further, in order to accurately determine an out-of-range object and further accurately acquire target data corresponding to the out-of-range object in a process of crossing an alert line, determining a first out-of-range object based on the alert line, and acquiring the target data corresponding to the first out-of-range object in a process of crossing the alert line includes:
determining first image information based on the warning line and each of the image information; the first image information is image information corresponding to an object whose motion direction is the same as a preset border-crossing direction; the preset border-crossing direction is used for representing the direction crossing the warning line;
based on the first image information, eliminating image information corresponding to an object which is not intersected with the warning line to obtain second image information;
calculating a vector included angle between a rectangle corresponding to the second object and the warning line, and judging whether the second object is out of range or not based on the vector included angle; the second object is an object corresponding to the second image information;
and when the judgment result is that the second object is out of range, acquiring target data corresponding to the second object in the process of crossing the warning line.
When the preset retrieval condition is used for regional intrusion detection, acquiring each target data meeting the preset retrieval condition from the intelligent data includes:
acquiring a preset warning area;
determining a target rectangle overlapping the alert zone based on the rectangle of each of the objects and the alert zone;
and acquiring target data corresponding to the object corresponding to the target rectangle when the object invades the warning area.
When the preset retrieval condition is used for movement detection, acquiring each target data meeting the preset retrieval condition from the intelligent data includes:
acquiring a preset movement detection area;
determining a third object based on the rectangle of each of the objects and the movement detection area; the third object is an object, among the objects, that moves in the movement detection area;
and acquiring corresponding target data when the third object moves.
Further, in order to facilitate the user's viewing of the retrieved target video, the method further includes:
generating an intelligent event corresponding to each target video based on each target video and the original video; the intelligent event is used for representing an event of an object corresponding to the target video in a target time period corresponding to the target video;
and drawing and marking a progress bar corresponding to each intelligent event based on the original video.
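For the progress-bar marking step, one simple approach (an assumption; the application does not specify the drawing mechanics) is to normalize each intelligent event's time period against the original video's duration, giving the span of the mark to draw:

```python
from typing import List, Tuple

def progress_marks(events: List[Tuple[float, float]],
                   video_duration: float) -> List[Tuple[float, float]]:
    """Map each intelligent event's (start, end) period, in seconds, to a
    normalized [0, 1] span for marking on the playback progress bar."""
    marks = []
    for start, end in events:
        left = max(0.0, start) / video_duration            # clamp to video start
        right = min(video_duration, end) / video_duration  # clamp to video end
        marks.append((left, right))
    return marks
```

A renderer could then highlight each returned span on the progress bar so the user can jump directly to the corresponding intelligent event.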
Further, in order to facilitate the user's visual check of the intelligent events that have occurred, the method further includes: generating an intelligent event list based on each intelligent event, and displaying the intelligent event list in a preset display area.
A second aspect of an embodiment of the present invention provides an apparatus for retrieving a video, including:
the first acquisition unit is used for acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
the second acquisition unit is used for acquiring each target data meeting the preset retrieval condition from the intelligent data;
a third obtaining unit, configured to obtain a time period corresponding to each piece of target data;
and the extracting unit is used for extracting the target video corresponding to each time period from the original video.
The intelligent data comprises image information corresponding to each object identified from the original video; the image information includes a size and a position of a rectangle of the object.
Further, the second acquisition unit includes:
the first determining unit is used for determining a warning line in a preset area; the warning line is used for judging whether each object is out of range;
the second determining unit is used for determining a first object crossing the boundary based on the warning line and acquiring target data corresponding to the first object in the process of crossing the warning line; the first object is an object, among the objects, that crosses the warning line.
Further, when the preset retrieval condition is used for boundary crossing detection, the second determining unit is specifically configured to:
determining first image information based on the warning line and each of the image information; the first image information is image information corresponding to an object whose motion direction is the same as a preset border-crossing direction; the preset border-crossing direction is used for representing the direction crossing the warning line;
based on the first image information, eliminating image information corresponding to an object which is not intersected with the warning line to obtain second image information;
calculating a vector included angle between a rectangle corresponding to the second object and the warning line, and judging whether the second object is out of range or not based on the vector included angle; the second object is an object corresponding to the second image information;
and when the judgment result is that the second object is out of range, acquiring target data corresponding to the second object in the process of crossing the warning line.
Further, when the preset retrieval condition is used for detecting the intrusion into the area, the second obtaining unit is specifically configured to:
acquiring a preset warning area;
determining a target rectangle overlapping the alert zone based on the rectangle of each of the objects and the alert zone;
and acquiring target data corresponding to the object corresponding to the target rectangle when the object invades the warning area.
Further, when the preset search condition is used for motion detection, the second obtaining unit is specifically configured to:
acquiring a preset movement detection area;
determining a third object based on the rectangle of each of the objects and the movement detection area; the third object is an object, among the objects, that moves in the movement detection area;
and acquiring corresponding target data when the third object moves.
Further, the apparatus further comprises:
the first generation unit is used for generating an intelligent event corresponding to each target video based on each target video and the original video; the intelligent event is used for representing an event of an object corresponding to the target video in a target time period corresponding to the target video;
and the drawing unit is used for drawing and marking the progress bar corresponding to each intelligent event based on the original video.
Further, the apparatus further comprises:
and the second generating unit is used for generating an intelligent event list based on each intelligent event and displaying the intelligent event list in a preset display area.
A third aspect of embodiments of the present invention provides another apparatus, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
acquiring each target data meeting preset retrieval conditions from the intelligent data;
acquiring a time period corresponding to each target data;
and extracting the target video corresponding to each time period from the original video.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of:
acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
acquiring each target data meeting preset retrieval conditions from the intelligent data;
acquiring a time period corresponding to each target data;
and extracting the target video corresponding to each time period from the original video.
The method and the device for retrieving the video have the following beneficial effects:
according to the embodiment of the application, the intelligent data to be retrieved is obtained; acquiring each target data meeting preset retrieval conditions from the intelligent data; acquiring a time period corresponding to each target data; and extracting the target video corresponding to each time period from the original video. In the above mode, the video retrieval device retrieves the acquired intelligent data to obtain target data meeting the conditions; and further extracting the target video from the original video according to the time period corresponding to the target data. The intelligent data is obtained by preprocessing an original video, and comprises image information corresponding to each object identified from the original video, wherein the image information comprises the size and the position of a rectangle of each object; the equipment for retrieving the video does not need to directly analyze the original video, but retrieves based on the intelligent data, and because the data volume of the intelligent data is small, the resource occupancy rate is greatly reduced, the retrieval process is simple, the retrieval time is saved, and the retrieval speed is accelerated; and the target video is extracted from the original video according to the time period corresponding to the target data meeting the conditions, so that the retrieved target video is more accurate. Further, when the preset retrieval condition is used for detecting different scenes, the user can draw different retrieval areas according to actual conditions, so that the retrieval is more convenient, and the retrieval result is more flexible. Furthermore, the retrieval result is displayed in the form of an intelligent event and an intelligent event list, so that the user can check the retrieval result more conveniently.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating an implementation of a method for retrieving video according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a rectangle corresponding to any one of the objects provided in the present application;
FIG. 3 is a schematic diagram illustrating an angle between a rectangle and a warning line;
FIG. 4 is a flowchart illustrating an implementation of a method for retrieving videos according to another embodiment of the present application;
fig. 5 is a schematic diagram of an apparatus for retrieving videos according to an embodiment of the present application.
Fig. 6 is a schematic diagram of an apparatus according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for retrieving video according to an embodiment of the present invention. The main execution body of the method for retrieving videos in this embodiment is a device for retrieving videos, and the device includes, but is not limited to, a Network Video Recorder (NVR), a mobile phone, a computer, an intelligent mobile terminal, and the like. In this embodiment, NVR is taken as an example for explanation. The method of retrieving a video as shown in fig. 1 may comprise:
S101: acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video.
The NVR acquires the intelligent data to be retrieved; the intelligent data is obtained by preprocessing the original video. Specifically, the intelligent data to be retrieved may be data obtained by an intelligent device, such as a network camera (IP Camera, IPC), a mobile phone or a camera, preprocessing the original video; the intelligent device sends the obtained intelligent data to the device for retrieving video, for example to an NVR, and the NVR receives the intelligent data to be retrieved sent by the intelligent device. Alternatively, the intelligent data to be retrieved may be obtained by an independent image processing device or program that reads an existing original video from a memory and preprocesses it.
The embodiment takes IPC as an example to explain how the NVR obtains the intelligent data to be retrieved. Specifically, the IPC generates an original video, such as an original surveillance video, and the IPC performs recognition processing on each frame image in the original video to obtain the intelligent data to be retrieved. For example, the intelligent data is obtained at about 15 frames per second, which is only an exemplary illustration and is not limited thereto. The identification processing of the images in the original video by the IPC includes identifying objects in each frame of image, for example, identifying people, trees, vehicles, obstacles, and the like in the images; and acquiring image information corresponding to each identified object, wherein the image information comprises the size and the position of the rectangle of each object. The IPC sends the obtained intelligent data to NVR associated with the IPC; and the NVR receives the intelligent data to be retrieved, which is sent by the IPC. Further, the NVR may set device identification information for the IPC sending the intelligent data, for identifying the IPC device.
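As an illustrative sketch only, one plausible shape for a smart-data message sent from the IPC to the NVR is a serialized record carrying a device identifier, a timestamp, and the rectangle of each identified object; all field names here are assumptions, not specified by the application:

```python
import json

# Hypothetical shape of one smart-data message from the IPC to the NVR;
# the field names are illustrative assumptions, not from the application.
message = {
    "device_id": "ipc-01",           # identifier the NVR sets for this IPC
    "timestamp": 1577836800.0,       # when the frame was captured
    "objects": [                     # objects identified in the frame
        {"label": "person",  "rect": {"x": 120, "y": 80,  "w": 40, "h": 110}},
        {"label": "vehicle", "rect": {"x": 300, "y": 150, "w": 90, "h": 60}},
    ],
}
payload = json.dumps(message)        # serialized form sent to the NVR
```

On receipt, the NVR would parse such messages and index them by device identifier and timestamp for later retrieval.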
S102: and acquiring each target data meeting preset retrieval conditions from the intelligent data.
The NVR acquires each target data meeting the preset retrieval condition from the intelligent data. Specifically, the intelligent data may include the image information corresponding to each object identified from the original video, the image information including the size and position of the rectangle of the respective object. The size of the rectangle of each object is determined case by case: it may be larger or smaller than the object, and is not limited here; the position of the rectangle of each object may be used to represent the position of the object. When the preset retrieval condition is used for boundary-crossing detection, the NVR may determine a warning line in the preset area, the warning line being used for judging whether each object crosses the boundary, and the NVR acquires the target data corresponding to each out-of-range object in the process of crossing the warning line. When the preset retrieval condition is used for regional intrusion detection, the NVR acquires a preset warning region, determines the intruding objects based on the rectangle corresponding to each object and the warning region, and acquires the target data corresponding to each intruding object when it intrudes into the warning region. When the preset retrieval condition is used for movement detection, the NVR acquires a preset movement detection area, determines the objects moving in the movement detection area based on the rectangle corresponding to each object and that area, and acquires the target data corresponding to each moving object while it moves.
Further, in order to increase the retrieval speed, accurately acquire the target data meeting the preset retrieval conditions, and thus accurately retrieve the target video required by the user: when the preset retrieval condition is used for border-crossing detection, S102 may include S1021-S1022; when the preset retrieval condition is used for regional intrusion detection, S102 may include S1023-S1025; and when the preset retrieval condition is used for movement detection, S102 may include S1026-S1028, which are specifically as follows:
S1021: determining a warning line in a preset area; the warning line is used for judging whether each object is out of range.
The smart data may include image information corresponding to each object identified from the original video, the image information including the size and position of the rectangle of the respective object, it being understood that the rectangle corresponding to the object may represent the size and position of the object. When the preset retrieval condition is used for boundary crossing detection, the NVR determines a warning line in the preset area. Specifically, the preset area is a preset retrieval area, and a user can draw a warning line in the area based on the NVR device, wherein the warning line is used for judging whether each object is out of range; in other words, the warning line can determine which objects are out of range.
S1022: Determining a first object which crosses the boundary based on the warning line, and acquiring target data corresponding to the first object in the process of crossing the warning line; the first object is an object, among the objects, that crosses the warning line.
The NVR determines the first object crossing the boundary based on the warning line and acquires target data corresponding to the first object in the process of crossing the warning line. Wherein the first object refers to an object crossing the alert line among all objects identified from the original video. Specifically, the smart data may include image information corresponding to each object identified from the original video, the image information including the size and position of the rectangle of the respective object. The NVR determines the object that crosses the fence based on the information contained in the smart data and the fence. For example, according to the warning line, the position information corresponding to each object, and the image information corresponding to each object, determining which objects have the same movement direction as the preset boundary crossing direction, and acquiring the image information corresponding to the objects; further, image information corresponding to an object which is not intersected with the warning line is removed to obtain new image information; and calculating a vector included angle between a rectangle corresponding to the object information and the warning line, judging whether the objects cross the boundary according to the vector included angle, and acquiring target data corresponding to the boundary-crossing objects in the process of crossing the warning line when the judgment result is boundary crossing.
Further, in order to accurately determine the boundary-crossing object and further accurately acquire the target data corresponding to the boundary-crossing object in the process of crossing the warning line, S1022 may include S10221-S10224, specifically as follows:
S10221: determining first image information based on the warning line and each of the image information; the first image information is image information corresponding to an object whose motion direction is the same as a preset border-crossing direction; the preset border-crossing direction is used for representing the direction crossing the warning line.
The NVR determines the first image information based on the warning line and each piece of image information. The first image information is the image information corresponding to objects whose motion direction is the same as the preset border-crossing direction; the preset border-crossing direction is used to represent the direction crossing the warning line. For example, the preset border-crossing direction may be from the left side of the warning line to its right side, from the right side to its left side, or both, which is not limited herein.
Specifically, the position of each object at each moment can be represented by the rectangle corresponding to the object. The NVR analyzes the center points of the rectangle corresponding to each object at N consecutive moments to obtain the motion trajectory of each object, determines the motion direction of each object from its trajectory, and then judges whether that motion direction is the same as the preset border-crossing direction; the image information corresponding to the objects whose motion direction is the same as the preset border-crossing direction is then acquired, that is, the first image information is obtained.
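The direction check just described can be sketched with a cross-product side test: compare which side of the warning line the rectangle's center occupies at the first and last of the N moments. The function names and the convention that a positive outer product means "left" are illustrative assumptions, not taken from the application:

```python
def side_of_line(p, a, b):
    """Sign of the outer product (b - a) x (p - a): positive on one side
    of the directed warning line a->b, negative on the other, 0 on it."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def matches_crossing_direction(centers, a, b, direction="left_to_right"):
    """Judge whether a trajectory of rectangle centers at N consecutive
    moments moves in the preset border-crossing direction relative to
    the warning line a->b ('left' = positive outer product here)."""
    first = side_of_line(centers[0], a, b)
    last = side_of_line(centers[-1], a, b)
    if direction == "left_to_right":
        return first > 0 > last
    if direction == "right_to_left":
        return first < 0 < last
    return first * last < 0  # "both": any change of side matches
```

For a vertical warning line from (0, 0) to (0, 10), a trajectory whose centers move from negative x to positive x matches the left-to-right direction under this convention.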
S10222: and based on the first image information, eliminating image information corresponding to the object which is not intersected with the warning line to obtain second image information.
Based on the first image information, the NVR eliminates the image information corresponding to the objects that do not intersect the warning line, obtaining the second image information. Specifically, the NVR determines which of the objects corresponding to the first image information do not intersect the warning line, and removes their image information from the first image information to obtain the second image information. Whether an object intersects the warning line can be judged by whether the rectangle corresponding to the object intersects the warning line: when the rectangle corresponding to the object intersects the warning line, the object is judged to intersect the warning line; when it does not, the object is judged not to intersect the warning line. As shown in fig. 2, the rectangle corresponding to a certain object divides the surrounding plane into 4 regions, namely region 1, region 2, region 3 and region 4. If the warning line lies entirely within one of these 4 regions, the object corresponding to the rectangle is judged not to intersect the warning line; otherwise, the object corresponding to the rectangle may intersect the warning line, and the following process is executed to judge further. It should be noted that whether the warning line intersects the rectangle corresponding to an object is judged according to the actual length of the warning line; that is, an object cannot be judged to intersect the warning line merely because the extension line of the warning line intersects the rectangle corresponding to the object.
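This elimination step amounts to a segment-versus-rectangle intersection test that honors the warning line's actual endpoints, never its extension. A standard sketch, assuming axis-aligned rectangles given as (x, y, w, h) - an assumption, since the application does not fix a rectangle representation:

```python
def segment_intersects_rect(a, b, rect):
    """Does the warning-line segment a-b intersect the axis-aligned
    rectangle (x, y, w, h)? Only the actual segment is tested, so an
    extension line crossing the rectangle never counts."""
    x, y, w, h = rect
    # Quick reject: both endpoints in the same outer half-plane (one of
    # the regions around the rectangle) means no intersection.
    if max(a[0], b[0]) < x or min(a[0], b[0]) > x + w:
        return False
    if max(a[1], b[1]) < y or min(a[1], b[1]) > y + h:
        return False
    # Confirm: the four corners must not all lie strictly on one side
    # of the segment's supporting line.
    def cross(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    signs = [cross(p) for p in ((x, y), (x + w, y), (x, y + h), (x + w, y + h))]
    return min(signs) <= 0 <= max(signs)
```

The bounding-box reject plus corner-side confirmation together give an exact segment-versus-rectangle test, matching the "4 regions" quick check and the extension-line caveat described above.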
S10223: calculating a vector included angle between a rectangle corresponding to the second object and the warning line, and judging whether the second object is out of range or not based on the vector included angle; the second object is an object corresponding to the second image information.
The NVR calculates the vector included angles between the rectangle corresponding to the second object and the warning line, and judges whether the second object is out of range based on these vector included angles; the second object is the object corresponding to the second image information. As shown in fig. 3, an endpoint of the warning line and the four vertices of the rectangle corresponding to the second object are selected, and four vector line segments are drawn, yielding four vector included angles. The directionality of each vector included angle is calculated from the vector outer product, and each included angle is judged to lie above or below the warning line. When the four vector included angles all lie on the same side, the object is judged not to have crossed the boundary; when they lie on different sides, the object is judged to be out of range. In fig. 3, of the vector included angles formed by the rectangle corresponding to the left object, one lies above the warning line and three lie below it, so the left object is out of range; all four vector included angles formed by the rectangle corresponding to the right object lie above the warning line, so the right object is not out of range.
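A sketch of this vector included-angle judgment: the sign of the outer product between the line direction and the vector from the warning line's endpoint to each of the rectangle's four vertices tells which side of the line that vertex's angle lies on; mixed signs mean the object is out of range. Names and the (x, y, w, h) rectangle representation are illustrative assumptions:

```python
def rect_crosses_line(rect, a, b):
    """Out-of-range judgment from the description: form vectors from the
    warning line's endpoint a to the rectangle's four vertices, and use
    the outer product with the line direction a->b to decide whether each
    vertex lies above or below the line. All four on one side: not out
    of range; vertices on both sides: out of range."""
    x, y, w, h = rect
    d = (b[0] - a[0], b[1] - a[1])                  # warning-line direction
    sides = []
    for p in ((x, y), (x + w, y), (x, y + h), (x + w, y + h)):
        v = (p[0] - a[0], p[1] - a[1])              # endpoint -> vertex
        sides.append(d[0] * v[1] - d[1] * v[0])     # outer-product sign
    return min(sides) < 0 < max(sides)              # mixed sides => crossing
```

For a horizontal warning line at y = 5, a rectangle spanning y = 2 to y = 8 has vertices on both sides and is judged out of range, while one lying entirely above it is not.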
S10224: and when the judgment result is that the second object is out of range, acquiring target data corresponding to the second object in the process of crossing the warning line.
And when the judgment result is that the second object is out of range, the NVR acquires target data corresponding to the second object in the process of crossing the warning line. And when the judgment result is that the second object does not cross the boundary, the NVR does not process the object. Specifically, when the judgment result is that the second object is out of range, the NVR acquires data corresponding to the second object before starting to cross the warning line, when crossing the warning line, and just after crossing the warning line, and records the data as target data corresponding to the second object in the process of crossing the warning line. The target data may include position information, image information, rectangles, etc. corresponding to the second object at different times during crossing of the fence.
S1023: and acquiring a preset warning area.
When the preset retrieval condition is used for area intrusion detection, the NVR acquires the preset alert area. Specifically, the alert area may be an area drawn by the user on the NVR device and used to determine whether an object intrudes into it.
S1024: determining a target rectangle overlapping the alert zone based on the rectangle of each of the objects and the alert zone.
The NVR determines the target rectangle overlapping the alert area based on the rectangle corresponding to each object and the preset alert area. Specifically, whether the rectangle corresponding to each object overlaps the alert area is judged, and when the rectangle corresponding to an object overlaps the alert area, that rectangle is marked as a target rectangle. Overlapping includes both partial and full overlap: as long as the rectangle corresponding to an object partially or fully overlaps the alert area, the object is judged to have intruded into the alert area. In other words, a target rectangle is the rectangle corresponding to an object that has intruded into the area. Specifically, whether the rectangle corresponding to each object overlaps the alert area may be determined by checking whether its four vertices are located in the alert area: when all four vertices are in the alert area, the rectangle is judged to fully overlap the alert area; when none of the four vertices is in the alert area, the rectangle is judged not to overlap the alert area; and when only some of the four vertices are in the alert area, the rectangle is judged to partially overlap the alert area.
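The four-vertex test of S1024 can be sketched as follows, assuming an axis-aligned rectangular alert area; all names are illustrative assumptions. Note that a vertex-only test, as described, does not detect the special case where the alert area lies entirely inside the object's rectangle.

```python
# Sketch of the vertex-count overlap test for area intrusion detection.
# zone = (x_min, y_min, x_max, y_max); names are illustrative.

def point_in_zone(p, zone):
    """True if point p lies inside the axis-aligned zone."""
    x, y = p
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def overlap_kind(rect_vertices, zone):
    """Classify overlap by how many of the four vertices fall in the zone:
    4 -> 'full', 1-3 -> 'partial', 0 -> 'none'."""
    inside = sum(point_in_zone(v, zone) for v in rect_vertices)
    if inside == 4:
        return "full"
    if inside > 0:
        return "partial"
    return "none"

zone = (0, 0, 10, 10)
print(overlap_kind([(2, 2), (4, 2), (2, 4), (4, 4)], zone))        # full
print(overlap_kind([(8, 8), (12, 8), (8, 12), (12, 12)], zone))    # partial
print(overlap_kind([(20, 20), (22, 20), (20, 22), (22, 22)], zone))  # none
```

An object whose rectangle classifies as "full" or "partial" would be marked as having a target rectangle, i.e. as intruding into the alert area.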
S1025: and acquiring target data corresponding to the object corresponding to the target rectangle when the object invades the warning area.
The NVR acquires the target data corresponding to the object of the target rectangle while that object intrudes into the alert area. Specifically, the NVR acquires the data corresponding to the object just before it starts to intrude into the alert area, while it is inside the alert area, and when it leaves the alert area, and records these as the target data corresponding to the object in the process of intruding into the alert area. The target data may include the position information, image information, rectangle, and the like corresponding to the object at different moments during the intrusion.
S1026: and acquiring a preset movement detection area.
When the preset retrieval condition is used for movement detection, the NVR acquires the preset movement detection areas. Specifically, a movement detection area may be an area drawn by the user on the NVR device and used to determine whether an object moves within it. The number of movement detection areas is not limited; for example, 16 movement detection areas may be drawn, though this number is merely illustrative.
S1027: determining a third object based on the rectangle of each of the objects and the movement detection area; the third object is an object that moves in the movement detection area among each of the objects.
The NVR determines the third object based on the rectangle corresponding to each object and the preset movement detection area, where the third object is an object moving in the preset movement detection area. Specifically, the NVR judges whether each object moves and discards static objects; it then judges whether the rectangle corresponding to each moving object intersects the preset movement detection area. When the rectangle corresponding to a moving object intersects the preset movement detection area, the object is judged to move in that area; when it does not intersect the area, the object is judged not to move in that area.
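The intersection test of S1027 can be sketched as follows, assuming both the object rectangle and each movement detection area are axis-aligned boxes; all names are illustrative assumptions.

```python
# Sketch of the rectangle/detection-area intersection test for movement
# detection. Boxes are (x_min, y_min, x_max, y_max); names are illustrative.

def boxes_intersect(a, b):
    """Two axis-aligned boxes intersect unless one lies entirely to the
    left/right of or above/below the other."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def third_objects(moving_rects, detection_areas):
    """Keep only the moving objects whose rectangle intersects at least
    one movement detection area (static objects are assumed pre-filtered)."""
    return [r for r in moving_rects
            if any(boxes_intersect(r, area) for area in detection_areas)]

areas = [(0, 0, 5, 5), (10, 10, 15, 15)]
moving = [(4, 4, 6, 6), (7, 7, 8, 8)]
print(third_objects(moving, areas))  # [(4, 4, 6, 6)]
```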
S1028: and acquiring corresponding target data when the third object moves.
The NVR acquires the target data corresponding to the third object while the third object moves. Specifically, the NVR acquires the data corresponding to the third object just before, during, and just after its movement in the movement detection area, and records these as the target data corresponding to the third object during its movement in the movement detection area. The target data may include the position information, image information, rectangle, and the like corresponding to the third object at different moments during that movement.
S103: and acquiring a time period corresponding to each target data.
The NVR acquires the time period corresponding to each target data. The target data comprises image information of different objects in different scenes at different moments; the start time and end time among these moments can be obtained, and the time period corresponding to the target data is calculated from them.
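Step S103 can be sketched as taking the earliest and latest timestamps carried by one piece of target data; the field names and the flat list-of-dicts layout are assumptions for illustration.

```python
# Sketch of deriving a [start, end] time period from one piece of target
# data. The "time"/"pos" field names are illustrative assumptions.

target_data = [
    {"time": 12.0, "pos": (1, 2)},
    {"time": 12.5, "pos": (2, 2)},
    {"time": 14.0, "pos": (3, 2)},
]

times = [d["time"] for d in target_data]
period = (min(times), max(times))
print(period)  # (12.0, 14.0)
```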
S104: and extracting the target video corresponding to each time period from the original video.
The NVR extracts the target video corresponding to each time period from the original video. Specifically, the NVR stores the original video in advance, and extracts a different target video from the original video according to the time period corresponding to each target data.
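Step S104 can be sketched as selecting the frames of the stored original video whose timestamps fall inside a retrieved time period; the (timestamp, frame) representation is an assumption for illustration.

```python
# Sketch of extracting a target video: keep the frames of the original
# video whose timestamps lie in the retrieved period. The frame
# representation is an illustrative assumption.

original_video = [(t * 0.5, f"frame{i}") for i, t in enumerate(range(10))]
period = (1.0, 3.0)

target_video = [(t, f) for (t, f) in original_video if period[0] <= t <= period[1]]
print([t for t, _ in target_video])  # [1.0, 1.5, 2.0, 2.5, 3.0]
```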
According to this embodiment of the application, the intelligent data to be retrieved is acquired; each target data meeting the preset retrieval conditions is acquired from the intelligent data; the time period corresponding to each target data is acquired; and the target video corresponding to each time period is extracted from the original video. In this manner, the video retrieval device retrieves the acquired intelligent data to obtain the target data meeting the conditions, and then extracts the target video from the original video according to the time period corresponding to that target data. The intelligent data is obtained by preprocessing the original video and comprises image information corresponding to each object identified from the original video, including the size and position of each object's rectangle. The device for retrieving videos does not need to analyze the original video directly but retrieves based on the intelligent data; because the data volume of the intelligent data is small, the resource occupancy is greatly reduced, the retrieval process is simple, retrieval time is saved, and retrieval is faster. Moreover, because the target video is extracted from the original video according to the time period of the qualifying target data, the retrieved target video is more accurate. Further, when the preset retrieval conditions are used to detect different scenes, the user can draw different retrieval areas according to the actual situation, making retrieval more convenient and the retrieval results more flexible.
Referring to fig. 4, fig. 4 is a schematic flow chart of a method for retrieving video according to another embodiment of the present invention. The main execution body of the method for retrieving videos in the embodiment is equipment for retrieving videos, and the equipment includes, but is not limited to, a network hard disk video recorder, a mobile phone, a computer, an intelligent mobile terminal and the like. In this embodiment, NVR is taken as an example for explanation.
The difference between this embodiment and the previous embodiment is S205-S206, where S201-S204 in this embodiment are completely the same as S101-S104 in the previous embodiment, and reference is specifically made to the description of S101-S104 in the previous embodiment, which is not repeated herein.
Further, in order to facilitate the user to view the retrieved target video, S205-S206 are further included after S204, which is as follows:
S205: Generating an intelligent event corresponding to each target video based on each target video and the original video; the intelligent event is used for representing an event of an object corresponding to the target video in a target time period corresponding to the target video.
The NVR generates an intelligent event corresponding to each target video based on that target video and the original video; the intelligent event is used for representing an event of the object corresponding to the target video within the target time period corresponding to the target video. Specifically, the NVR intercepts, from the original video, the M seconds of video before and after the time period corresponding to the target video, and generates the intelligent event corresponding to the target video based on the intercepted video and the target video. The target time period corresponding to the target video is generated by adding M seconds before and after the time period corresponding to the target video. The intelligent event may be used to represent a boundary-crossing event, an area intrusion event, a movement event in the movement detection area, or the like, of the object corresponding to the target video within its target time period.
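The M-second padding described above can be sketched as follows; the value of M, the function name, and the clamping to the recording's bounds are illustrative assumptions.

```python
# Sketch of building an intelligent event's target time period by padding
# the retrieved period with M seconds on each side, clamped to the
# recording bounds. All names and values are illustrative assumptions.

def target_period(start, end, m, video_start=0.0, video_end=float("inf")):
    """Extend [start, end] by m seconds on each side, without leaving
    the original recording."""
    return (max(video_start, start - m), min(video_end, end + m))

print(target_period(120.0, 130.0, m=5))  # (115.0, 135.0)
print(target_period(2.0, 10.0, m=5))     # (0.0, 15.0) -- clamped at the start
```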
Further, in order to make it convenient for the user to visually check the intelligent events that have occurred, the following may be performed after S205: generating an intelligent event list based on each intelligent event, and displaying the intelligent event list in a preset display area.
Specifically, the NVR combines all smart events to generate a smart event list, and displays the smart event list in a preset display area. For example, the preset display area may be the right side of the display area of the NVR device, and the smart event list is displayed on the right side of the display area.
S206: and drawing and marking a progress bar corresponding to each intelligent event based on the original video.
The NVR draws and marks the progress bar corresponding to each intelligent event based on the original video. Specifically, each intelligent event has a corresponding target time period; the same time period is located on the original video's timeline, and a progress bar is drawn below it, that is, the progress bar corresponding to the intelligent event is drawn below the original video. Further, to make the progress bar more prominent and easier for the user to see, it may be highlighted in yellow.
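Mapping an event's target time period onto a progress bar under the video timeline can be sketched as a simple proportional conversion; the pixel arithmetic and names are illustrative assumptions, not the patent's rendering code.

```python
# Sketch of mapping an intelligent event's target time period to a pixel
# range on a progress bar below the original video. Names are illustrative.

def event_bar(period, video_duration, bar_width_px):
    """Convert a [start, end] period in seconds into (x0, x1) pixel
    offsets along a bar of bar_width_px pixels."""
    start, end = period
    x0 = round(start / video_duration * bar_width_px)
    x1 = round(end / video_duration * bar_width_px)
    return (x0, x1)

# A 30 s event starting at 30 s in a 10-minute video, on a 1000 px bar:
print(event_bar((30.0, 60.0), video_duration=600.0, bar_width_px=1000))  # (50, 100)
```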
According to this embodiment of the application, the intelligent data to be retrieved is acquired; each target data meeting the preset retrieval conditions is acquired from the intelligent data; the time period corresponding to each target data is acquired; and the target video corresponding to each time period is extracted from the original video. In this manner, the video retrieval device retrieves the acquired intelligent data to obtain the target data meeting the conditions, and then extracts the target video from the original video according to the time period corresponding to that target data. The intelligent data is obtained by preprocessing the original video and comprises image information corresponding to each object identified from the original video, including the size and position of each object's rectangle. The device for retrieving videos does not need to analyze the original video directly but retrieves based on the intelligent data; because the data volume of the intelligent data is small, the resource occupancy is greatly reduced, the retrieval process is simple, retrieval time is saved, and retrieval is faster. Moreover, because the target video is extracted from the original video according to the time period of the qualifying target data, the retrieved target video is more accurate. Further, when the preset retrieval conditions are used to detect different scenes, the user can draw different retrieval areas according to the actual situation, making retrieval more convenient and the retrieval results more flexible. Furthermore, the retrieval results are displayed in the form of intelligent events and an intelligent event list, making it more convenient for the user to view them.
Referring to fig. 5, fig. 5 is a schematic diagram of an apparatus for retrieving video according to an embodiment of the present disclosure. The device comprises units for performing the steps in the embodiments corresponding to fig. 1 and 4. Please refer to fig. 1 and fig. 4 for the corresponding embodiments. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 5, it includes:
a first obtaining unit 310, configured to obtain the intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
a second obtaining unit 320, configured to obtain each target data meeting a preset retrieval condition from the intelligent data;
a third obtaining unit 330, configured to obtain a time period corresponding to each piece of target data;
an extracting unit 340, configured to extract a target video corresponding to each time period from the original video.
The intelligent data comprises image information corresponding to each object identified from the original video; the image information includes a size and a position of a rectangle of the object.
Further, the second obtaining unit 320 includes:
the first determining unit is used for determining a warning line in a preset area; the warning line is used for judging whether each object is out of range;
the second determining unit is used for determining a first object crossing the boundary based on the warning line and acquiring target data corresponding to the first object in the process of crossing the warning line; the first object is an object of each of the objects that crosses the alert line.
Further, when the preset retrieval condition is used for boundary crossing detection, the second determining unit is specifically configured to:
determining first image information based on the warning line and each of the image information; the first image information is image information corresponding to an object with the same motion direction and a preset border crossing direction; the preset border crossing direction is used for representing the direction crossing the warning line;
based on the first image information, eliminating image information corresponding to an object which is not intersected with the warning line to obtain second image information;
calculating a vector included angle between a rectangle corresponding to the second object and the warning line, and judging whether the second object is out of range or not based on the vector included angle; the second object is an object corresponding to the second image information;
and when the judgment result is that the second object is out of range, acquiring target data corresponding to the second object in the process of crossing the warning line.
Further, when the preset retrieval condition is used for detecting an intrusion into a region, the second obtaining unit 320 is specifically configured to:
acquiring a preset warning area;
determining a target rectangle overlapping the alert zone based on the rectangle of each of the objects and the alert zone;
and acquiring target data corresponding to the object corresponding to the target rectangle when the object invades the warning area.
Further, when the preset retrieving condition is used for motion detection, the second obtaining unit 320 is specifically configured to:
acquiring a preset movement detection area;
determining a third object based on the rectangle of each of the objects and the movement detection area; the third object is an object that moves in the movement detection area among each of the objects;
and acquiring corresponding target data when the third object moves.
Further, the apparatus further comprises:
the first generation unit is used for generating an intelligent event corresponding to each target video based on each target video and the original video; the intelligent event is used for representing an event of an object corresponding to the target video in a target time period corresponding to the target video;
and the drawing unit is used for drawing and marking the progress bar corresponding to each intelligent event based on the original video.
Further, the apparatus further comprises:
and the second generating unit is used for generating an intelligent event list based on each intelligent event and displaying the intelligent event list in a preset display area.
Referring to fig. 6, fig. 6 is a schematic diagram of an apparatus for retrieving video according to another embodiment of the present application. As shown in fig. 6, the apparatus 4 of this embodiment includes: a processor 40, a memory 41, and computer readable instructions 42 stored in the memory 41 and executable on the processor 40. When executing the computer readable instructions 42, the processor 40 implements the steps in the above-described embodiments of the method for retrieving video, such as S101-S104 shown in fig. 1; alternatively, the processor 40 implements the functions of the units in the embodiments described above, such as the functions of units 310 to 340 shown in fig. 5.
Illustratively, the computer readable instructions 42 may be divided into one or more units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more units may be a series of computer readable instruction segments capable of performing specific functions, used to describe the execution of the computer readable instructions 42 in the apparatus 4. For example, the computer readable instructions 42 may be divided into a first obtaining unit, a second obtaining unit, a third obtaining unit, and an extracting unit, with the specific functions of each unit as described above.
The apparatus may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 6 is merely an example of a device 4 and does not constitute a limitation of device 4 and may include more or fewer components than shown, or some components in combination, or different components, e.g., the device may also include input output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or a memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the device 4. The memory 41 is used to store the computer readable instructions and other programs and data required by the device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting them; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. A method for retrieving video, comprising:
acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
acquiring each target data meeting preset retrieval conditions from the intelligent data;
acquiring a time period corresponding to each target data;
and extracting the target video corresponding to each time period from the original video.
2. The method of claim 1, wherein the intelligent data comprises image information corresponding to each object identified from the original video; the image information includes a size and a position of a rectangle of the object.
3. The method of claim 2, wherein when the preset search condition is used for out-of-range detection, the obtaining of each target data meeting the preset search condition from the intelligent data includes:
determining a warning line in a preset area; the warning line is used for judging whether each object is out of range;
determining a first object which crosses the boundary based on the warning line, and acquiring target data corresponding to the first object in the process of crossing the warning line; the first object is an object, of each of the objects, that crosses the warning line.
4. The method according to claim 3, wherein the determining the first object that is out of range based on the warning line, and the acquiring the target data corresponding to the first object in the process of crossing the warning line, comprises:
determining first image information based on the warning line and each of the image information; the first image information is image information corresponding to an object with the same motion direction and a preset border crossing direction; the preset border crossing direction is used for representing the direction crossing the warning line;
based on the first image information, eliminating image information corresponding to an object which is not intersected with the warning line to obtain second image information;
calculating a vector included angle between a rectangle corresponding to the second object and the warning line, and judging whether the second object is out of range or not based on the vector included angle; the second object is an object corresponding to the second image information;
and when the judgment result is that the second object is out of range, acquiring target data corresponding to the second object in the process of crossing the warning line.
5. The method of claim 2, wherein when the preset search condition is used for regional intrusion detection, the obtaining of each target data meeting the preset search condition from the intelligent data includes:
acquiring a preset warning area;
determining a target rectangle overlapping the alert zone based on the rectangle of each of the objects and the alert zone;
and acquiring target data corresponding to the object corresponding to the target rectangle when the object invades the warning area.
6. The method according to claim 2, wherein when the preset search condition is used for the movement detection, the obtaining of each target data meeting the preset search condition from the intelligent data includes:
acquiring a preset movement detection area;
determining a third object based on the rectangle of each of the objects and the movement detection area; the third object is an object that moves in the movement detection area among each of the objects;
and acquiring corresponding target data when the third object moves.
7. The method according to any one of claims 1 to 6, wherein after extracting the target video corresponding to each of the time periods from the original video, the method further comprises:
generating an intelligent event corresponding to each target video based on each target video and the original video; the intelligent event is used for representing an event of an object corresponding to the target video in a target time period corresponding to the target video;
and drawing and marking a progress bar corresponding to each intelligent event based on the original video.
8. The method of claim 7, wherein after generating the smart event corresponding to each of the target videos based on each of the target videos and the original video, further comprising:
and generating an intelligent event list based on each intelligent event, and displaying the intelligent event list in a preset display area.
9. An apparatus for retrieving video, comprising:
the first acquisition unit is used for acquiring intelligent data to be retrieved; the intelligent data is obtained by preprocessing an original video;
the second acquisition unit is used for acquiring each target data meeting the preset retrieval condition from the intelligent data;
a third obtaining unit, configured to obtain a time period corresponding to each piece of target data;
and the extracting unit is used for extracting the target video corresponding to each time period from the original video.
10. An apparatus comprising a memory, a processor, and computer readable instructions stored in the memory and executable on the processor, wherein the processor when executing the computer readable instructions implements the method of any one of claims 1 to 8.
CN201911403186.4A 2019-12-30 2019-12-30 Method and equipment for retrieving video Pending CN111104549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403186.4A CN111104549A (en) 2019-12-30 2019-12-30 Method and equipment for retrieving video


Publications (1)

Publication Number Publication Date
CN111104549A true CN111104549A (en) 2020-05-05

Family

ID=70424469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403186.4A Pending CN111104549A (en) 2019-12-30 2019-12-30 Method and equipment for retrieving video

Country Status (1)

Country Link
CN (1) CN111104549A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104270608A (en) * 2014-09-28 2015-01-07 武汉烽火众智数字技术有限责任公司 Intelligent video player and playing method thereof
CN104484457A (en) * 2014-12-29 2015-04-01 广州中国科学院软件应用技术研究所 Method and system for extracting and searching moving object in parallel video
US20180007429A1 (en) * 2015-01-26 2018-01-04 Hangzhou Hikvision Digital Technology Co., Ltd. Intelligent processing method and system for video data
CN108596129A (en) * 2018-04-28 2018-09-28 武汉盛信鸿通科技有限公司 A kind of vehicle based on intelligent video analysis technology gets over line detecting method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112650880A (en) * 2020-11-30 2021-04-13 重庆紫光华山智安科技有限公司 Video analysis method and device, computer equipment and storage medium
CN115630191A (en) * 2022-12-22 2023-01-20 成都纵横自动化技术股份有限公司 Time-space data set retrieval method and device based on full-dynamic video and storage medium
CN115630191B (en) * 2022-12-22 2023-03-28 成都纵横自动化技术股份有限公司 Time-space data set retrieval method and device based on full-dynamic video and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination