CN111382313A - Dynamic inspection data retrieval method, device and apparatus - Google Patents


Info

Publication number
CN111382313A
CN111382313A (application CN201811643320.3A)
Authority
CN
China
Prior art keywords
data
frame index
index data
frame
motion detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811643320.3A
Other languages
Chinese (zh)
Inventor
Chen Shenghui (陈升辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201811643320.3A priority Critical patent/CN111382313A/en
Publication of CN111382313A publication Critical patent/CN111382313A/en
Pending legal-status Critical Current

Abstract

The invention discloses a motion-detection data retrieval method, device, and apparatus, comprising the following steps: determining a first motion-detection region to be retrieved and a preset comparison count N of I-frame index data in the motion-detection region data; obtaining, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion-detection region data; after determining a third motion-detection region for each piece of first I-frame index data, obtaining second I-frame index data from the N pieces of I-frame index data before and/or after the first I-frame index data; and using the first I-frame index data together with the second I-frame index data, or the second I-frame index data alone, as the retrieved motion-detection data. With the invention, the picture no longer jumps frequently when the retrieved image data is played back.

Description

Dynamic inspection data retrieval method, device and apparatus
Technical Field
The present invention relates to the field of image processing technology, and in particular to a method, device, and apparatus for retrieving motion-detection data.
Background
Motion detection technology, often called motion detection for short, is widely used for unattended video surveillance and automatic alarms. Images captured by the camera at its frame rate are computed and compared by the CPU according to a given algorithm; when the picture changes, for example because a person walks by or the lens is moved, and the value computed from the comparison exceeds a threshold, the system is instructed to perform the corresponding processing automatically.
In plain terms, when a person walks in front of the lens or the lens is moved, a motion alarm is triggered and the camera sends a screenshot of the changed motion-detection area back to the control system on the PC side; mobile-phone clients are also supported.
When a user needs to perform a motion-detection search, a video is played, and after the motion-detection search button is clicked, a motion-detection area appears over the playback picture. After the position of interest in the motion-detection area is clicked, motion-detection region data is generated. All index data of the video on the hard disk is then read, the motion-detection region data is intersected with the motion-detection region data in each piece of index data, and if the result is true, that piece of index data is taken out. Finally, the motion-detection search video is played according to the offset positions in the extracted index data.
The drawback of this prior art is that the picture jumps frequently during playback after the search.
Disclosure of Invention
The invention provides a motion-detection data retrieval method, device, and apparatus, which are used to solve the problem of frequent picture jumping during playback after a search.
An embodiment of the invention provides a motion-detection data retrieval method, comprising the following steps:
determining a first motion-detection region to be retrieved and a preset comparison count N of I-frame index data in the motion-detection region data;
obtaining, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion-detection region data, where whether an intersection exists is determined from the first motion-detection region and a second motion-detection region in the I-frame index data;
after determining a third motion-detection region for each piece of first I-frame index data, obtaining second I-frame index data from the N pieces of I-frame index data before and/or after the first I-frame index data, where the third motion-detection region is generated by intersecting the Nth piece of I-frame index data with the (N-1)th piece, the motion-detection region data being counted as piece 0 and the first I-frame index data as piece 1; the second I-frame index data is an Nth piece of I-frame index data that intersects the (N-1)th piece; whether an intersection exists is determined from the third motion-detection region and the motion-detection region in the second I-frame index data;
and using the first I-frame index data together with the second I-frame index data, or the second I-frame index data alone, as the retrieved motion-detection data.
Preferably, the motion-detection region data is obtained by dividing the video picture with a 22 × 18 grid;
the first motion-detection region is the region within the motion-detection region data in which motion has occurred.
Preferably, the I-frame index data includes motion-detection region data and an offset position, where:
the motion-detection region data in the I-frame index data is the 22 × 18 motion-detection region data generated with reference to the current picture when the encoder detects a moving object, and an auxiliary motion-detection data frame is generated from this data and inserted into the video data;
when the storage server stores the auxiliary motion-detection data frame, the motion-detection region data is recorded first, and when I-frame data is detected, the motion-detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
Preferably, determining whether an intersection exists means performing a bitwise AND on each pair of corresponding bits in the two pieces of motion-detection region data; if at least one bit is true, the two pieces of motion-detection region data are determined to intersect.
Preferably, the method further comprises:
de-duplicating the first I-frame index data and/or the second I-frame index data before using it as the retrieved motion-detection data.
An embodiment of the invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following method:
determining a first motion-detection region to be retrieved and a preset comparison count N of I-frame index data in the motion-detection region data;
obtaining, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion-detection region data, where whether an intersection exists is determined from the first motion-detection region and a second motion-detection region in the I-frame index data;
after determining a third motion-detection region for each piece of first I-frame index data, obtaining second I-frame index data from the N pieces of I-frame index data before and/or after the first I-frame index data, where the third motion-detection region is generated by intersecting the Nth piece of I-frame index data with the (N-1)th piece, the motion-detection region data being counted as piece 0 and the first I-frame index data as piece 1; the second I-frame index data is an Nth piece of I-frame index data that intersects the (N-1)th piece; whether an intersection exists is determined from the third motion-detection region and the motion-detection region in the second I-frame index data;
and using the first I-frame index data together with the second I-frame index data, or the second I-frame index data alone, as the retrieved motion-detection data.
Preferably, the motion-detection region data is obtained by dividing the video picture with a 22 × 18 grid;
the first motion-detection region is the region within the motion-detection region data in which motion has occurred.
Preferably, the I-frame index data includes motion-detection region data and an offset position, where:
the motion-detection region data in the I-frame index data is the 22 × 18 motion-detection region data generated with reference to the current picture when the encoder detects a moving object, and an auxiliary motion-detection data frame is generated from this data and inserted into the video data;
when the storage server stores the auxiliary motion-detection data frame, the motion-detection region data is recorded first, and when I-frame data is detected, the motion-detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
Preferably, determining whether an intersection exists means performing a bitwise AND on each pair of corresponding bits in the two pieces of motion-detection region data; if at least one bit is true, the two pieces of motion-detection region data are determined to intersect.
Preferably, the method further comprises:
de-duplicating the first I-frame index data and/or the second I-frame index data before using it as the retrieved motion-detection data.
An embodiment of the invention provides a computer-readable storage medium storing a computer program for executing the above motion-detection data retrieval method.
An embodiment of the invention provides a motion-detection data retrieval device, comprising:
a retrieval parameter determining module, configured to determine a first motion-detection region to be retrieved and a preset comparison count N of I-frame index data in the motion-detection region data;
a first I-frame index data obtaining module, configured to obtain, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion-detection region data, where whether an intersection exists is determined from the first motion-detection region and a second motion-detection region in the I-frame index data;
a second I-frame index data obtaining module, configured to, after determining a third motion-detection region for each piece of first I-frame index data, obtain second I-frame index data from the N pieces of I-frame index data before and/or after the first I-frame index data, where the third motion-detection region is generated by intersecting the Nth piece of I-frame index data with the (N-1)th piece, the motion-detection region data being counted as piece 0 and the first I-frame index data as piece 1; the second I-frame index data is an Nth piece of I-frame index data that intersects the (N-1)th piece; whether an intersection exists is determined from the third motion-detection region and the motion-detection region in the second I-frame index data;
and a motion-detection data determining module, configured to use the first I-frame index data together with the second I-frame index data, or the second I-frame index data alone, as the retrieved motion-detection data.
The invention has the following beneficial effects:
The reason the existing scheme causes frequent picture jumps when a region is searched is that several pieces of I-frame index data, skipped because they have no intersection, are missing between two consecutively played pieces of I-frame index data. In other words, the image information recorded between two consecutively played hit I-frame indices is usually discontinuous in time, so the information shown during playback is also discontinuous for the viewer. In the technical solution provided by the embodiment of the invention, after the hit I-frame index data (the first I-frame index data) is determined from the search region (the first motion-detection region) specified by the user, further related I-frame index data (the second I-frame index data) is selected; this further-selected index data is used to restore image continuity. The second I-frame index data is selected by intersecting a third motion-detection region, generated from the intersection of the first I-frame index data with the motion-detection region data, with the motion-detection region in the second I-frame index data, so that the selection meets the requirement of image continuity during playback. Because continuity of the pictures is thereby ensured, the problem of frequent picture jumps during playback after a search is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of motion-detection region data in an embodiment of the present invention;
FIG. 2-1 is a schematic diagram of motion-detection region data 1 in an embodiment of the present invention;
FIG. 2-2 is a schematic diagram of motion-detection region data 2 in an embodiment of the present invention;
FIG. 2-3 is a schematic diagram of the AND operation on motion-detection region data 1 and motion-detection region data 2 in an embodiment of the present invention;
FIG. 2-4 is a schematic diagram of the newly generated motion-detection region data in an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an implementation of the motion-detection data retrieval method in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an implementation of retrieval and playback in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a motion-detection data retrieval device in an embodiment of the present invention.
Detailed Description
First, the scenario and concepts involved in the embodiments of the present invention are described.
Fig. 1 is a schematic diagram of motion-detection region data. As shown, motion-detection region data refers to the regions obtained by dividing a video frame with a 22 × 18 grid; in the motion-detection region data, cells where motion is detected are set to 1 and all other cells to 0. The cells marked 1 constitute the region where motion has occurred (called the motion-detection region in this application) and are determined by where motion occurs on the screen. For example, if several people are waving their arms in the picture, every cell covering a waving arm is marked 1. The motion-detection region data is recorded in the following structure.
The win array in the structure represents the rows, numbered 0 to 17 from top to bottom. Each row is represented by a 32-bit unsigned integer, with the rightmost cell at the lowest bit; bits beyond the supported number of columns are invalid. The motion-detection region data structure is recorded as follows:
[Structure definition shown as an image in the original publication: Figure BDA0001931552030000061.]
the I frame index data is composed of the offset position of I frame data in the hard disk and the data of the dynamic detection area. The I frame data structure is recorded as follows:
[Structure definition shown as an image in the original publication: Figure BDA0001931552030000062.]
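The two record layouts above appear only as images in the original publication. As a rough sketch, with all field names assumed rather than taken from the filing, equivalent structures might look like:

```python
from dataclasses import dataclass, field
from typing import List

ROWS, COLS = 18, 22  # the 22 x 18 grid described above

@dataclass
class MotionRegion:
    # win[0] is the top row; each row is one 32-bit unsigned value,
    # with the rightmost cell at the lowest bit. Bits >= COLS are invalid.
    win: List[int] = field(default_factory=lambda: [0] * ROWS)

@dataclass
class IFrameIndex:
    region: MotionRegion  # motion-detection region data for this I frame
    offset: int           # offset position of the I-frame data on the hard disk
```

This mirrors the description only; the actual field names and widths are defined in the figures of the original filing.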
and operating each corresponding bit in the two pieces of motion detection area data, and once one bit is established, the two pieces of motion detection area data have intersection. Fig. 2-1 is a schematic diagram of the motion detection region data 1, fig. 2-2 is a schematic diagram of the motion detection region data 2, fig. 2-3 is a schematic diagram of the operation and the operation of the motion detection region data 1 and the motion detection region data 2, and fig. 2-4 is a schematic diagram of newly generated motion detection region data, in which a portion with stripes is a portion with bit 1 in the motion detection region data.
As shown in the figure, when the motion detection region data 1 in fig. 2-1 intersects with the motion detection region data 2 in fig. 2-2, that is, the two valid data in fig. 2-3 are respectively subjected to and operation, and the grid part is the data of the two and operation 1, there is an intersection between the two motion detection region data, and new motion detection region data generated by the intersection between the two is as shown in fig. 2-4. The figure is intended to mean that a part intersecting with the data of the motion detection area in figure 2-1 is found, and the upper part in figure 2-2 is found to intersect with figure 2-1 by performing and operation with figure 2-2, but the lower part is not found to intersect with figure 2-1. The non-intersecting portions are removed, leaving the intersecting portions, i.e., the motion detection region data in fig. 2-4, so that fig. 2-4 is the portion where the motion detection region data in fig. 2-1 and fig. 2-2 intersect.
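The intersection test and the construction of the new region (Figs. 2-1/2-2 to 2-4) can be sketched as follows. This is a minimal illustration assuming each region is held as a list of 18 row integers; it is not code from the filing:

```python
def intersect(region_a, region_b):
    """Bitwise-AND each pair of corresponding rows of two regions.

    Returns (has_intersection, new_region): has_intersection is True if
    at least one bit survives the AND, and new_region keeps only the
    overlapping cells, like the newly generated data in Fig. 2-4.
    """
    new_region = [a & b for a, b in zip(region_a, region_b)]
    return any(new_region), new_region
```

For example, two regions that share a single cell intersect, and the resulting region contains only that shared cell.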
An existing back-end network video storage server (hereinafter, the storage server) receives and stores the video stream of a front-end video encoder (hereinafter, the encoder). The widely used H.264/H.265 encoding schemes may be employed, in which the video data frames are divided into I/P/B frames. The I-frame index data of a video is usually determined by the camera: typically the camera frame rate is 25 and the I-frame interval is 50 frames, i.e., one I frame every 2 seconds. The amount of I-frame index data in a video is therefore determined by the camera's encoding configuration and the length of the recording.
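With the example figures given above (frame rate 25, I-frame interval 50 frames, i.e., one I frame every 2 seconds), the number of I-frame index entries for a recording follows directly; a small sketch, with the function name assumed for illustration:

```python
def iframe_index_count(duration_seconds, frame_rate=25, iframe_interval=50):
    # seconds between consecutive I frames (2 s with the example values)
    seconds_per_iframe = iframe_interval / frame_rate
    return int(duration_seconds // seconds_per_iframe)
```

A one-hour recording at these settings thus yields 1800 I-frame index entries.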
When the encoder detects a moving object, it generates 22 × 18 motion-detection region data with reference to the current picture, generates an auxiliary motion-detection data frame from this data, inserts the frame into the video data, and sends it.
When the storage server stores the received data and determines that it is an auxiliary motion-detection data frame, the motion-detection region data is recorded first; when I-frame data is detected, the motion-detection region data and the offset position of the I frame on the hard disk together form I-frame index data, which is written ahead of the recording start position.
When a user needs to perform a motion-detection search, a video is played, and after the motion-detection search button is clicked, a 22 × 18 grid is superimposed on the playback picture. Clicking the positions of interest on the grid generates 22 × 18 motion-detection region data. The generated 22 × 18 motion-detection region data corresponds one-to-one with the 22 × 18 grid superimposed on the playback picture; concretely, after a grid cell of interest on the picture is clicked, the corresponding position in the 22 × 18 motion-detection region data is set to 1.
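The one-to-one mapping from clicked grid cells to the 22 × 18 region bitmask might be sketched as below. The row/bit orientation (row 0 at the top, rightmost column at the lowest bit) follows the structure description above, while the function name is an assumption:

```python
ROWS, COLS = 18, 22

def region_from_clicks(cells):
    """Build 22 x 18 motion-detection region data from clicked (row, col) cells."""
    win = [0] * ROWS
    for row, col in cells:
        if not (0 <= row < ROWS and 0 <= col < COLS):
            raise ValueError(f"cell out of range: {(row, col)}")
        # the rightmost column sits at the lowest bit of each row
        win[row] |= 1 << (COLS - 1 - col)
    return win
```

Clicking the top-right and bottom-left cells, for example, sets exactly one bit in the first and last rows respectively.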
All I-frame index data of the video on the hard disk is then read, the cells marked 1 in the user's motion-detection region data are intersected with the cells marked 1 in the motion-detection region data of each piece of I-frame index data, and if the result is true, i.e., the regions at least partially overlap, that I-frame index data is taken out. Finally, the motion-detection search video is played according to the offset positions in the extracted I-frame index data.
The drawback of this approach is that, because a single block of the picture is searched (the region selected by the user in the example above), the picture jumps frequently during playback after the search. The reason for the frequent jumps is that between two consecutively played pieces of I-frame index data there are usually several pieces of I-frame index data that are not played because they have no intersection; that is, the image information recorded between two consecutively played hit I-frame indices is usually discontinuous in time, so the displayed information is also discontinuous for the viewer. For example, targets 1 and 2 may pass through the area quickly, one after the other, but because the I-frame index data recorded before or after them does not belong to the hit data, it is not played. Under this scheme the two targets appear to flash past, their motion trajectories cannot be seen, and the user experience is poor.
On this basis, the present embodiment provides a technical solution that starts after the user inputs the desired motion-detection region data, described below with reference to the drawings.
In implementation, after the client inputs the desired motion-detection region data, the comparison count of I frames is also input. In the embodiment the default input is 2, but a larger number is also possible; the principle and scheme are described below, and a person skilled in the art will readily find ways to set the comparison count of I frames. The value 2 is therefore used only to teach a person skilled in the art how to implement the invention concretely, and does not mean that only that value can be used; the actual value may be determined as needed in implementation.
Fig. 3 is a schematic flow chart of an implementation of the motion-detection data retrieval method. As shown, the implementation may include:
Step 301: determining a first motion-detection region to be retrieved and a preset comparison count N of I-frame index data in the motion-detection region data;
Step 302: obtaining, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion-detection region data, where whether an intersection exists is determined from the first motion-detection region and a second motion-detection region in the I-frame index data;
Step 303: after determining a third motion-detection region for each piece of first I-frame index data, obtaining second I-frame index data from the N pieces of I-frame index data before and/or after the first I-frame index data, where the third motion-detection region is generated by intersecting the Nth piece of I-frame index data with the (N-1)th piece, the motion-detection region data being counted as piece 0 and the first I-frame index data as piece 1; the second I-frame index data is an Nth piece of I-frame index data that intersects the (N-1)th piece; whether an intersection exists is determined from the third motion-detection region and the motion-detection region in the second I-frame index data;
Step 304: using the first I-frame index data together with the second I-frame index data, or the second I-frame index data alone, as the retrieved motion-detection data.
In implementation, after the hit I-frame index data (the first I-frame index data) is determined from the search region (the first motion-detection region) specified by the user, further related I-frame index data (the second I-frame index data) is selected; this further-selected index data is used to restore image continuity. The second I-frame index data is selected by determining a third motion-detection region, generated by intersecting the first I-frame index data with the motion-detection region data, against the motion-detection region in the candidate second I-frame index data. In fact, any third motion-detection region may be used as long as it satisfies the requirement of image continuity during playback. Naturally, the selected second I-frame index data itself is not necessarily index data that hits the user-specified region (the first motion-detection region).
Specifically, if a list mode is adopted, all the I-frame index data of the video may be taken from the hard disk, and the first piece of I-frame index data intersected with the motion-detection region data desired by the client; if the two intersect, that I-frame index data is put into the list to be played. The new motion-detection region data generated by their intersection is then intersected with the previous piece of I-frame index data; if they intersect and the number of I frames compared so far is less than the comparison count N input by the client, that piece of I-frame index data is put into the list to be played; otherwise the search in that direction ends, and so on. After the search in one direction is completed, the other direction is searched in the same way. The retrieval for this piece of I-frame index data finishes once both the forward and backward searches are done.
The next piece of I-frame index data taken from the video is then processed, and these steps are repeated until all I-frame index data has been examined. Finally, the I-frame index data in the list to be played is de-duplicated and the motion-detection search video is played.
Obviously, the list mode is relatively easy to implement and well suited to computer processing, but it is not the only mode: for example, after all I-frame index data has been taken out, all first I-frame index data may be selected and each piece processed in parallel. Any mode capable of selecting the second I-frame index data may be adopted.
In implementation, the motion-detection region data is obtained by dividing the video picture with a 22 × 18 grid;
the first motion-detection region is the region within the motion-detection region data in which motion has occurred.
In implementation, the I-frame index data includes motion-detection region data and an offset position, where:
the motion-detection region data in the I-frame index data is the 22 × 18 motion-detection region data generated with reference to the current picture when the encoder detects a moving object, and an auxiliary motion-detection data frame is generated from this data and inserted into the video data;
when the storage server stores the auxiliary motion-detection data frame, the motion-detection region data is recorded first, and when I-frame data is detected, the motion-detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
In implementation, determining whether an intersection exists means performing a bitwise AND on each pair of corresponding bits in the two pieces of motion-detection region data; if at least one bit is true, the two pieces of motion-detection region data are determined to intersect.
In implementation, the method may further include:
de-duplicating the first I-frame index data and/or the second I-frame index data before using it as the retrieved motion-detection data.
In the following, the second I-frame index data is selected sequentially in list mode; since the forward and backward search steps are identical, the example illustrates only the forward search flow.
In the example:
the letter N denotes the comparison count of I frames, meaning that up to N I frames may be compared forward or backward during implementation;
the letters D1 and D2 denote motion-detection region data;
the letters L1 and L2 denote lists used to hold I-frame index data temporarily;
the letters I1 and I2 denote I-frame index data.
Fig. 4 is a schematic diagram of the retrieval and playback flow. As shown, the flow may include:
Step 401: the user inputs the motion-detection region and the comparison count N of I frames, producing motion-detection region data D1.
Specifically, D1 is a 22 × 18 region, and the motion-detection region selected by the user is the part of the 22 × 18 region marked 1.
Step 402: read all I-frame index data of the video into list L1.
Step 403: take the I-frame index data I1 out of L1 in sequence; if all entries have been taken out, go to step 409; otherwise go to step 404.
Specifically, each time an I-frame index datum I1 is fetched from L1, the flow proceeds to step 404, until every entry in L1 has been processed.
Step 404: intersect the motion detection region in the region data of I1 with the motion detection region in D1. If an intersection exists, go to step 405; otherwise go to step 403.
Step 405: put I1 into list L2, and compute the motion detection region of the new motion detection region data D2.
Specifically, the method of computing D2 is illustrated in Figs. 2-1 through 2-4 described above.
Step 406: take the I-frame index data preceding the current index from L1 as I2, and intersect the motion detection region in the region data of I2 with the motion detection region in D2.
Step 407: if an intersection exists and the number of I frames compared forward is smaller than N, go to step 408; otherwise go to step 403.
Step 408: put I2 into list L2, take the new motion detection region data generated by intersecting the region of I2 with the region of D2 as the new D2, set I2 as the current index, and go to step 406.
Step 409: sort the I-frame index data in list L2 chronologically and remove duplicates.
Step 410: play the video according to the I-frame index data in list L2.
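Steps 403 through 409 above can be sketched as follows. This is a minimal illustration under stated assumptions: `MDIndex` and its fields are hypothetical, only the backward comparison of step 406 is shown, and the D2 of step 405 is approximated here as the overlap with D1 (the patent computes it by the method of Figs. 2-1 to 2-4):

```python
from dataclasses import dataclass

@dataclass
class MDIndex:
    """Hypothetical I-frame index entry: region bitmask plus disk offset."""
    region: int       # 22x18 motion detection bitmask
    offset: int       # offset of the I-frame data on disk
    timestamp: float

def retrieve(l1, d1, n):
    """Steps 403-409: collect I-frame indexes whose regions chain back to D1."""
    l2 = []
    for pos, i1 in enumerate(l1):                  # step 403
        if i1.region & d1 == 0:                    # step 404: no intersection
            continue
        l2.append(i1)                              # step 405
        d2 = i1.region & d1                        # assumed form of D2
        cur, compared = pos, 0
        while cur > 0 and compared < n:            # steps 406-407
            i2 = l1[cur - 1]                       # I frame before current index
            if i2.region & d2 == 0:
                break
            l2.append(i2)                          # step 408
            d2 = i2.region & d2                    # shrink D2 to the overlap
            cur -= 1
            compared += 1
    # step 409: de-duplicate (keyed by disk offset) and sort chronologically
    return sorted({e.offset: e for e in l2}.values(), key=lambda e: e.timestamp)
```

Because D2 shrinks at each backward step, the chain stops as soon as motion no longer overlaps the region being tracked, which is what keeps playback from jumping between unrelated frames.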
Based on the same inventive concept, embodiments of the present invention further provide a motion detection data retrieval apparatus, a computer device, and a computer-readable storage medium. Because these devices solve the problem on principles similar to those of the motion detection data retrieval method, their implementation can refer to the implementation of the method, and repeated parts are not described again.
An embodiment of the invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the following method:
determining, in motion detection region data, a first motion detection region to be retrieved and a preset comparison count N of I-frame index data;
acquiring, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion detection region data, wherein whether an intersection exists is determined from the first motion detection region and a second motion detection region in the I-frame index data;
after a third motion detection region is determined for each first I-frame index data, obtaining second I-frame index data from the N I-frame index data before and/or after the first I-frame index data, wherein the third motion detection region is generated by intersecting the Nth I-frame index data with the (N-1)th I-frame index data; the motion detection region data is taken as the 0th (N = 0) I-frame index data and the first I-frame index data as the 1st (N = 1); the second I-frame index data is the Nth I-frame index data that intersects the (N-1)th I-frame index data; and whether an intersection exists is determined from the third motion detection region and the motion detection region in the second I-frame index data;
and using both the first I-frame index data and the second I-frame index data, or only the second I-frame index data, as the retrieved motion detection data.
In an implementation, the motion detection region data is region data obtained by dividing a video frame with a 22 × 18 grid;
the first motion detection region is the region of the motion detection region data in which motion was detected.
In an implementation, the I-frame index data includes motion detection region data and an offset position, where:
the motion detection region data in the I-frame index data is the 22 × 18 region data generated, relative to the current frame, when the encoder detects a moving object; an auxiliary motion detection data frame is generated from this data and inserted into the video data;
when the auxiliary motion detection data frame is stored, the motion detection region data is recorded first, and when the corresponding I-frame data is stored, the motion detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
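An index record of this kind (region data plus on-disk offset) could be serialized as below; the 50-byte mask field and 64-bit offset are illustrative assumptions, not sizes taken from the patent:

```python
import struct

# Hypothetical on-disk layout for one I-frame index record: a 50-byte
# field holding the 22x18 region bitmask (396 bits, zero-padded) followed
# by a little-endian 64-bit offset of the I-frame data on the hard disk.
RECORD = struct.Struct("<50sQ")

def pack_index(region_mask: bytes, offset: int) -> bytes:
    """Serialize one index record; short masks are zero-padded to 50 bytes."""
    return RECORD.pack(region_mask.ljust(50, b"\x00"), offset)

def unpack_index(buf: bytes):
    """Inverse of pack_index: returns (region_mask, offset)."""
    region, offset = RECORD.unpack(buf)
    return region, offset
```

A fixed-size record like this lets the retrieval step load the whole index list sequentially without touching the (much larger) video frames themselves.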
In an implementation, determining whether an intersection exists means performing a bitwise AND on each pair of corresponding bits in the two pieces of motion detection region data; if at least one resulting bit is true, the two pieces of motion detection region data are determined to intersect.
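If each region is stored as a byte string (396 bits rounds up to 50 bytes), the test just described reduces to ANDing corresponding bytes and checking for any non-zero result. A minimal sketch:

```python
def regions_intersect(a: bytes, b: bytes) -> bool:
    """True when the two motion detection bitmasks share at least one set bit."""
    return any(x & y for x, y in zip(a, b))

# Example masks overlapping only in the second byte.
m1 = bytes([0b00000000, 0b00001100])
m2 = bytes([0b11110000, 0b00000100])
```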
In an implementation, the method may further comprise:
de-duplicating the first I-frame index data and/or the second I-frame index data, and using the result as the retrieved motion detection data.
An embodiment of the invention further provides a computer-readable storage medium storing a computer program for executing the above motion detection data retrieval method.
An embodiment of the invention further provides a motion detection data retrieval device.
Fig. 5 is a schematic structural diagram of the motion detection data retrieval device, which may include:
a retrieval parameter determining module 501, configured to determine, in motion detection region data, a first motion detection region to be retrieved and a preset comparison count N of I-frame index data;
a first I-frame index data obtaining module 502, configured to obtain, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion detection region data, wherein whether an intersection exists is determined from the first motion detection region and a second motion detection region in the I-frame index data;
a second I-frame index data obtaining module 503, configured to, after a third motion detection region is determined for each first I-frame index data, obtain second I-frame index data from the N I-frame index data before and/or after the first I-frame index data, wherein the third motion detection region is generated by intersecting the Nth I-frame index data with the (N-1)th I-frame index data; the motion detection region data is taken as the 0th (N = 0) I-frame index data and the first I-frame index data as the 1st (N = 1); the second I-frame index data is the Nth I-frame index data that intersects the (N-1)th I-frame index data; and whether an intersection exists is determined from the third motion detection region and the motion detection region in the second I-frame index data;
a motion detection data determining module 504, configured to use both the first I-frame index data and the second I-frame index data, or only the second I-frame index data, as the retrieved motion detection data.
For convenience of description, each part of the above-described apparatus is separately described as being functionally divided into various modules or units. Of course, the functionality of the various modules or units may be implemented in the same one or more pieces of software or hardware in practicing the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (12)

1. A method for retrieving motion detection data, the method comprising:
determining, in motion detection region data, a first motion detection region to be retrieved and a preset comparison count N of I-frame index data;
acquiring, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion detection region data, wherein whether an intersection exists is determined from the first motion detection region and a second motion detection region in the I-frame index data;
after a third motion detection region is determined for each first I-frame index data, obtaining second I-frame index data from the N I-frame index data before and/or after the first I-frame index data, wherein the third motion detection region is generated by intersecting the Nth I-frame index data with the (N-1)th I-frame index data; the motion detection region data is taken as the 0th (N = 0) I-frame index data and the first I-frame index data as the 1st (N = 1); the second I-frame index data is the Nth I-frame index data that intersects the (N-1)th I-frame index data; and whether an intersection exists is determined from the third motion detection region and the motion detection region in the second I-frame index data;
and using both the first I-frame index data and the second I-frame index data, or only the second I-frame index data, as the retrieved motion detection data.
2. The method according to claim 1, wherein the motion detection region data is region data obtained by dividing a video frame with a 22 × 18 grid;
and the first motion detection region is the region of the motion detection region data in which motion was detected.
3. The method according to claim 1, wherein the I-frame index data comprises motion detection region data and an offset position, wherein:
the motion detection region data in the I-frame index data is the 22 × 18 region data generated, relative to the current frame, when the encoder detects a moving object; an auxiliary motion detection data frame is generated from this data and inserted into the video data;
and when the auxiliary motion detection data frame is stored, the motion detection region data is recorded first, and when the corresponding I-frame data is stored, the motion detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
4. The method according to claim 1, wherein determining whether an intersection exists comprises performing a bitwise AND on each pair of corresponding bits in the two pieces of motion detection region data; if at least one resulting bit is true, the two pieces of motion detection region data are determined to intersect.
5. The method according to any one of claims 1 to 4, further comprising:
de-duplicating the first I-frame index data and/or the second I-frame index data, and using the result as the retrieved motion detection data.
6. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements a method comprising:
determining, in motion detection region data, a first motion detection region to be retrieved and a preset comparison count N of I-frame index data;
acquiring, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion detection region data, wherein whether an intersection exists is determined from the first motion detection region and a second motion detection region in the I-frame index data;
after a third motion detection region is determined for each first I-frame index data, obtaining second I-frame index data from the N I-frame index data before and/or after the first I-frame index data, wherein the third motion detection region is generated by intersecting the Nth I-frame index data with the (N-1)th I-frame index data; the motion detection region data is taken as the 0th (N = 0) I-frame index data and the first I-frame index data as the 1st (N = 1); the second I-frame index data is the Nth I-frame index data that intersects the (N-1)th I-frame index data; and whether an intersection exists is determined from the third motion detection region and the motion detection region in the second I-frame index data;
and using both the first I-frame index data and the second I-frame index data, or only the second I-frame index data, as the retrieved motion detection data.
7. The computer device according to claim 6, wherein the motion detection region data is region data obtained by dividing a video frame with a 22 × 18 grid;
and the first motion detection region is the region of the motion detection region data in which motion was detected.
8. The computer device according to claim 6, wherein the I-frame index data comprises motion detection region data and an offset position, wherein:
the motion detection region data in the I-frame index data is the 22 × 18 region data generated, relative to the current frame, when the encoder detects a moving object; an auxiliary motion detection data frame is generated from this data and inserted into the video data;
and when the auxiliary motion detection data frame is stored, the motion detection region data is recorded first, and when the corresponding I-frame data is stored, the motion detection region data and the offset position of the I-frame data on the hard disk together form the I-frame index data.
9. The computer device according to claim 6, wherein determining whether an intersection exists comprises performing a bitwise AND on each pair of corresponding bits in the two pieces of motion detection region data; if at least one resulting bit is true, the two pieces of motion detection region data are determined to intersect.
10. The computer device according to any one of claims 6 to 9, wherein the method further comprises:
de-duplicating the first I-frame index data and/or the second I-frame index data, and using the result as the retrieved motion detection data.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 5.
12. A motion detection data retrieval device, comprising:
a retrieval parameter determining module, configured to determine, in motion detection region data, a first motion detection region to be retrieved and a preset comparison count N of I-frame index data;
a first I-frame index data obtaining module, configured to obtain, from the I-frame index data to be retrieved, first I-frame index data that intersects the motion detection region data, wherein whether an intersection exists is determined from the first motion detection region and a second motion detection region in the I-frame index data;
a second I-frame index data obtaining module, configured to, after a third motion detection region is determined for each first I-frame index data, obtain second I-frame index data from the N I-frame index data before and/or after the first I-frame index data, wherein the third motion detection region is generated by intersecting the Nth I-frame index data with the (N-1)th I-frame index data; the motion detection region data is taken as the 0th (N = 0) I-frame index data and the first I-frame index data as the 1st (N = 1); the second I-frame index data is the Nth I-frame index data that intersects the (N-1)th I-frame index data; and whether an intersection exists is determined from the third motion detection region and the motion detection region in the second I-frame index data;
and a motion detection data determining module, configured to use both the first I-frame index data and the second I-frame index data, or only the second I-frame index data, as the retrieved motion detection data.
CN201811643320.3A 2018-12-29 2018-12-29 Dynamic inspection data retrieval method, device and apparatus Pending CN111382313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811643320.3A CN111382313A (en) 2018-12-29 2018-12-29 Dynamic inspection data retrieval method, device and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811643320.3A CN111382313A (en) 2018-12-29 2018-12-29 Dynamic inspection data retrieval method, device and apparatus

Publications (1)

Publication Number Publication Date
CN111382313A true CN111382313A (en) 2020-07-07

Family

ID=71216604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811643320.3A Pending CN111382313A (en) 2018-12-29 2018-12-29 Dynamic inspection data retrieval method, device and apparatus

Country Status (1)

Country Link
CN (1) CN111382313A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625798A (en) * 2020-12-14 2022-06-14 金篆信科有限责任公司 Data retrieval method and device, electronic equipment and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005354624A (en) * 2004-06-14 2005-12-22 Canon Inc Moving image processor, moving image processing method, and computer program
CN101631237A (en) * 2009-08-05 2010-01-20 青岛海信网络科技股份有限公司 Video monitoring data storing and managing system
CN102129474A (en) * 2011-04-20 2011-07-20 杭州华三通信技术有限公司 Method, device and system for retrieving video data
CN103049459A (en) * 2011-10-17 2013-04-17 天津市亚安科技股份有限公司 Feature recognition based quick video retrieval method
CN104281651A (en) * 2014-09-16 2015-01-14 福建星网锐捷安防科技有限公司 Method and system for searching large volume of video data
CN104284162A (en) * 2014-10-29 2015-01-14 广州中国科学院软件应用技术研究所 Video retrieval method and system
CN104683760A (en) * 2015-01-28 2015-06-03 安科智慧城市技术(中国)有限公司 Video processing method and system
KR20160050721A (en) * 2014-10-30 2016-05-11 에스케이텔레콤 주식회사 Method for searching image based on image recognition and applying image search apparatus thereof
JP2016115082A (en) * 2014-12-12 2016-06-23 株式会社日立システムズ Image search system and image search method
US20170013230A1 (en) * 2014-02-14 2017-01-12 Nec Corporation Video processing system
CN106557760A (en) * 2016-11-28 2017-04-05 江苏鸿信系统集成有限公司 Monitoring system is filtered in a kind of image frame retrieval based on video identification technology
CN107835381A (en) * 2017-10-17 2018-03-23 浙江大华技术股份有限公司 A kind of generation is dynamic to call the roll of the contestants in athletic events as the method and device of preview graph

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Yungen; Liu Jingang: "Motion data retrieval based on human-pose encoding", Journal of Computer-Aided Design & Computer Graphics, no. 04, 15 April 2011 (2011-04-15) *
Zheng Liming; Yi Ping: "Research on a multimedia database system based on video time-segment retrieval", Computer Systems & Applications, no. 05, 15 May 2009 (2009-05-15) *
Huang Zhiyi, Zhou Ning: "Research on key techniques of content-based video retrieval", Modern Information, no. 10, 25 October 2005 (2005-10-25) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625798A (en) * 2020-12-14 2022-06-14 金篆信科有限责任公司 Data retrieval method and device, electronic equipment and storage medium
CN114625798B (en) * 2020-12-14 2023-03-24 金篆信科有限责任公司 Data retrieval method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109326310B (en) Automatic editing method and device and electronic equipment
CN107707931B (en) Method and device for generating interpretation data according to video data, method and device for synthesizing data and electronic equipment
CN102290082B (en) Method and device for processing brilliant video replay clip
US10384125B2 (en) Information processing program and information processing method
US20160199742A1 (en) Automatic generation of a game replay video
US11438510B2 (en) System and method for editing video contents automatically technical field
US9313444B2 (en) Relational display of images
US11042991B2 (en) Determining multiple camera positions from multiple videos
US9131227B2 (en) Computing device with video analyzing function and video analyzing method
CN111095939B (en) Identifying previously streamed portions of media items to avoid repeated playback
US20210077911A1 (en) Method of determining exciting moments in a game video and method of playing a game video
US20230040548A1 (en) Panorama video editing method,apparatus,device and storage medium
CN104618656A (en) Information processing method and electronic equipment
JPH0993588A (en) Moving image processing method
EP2966591A1 (en) Method and apparatus for identifying salient events by analyzing salient video segments identified by sensor information
WO2021254223A1 (en) Video processing method, apparatus and device, and storage medium
RU2609071C2 (en) Video navigation through object location
CN111741325A (en) Video playing method and device, electronic equipment and computer readable storage medium
Husa et al. HOST-ATS: automatic thumbnail selection with dashboard-controlled ML pipeline and dynamic user survey
US10924637B2 (en) Playback method, playback device and computer-readable storage medium
US20210144358A1 (en) Information-processing apparatus, method of processing information, and program
CN111382313A (en) Dynamic inspection data retrieval method, device and apparatus
WO2021017496A1 (en) Directing method and apparatus and computer-readable storage medium
CN104182959B (en) target searching method and device
CN106412505A (en) Video display method and apparatus in P2P mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination