CN113971229A - Frame comparison method analysis method and device - Google Patents

Info

Publication number
CN113971229A
CN113971229A
Authority
CN
China
Prior art keywords
alarm
picture
value
data
characteristic value
Prior art date
Legal status
Pending
Application number
CN202111221825.2A
Other languages
Chinese (zh)
Inventor
袁进泽
严军
胡靖
饶龙强
周武毅
赵丁漫
Current Assignee
Chengdu Zhiyuanhui Information Technology Co Ltd
Original Assignee
Chengdu Zhiyuanhui Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Zhiyuanhui Information Technology Co Ltd filed Critical Chengdu Zhiyuanhui Information Technology Co Ltd
Priority to CN202111221825.2A
Publication of CN113971229A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata automatically derived from the content

Abstract

The invention discloses a frame comparison analysis method and device, comprising the following steps: S1, acquiring a to-be-processed alarm picture in an alarm message queue, and analyzing the alarm picture to acquire the time T of the alarm picture; S2, obtaining a target time period (T-X, T + X) according to the time T of the alarm picture and an optional time period X; S3, calling the short video corresponding to the target time period (T-X, T + X) from a third-party service platform, and converting the short video into a picture sequence; S4, acquiring a characteristic value of each picture in the picture sequence, and generating a picture sequence characteristic value set; S5, obtaining the characteristic value of the alarm picture; S6, judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set; and S7, if such a characteristic value exists, pushing the short video to a processing platform. The invention provides a screening technique that can quickly and accurately identify identical and similar pictures, and has important research value and application prospects.

Description

Frame comparison method analysis method and device
Technical Field
The invention relates to the technical field of data processing, and in particular to a frame comparison analysis method and device.
Background
Alarm data is one of the most important elements of rail transit data and faithfully records the details of the alarm information. However, the alarm picture data obtained from the massive data of a rail transit network usually contains only static, isolated alarm content and cannot dynamically show how the alarm occurred, which reduces the overall processing efficiency. Therefore, the alarm picture data obtained from the network needs to be matched so as to find the short video that is identical or highly similar to it, thereby completing the alarm data processing task.
For screening identical or similar pictures in image data, the following methods are commonly used at present. Manual inspection is the most primitive picture screening method; although its precision is high, it requires a large amount of labor, is slow to detect, and applies inconsistent standards, so it performs poorly on massive internet picture data. MD5 matching searches for identical pictures by comparing their MD5 values and can accurately and quickly find identical data among massive network pictures. Its disadvantage is that it can only screen pictures with exactly the same MD5 value from the short videos in the network: as soon as two pictures differ even slightly, MD5-based screening fails, so MD5 cannot accomplish the screening of similar pictures.
The above matching and screening approaches cannot simultaneously meet the requirements of high speed, high precision and high recall when identical and similar pictures are screened from short videos. Therefore, a screening technique that can quickly and accurately identify identical and similar pictures has important research value and application prospects.
Disclosure of Invention
The invention aims to provide a frame comparison analysis method and a frame comparison analysis device, which are used to solve the problem that existing short-video screening cannot be both fast and effective.
A frame comparison analysis method specifically comprises the following steps:
s1, acquiring a to-be-processed alarm picture in the alarm message queue, and analyzing the alarm picture to acquire the moment T of the alarm picture;
s2, obtaining a target time period (T-X, T + X) according to the time T of the alarm picture and the optional time period X;
s3, calling a short video corresponding to a target time period (T-X, T + X) from a third-party service platform, and converting the short video into a picture sequence;
s4, acquiring a characteristic value of each picture in the picture sequence, and generating a picture sequence characteristic value set;
s5, obtaining a characteristic value of the alarm picture;
s6, judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set;
and S7, if such a characteristic value exists, pushing the short video to a processing platform.
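By way of illustration only, the following sketch shows one way step S3 could convert a retrieved short video into a picture sequence. It assumes the short video has already been downloaded from the third-party service platform to a local file, uses OpenCV purely as an example decoder, and the frame-sampling interval is a hypothetical parameter that the text above does not specify (Python):

    # Minimal sketch of step S3: decode a short video into a picture sequence.
    # Assumes the video file is already available locally; OpenCV is only one
    # possible decoding library.
    import cv2

    def video_to_picture_sequence(video_path, sample_every_n=5):
        # sample_every_n keeps every n-th frame to bound the size of the
        # picture sequence; this sampling rate is an assumption.
        capture = cv2.VideoCapture(video_path)
        frames = []
        index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % sample_every_n == 0:
                frames.append(frame)
            index += 1
        capture.release()
        return frames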
Further, the characteristic value of the picture is obtained by the following steps:
s401, reducing the size, and reducing the picture to a preset size to obtain a picture with m multiplied by n pixels;
s402, simplifying colors, and converting the picture with m multiplied by n pixels into a gray picture with m multiplied by n pixels;
s403, calculating an average value, namely the gray-level average of the m × n pixels of the gray picture;
s404, comparing the gray levels of the pixels, traversing m multiplied by n pixels of the gray level picture, and comparing the gray level value of each pixel with the average value to generate a binary matrix;
s405, generating an m multiplied by n bit integer value from the binary matrix according to a preset rule, and performing hash operation on the integer value to obtain a characteristic value of the picture.
Further, the step S404 specifically includes the following steps:
comparing the gray value of each pixel in the gray picture with the average gray value respectively; if the gray value of one pixel is larger than the average gray value, setting the value of the corresponding pixel in the binary image to be 1; otherwise, setting the value of the corresponding pixel in the binary image to 0.
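As a minimal sketch (not the only possible implementation), steps S401 to S405 can be realized as follows, assuming Pillow for image handling; the choice of m = n = 8 and the rendering of the final bit pattern as a hexadecimal string (standing in for the unspecified hash operation, as in embodiment 1 below) are illustrative assumptions (Python):

    # Minimal sketch of steps S401-S405 (average-hash style characteristic value).
    from PIL import Image

    def picture_characteristic_value(path, m=8, n=8):
        # S401/S402: reduce to m x n pixels and convert to gray scale.
        gray = Image.open(path).convert("L").resize((m, n))
        pixels = list(gray.getdata())
        # S403: average gray value of the m x n pixels.
        average = sum(pixels) / float(len(pixels))
        # S404: binary matrix, flattened left to right, top to bottom
        # (1 if the pixel is greater than the average, 0 otherwise).
        bits = "".join("1" if p > average else "0" for p in pixels)
        # S405: read the bits as an m x n bit integer and derive the
        # characteristic value; the bit pattern is rendered in hexadecimal here.
        integer_value = int(bits, 2)
        return format(integer_value, "0{}x".format((m * n + 3) // 4))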
Further, the step S6 specifically includes the following steps:
s601, extracting characteristic values which are not traversed from the picture sequence characteristic value set;
s602, obtaining the similarity between the characteristic value of the alarm picture and the characteristic value which is not traversed;
s603, comparing the similarity with a first preset threshold;
s604, if the similarity is larger than or equal to a first preset threshold, judging that the characteristic value is a corresponding characteristic value, and executing a step S7;
s605, if the similarity is smaller than the first preset threshold, judging that the characteristic value is a non-corresponding characteristic value, and judging whether the picture sequence characteristic value set has been fully traversed;
s606, if not, S601 is continuously executed.
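The traversal of steps S601 to S606 can be sketched as follows, assuming the Hamming-distance variant of the similarity described later (first preset threshold 5, where a smaller distance means a higher similarity) and hexadecimal characteristic values of equal bit length; the function and variable names are hypothetical (Python):

    # Minimal sketch of steps S601-S606 over the picture sequence
    # characteristic value set.
    def find_corresponding_feature(alarm_feature_hex, sequence_features, threshold=5):
        for candidate_hex in sequence_features:  # S601: next un-traversed value
            # S602: similarity as Hamming distance between the bit patterns.
            distance = bin(int(alarm_feature_hex, 16) ^ int(candidate_hex, 16)).count("1")
            if distance <= threshold:            # S603/S604: corresponding value found
                return True                      # the caller then performs step S7
            # S605/S606: non-corresponding value, keep traversing the set
        return False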
Further, the alarm message queue is generated by the following steps:
s101, receiving alarm data sent by a third-party service platform at the current moment, and storing the alarm data into a database according to a time sequence;
s102, analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attribute of the alarm data, and the second layer data is an alarm picture;
s103, calculating a metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
s104, obtaining a metadata value of the alarm data in the database within the optional time period, calculating the similarity between the metadata value at the current time and the metadata value of the alarm data within the optional time period, judging the alarm data at the current time to be new alarm data if the similarity is smaller than a second preset threshold, and turning to the step S105;
and S105, inserting the alarm data at the current moment into an alarm message queue.
Further, the step S103 specifically includes the following steps:
s1031, calculating a hash value of the first layer data;
s1032, calculating a characteristic value of second-layer data;
and S1033, summing the hash value of the first layer of data and the characteristic value of the second layer of data to obtain a metadata value of the alarm data.
Further, the first layer data includes an alarm data type value, an alarm data content value, and a device id value of the alarm data, and step S1031 specifically includes the following steps:
summing the alarm data type value, the alarm data content value and the equipment id value of the alarm data to obtain a summation value of the first layer of data;
and carrying out Hash operation on the summation value to obtain a Hash value of the first layer of data.
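A minimal sketch of steps S1031 to S1033 is given below. It assumes that the alarm data type value, alarm data content value and device id value are already numeric encodings, that the second-layer characteristic value is the hexadecimal string produced by the picture hashing above, and that MD5 stands in for the unspecified hash operation; all of these are illustrative assumptions (Python):

    # Minimal sketch of steps S1031-S1033: metadata value of one alarm record.
    import hashlib

    def first_layer_hash(alarm_type_value, content_value, device_id_value):
        # S1031: sum the three first-layer values, then hash the sum.
        summation = alarm_type_value + content_value + device_id_value
        digest = hashlib.md5(str(summation).encode("utf-8")).hexdigest()
        return int(digest, 16)

    def metadata_value(first_layer_hash_value, second_layer_feature_hex):
        # S1033: sum the first-layer hash and the second-layer (picture)
        # characteristic value to obtain the metadata value of the alarm data.
        return first_layer_hash_value + int(second_layer_feature_hex, 16)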
Further, analyzing the alarm picture to obtain the time T of the alarm picture specifically includes:
carrying out noise reduction processing on the alarm picture;
and scanning the alarm picture after the noise reduction treatment by an optical character recognition technology to obtain time information in the alarm picture, wherein the time information is the shooting time T of the alarm picture.
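The time-extraction step can be sketched as follows, assuming the alarm picture overlays its capture time as on-screen text (as surveillance frames typically do), that Tesseract is installed, and that the timestamp follows a "YYYY-MM-DD hh:mm:ss" layout; OpenCV denoising and pytesseract are illustrative choices, since the text above only requires noise reduction followed by optical character recognition (Python):

    # Minimal sketch of the noise-reduction + OCR extraction of the time T.
    import re
    import cv2
    import pytesseract

    def extract_capture_time(picture_path):
        image = cv2.imread(picture_path)
        # Noise reduction before OCR.
        denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
        text = pytesseract.image_to_string(denoised)
        # Look for a timestamp such as "2021-10-20 08:15:32" in the OCR output
        # (the timestamp format is an assumption).
        match = re.search(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", text)
        return match.group(0) if match else None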
A frame comparison analysis apparatus, comprising:
the alarm picture acquisition module is used for acquiring a to-be-processed alarm picture in the alarm message queue and analyzing the alarm picture to acquire the time T of the alarm picture;
the target time period module is used for obtaining a target time period (T-X, T + X) according to the moment T of the alarm picture and the optional time period X;
the picture sequence module is used for calling the short video corresponding to the target time period (T-X, T + X) from a third-party service platform and converting the short video into a picture sequence;
the characteristic value collection module is used for acquiring the characteristic value of each picture in the picture sequence and generating a picture sequence characteristic value set;
the characteristic value module is used for acquiring the characteristic value of the alarm picture;
the characteristic value judging module is used for judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set or not;
and the pushing module is used for pushing the short video to a processing platform.
Further, an alarm message queue module is arranged before the alarm picture acquisition module, and the alarm message queue module specifically includes:
an input module: used for receiving the alarm data sent by the third-party service platform at the current moment and storing the alarm data into a database in time order;
an analysis module: used for analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attributes of the alarm data, and the second layer data is an alarm picture;
a metadata module: used for calculating the metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
a similarity determination module: obtaining a metadata value of the alarm data in the database within the optional time period, calculating the similarity between the metadata value at the current time and the metadata value of the alarm data within the optional time period, and judging the alarm data at the current time as new alarm data if the similarity is smaller than a second preset threshold;
a queue module: for inserting the alarm data of the current moment into the alarm message queue.
Further, the step S605 includes: if the similarity is smaller than the first preset threshold, judging that the characteristic value is a non-corresponding characteristic value, deleting that characteristic value, and judging whether the picture sequence characteristic value set has been fully traversed.
Further, the preset rule is, for example, left to right, top to bottom, or big-endian order.
Further, the similarity in step S602 is: and calculating the ratio of the characteristic value of the alarm picture to the characteristic value of the picture which is not traversed, wherein the first preset threshold is 85%.
Further, the similarity in step S602 is: and calculating the Hamming distance between the characteristic value of the alarm picture and the characteristic value of the picture which is not traversed, wherein the first preset threshold is 5.
Further, the similarity in step S104 is: and calculating the ratio of the metadata value at the current moment to the metadata value in the database within the optional time period, wherein the second preset threshold is 85%.
Further, the similarity in step S104 is: and calculating the Hamming distance between the metadata value at the current moment and the metadata value in the database within the optional time period, wherein the second preset threshold is 5.
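For the Hamming-distance variants above, the comparison can be sketched as follows, assuming both values are hexadecimal strings of the same bit length. Note that with a distance measure, "similar" means a value at or below the preset threshold of 5, whereas the ratio variants treat values at or above 85% as similar (Python):

    # Minimal sketch of the Hamming-distance similarity check.
    def hamming_distance(value_a_hex, value_b_hex):
        # XOR the two bit patterns and count the differing bits.
        return bin(int(value_a_hex, 16) ^ int(value_b_hex, 16)).count("1")

    def is_similar(value_a_hex, value_b_hex, threshold=5):
        return hamming_distance(value_a_hex, value_b_hex) <= threshold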
In the prior art, the short video corresponding to an alarm picture is usually found through manual inspection or MD5 matching, and such processing cannot handle the massive, multi-camera short videos of a rail transit network quickly and effectively.
According to the method and the device of the present application, the time T is obtained by analyzing the alarm picture, and only the short video within the target time period corresponding to T is retrieved, which greatly reduces the number of short videos that need to be matched and improves the matching efficiency. Similarity screening is then performed between the characteristic value of the alarm picture and the characteristic values of the short video picture sequence, so that a picture identical or similar to the alarm picture is found. This solves the problem that MD5 picture matching can only screen identical pictures and cannot screen similar ones, and makes the method suitable for processing rail transit alarm data.
The invention has the following beneficial effects:
1. The time T is obtained by analyzing the alarm picture, and only the short video within the target time period corresponding to T is retrieved, which greatly reduces the number of short videos that need to be matched and improves the matching efficiency;
2. Similarity screening is performed between the characteristic value of the alarm picture and the characteristic values of the short video picture sequence so as to find a picture identical or similar to the alarm picture, which solves the problem that MD5 picture matching can only screen identical pictures and cannot screen similar ones, and makes the method suitable for processing rail transit alarm data.
Drawings
FIG. 1 is a schematic flow chart of a frame comparison analysis method according to the present invention;
FIG. 2 is a schematic structural diagram of a frame comparison analysis apparatus according to the present invention;
FIG. 3 is a schematic structural diagram of a frame comparison analysis apparatus based on an alarm message queue according to the present invention;
FIG. 4 is a schematic structural diagram of an alarm message queue module according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "longitudinal", "lateral", "horizontal", "inner", "outer", "front", "rear", "top", "bottom", and the like indicate orientations or positional relationships that are based on the orientations or positional relationships shown in the drawings, or that are conventionally placed when the product of the present invention is used, and are used only for convenience in describing and simplifying the description, but do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus should not be construed as limiting the invention.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "open," "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Embodiment 1
The present embodiment aims to provide a frame comparison analysis method, which specifically includes the following steps:
s1, acquiring a to-be-processed alarm picture in the alarm message queue, and analyzing the alarm picture to acquire the moment T of the alarm picture;
s2, obtaining a target time period (T-X, T + X) according to the time T of the alarm picture and the optional time period X;
s3, calling a short video corresponding to a target time period (T-X, T + X) from a third-party service platform, and converting the short video into a picture sequence;
s4, acquiring a characteristic value of each picture in the picture sequence, and generating a picture sequence characteristic value set;
s5, obtaining a characteristic value of the alarm picture;
s6, judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set;
and S7, if such a characteristic value exists, pushing the short video to a processing platform.
Further, the characteristic value of the picture is obtained by the following steps:
s401, reducing the size. The picture is reduced to a size of 12 x 10, 120 pixels in total. The effect of this step is to remove the details of the picture and keep only basic information such as structure and light/dark distribution, discarding the picture differences caused by different sizes and proportions.
s402, simplifying colors. The reduced picture is converted to 64-level gray; that is, all pixels take at most 64 gray values in total.
And S403, calculating an average value. The gray level average of all 120 pixels is calculated.
And S404, comparing the gray scales of the pixels. The gray scale of each pixel is compared to the average. Greater than or equal to the average value, noted 1; less than the average, noted as 0.
S405, calculating a hash value. The comparison results of the previous step are combined together to form a 120-bit integer, which is the feature value of the picture. The order of the combination is not important as long as it is guaranteed that all pictures are in the same order.
The 120-bit sequence is then split into groups of 4 bits, each group is converted into a hexadecimal digit, and the digits are concatenated in order to form the characteristic value.
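For illustration, a minimal sketch of this 12 x 10 average hash, assuming Pillow for image handling, is (Python):

    # Minimal sketch of the 12 x 10 average hash of embodiment 1; it returns
    # the 120-bit pattern as a 30-digit hexadecimal string.
    from PIL import Image

    def embodiment1_characteristic_value(path):
        # s401/s402: shrink to 12 x 10 = 120 pixels and convert to gray scale.
        gray = Image.open(path).convert("L").resize((12, 10))
        pixels = list(gray.getdata())
        # s403: gray-level average of all 120 pixels.
        average = sum(pixels) / 120.0
        # s404: 1 if the pixel is greater than or equal to the average, else 0.
        bits = "".join("1" if p >= average else "0" for p in pixels)
        # s405: group the 120 bits into 4-bit nibbles, convert each nibble to a
        # hexadecimal digit, and concatenate the digits in order.
        return "".join(format(int(bits[i:i + 4], 2), "x") for i in range(0, 120, 4))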
Alternatively, the characteristic value of the picture may be obtained through the following steps:
s401, reducing the size. Reducing the size is the fastest way to remove high frequencies and detail while keeping the basic structure and light/dark distribution. The picture is reduced to a size of 32 x 32, discarding the picture differences caused by different sizes and proportions.
s402, simplifying colors. The reduced picture is converted to 64-level gray; that is, all pixels take at most 64 gray values in total.
s403, calculating the DCT (discrete cosine transform).
The DCT decomposes the picture into frequency components; although JPEG uses an 8 x 8 DCT, a 32 x 32 DCT is used here.
s404, reducing the DCT.
Although the result of the DCT is a 32 x 32 matrix, only the 8 x 8 block in the upper-left corner is retained, because this block contains the lowest frequencies of the picture.
s405, calculating the average value.
The average of all 64 retained DCT values is calculated.
s406, further reducing the DCT.
This is the most important step. Based on the 8 x 8 DCT matrix, a 64-bit hash of 0s and 1s is formed: a bit is set to 1 if the corresponding DCT value is greater than or equal to the average, and to 0 if it is smaller than the average. The result does not describe the actual magnitude of the low frequencies, only how each frequency compares with the average. As long as the overall structure of the picture remains unchanged, the hash value remains unchanged, so the result is not affected by gamma correction or color-histogram adjustment.
s407, calculating the hash value.
The 64 bits of the previous step are combined into a 64-bit integer, which is the characteristic value of the picture. The order of combination is not important as long as all pictures use the same order (for example, left to right, top to bottom, big-endian).
After the fingerprint is obtained, different pictures can be compared by counting how many of the 64 bits differ. In theory this is equivalent to computing the Hamming distance. If no more than 5 bits differ, the two pictures are very similar; if more than 10 bits differ, they are two different pictures.
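A minimal sketch of this DCT-based variant, assuming Pillow, NumPy and SciPy as illustrative libraries, is (Python):

    # Minimal sketch of the DCT-based (pHash-style) characteristic value.
    import numpy as np
    from PIL import Image
    from scipy.fftpack import dct

    def dct_characteristic_value(path):
        # s401/s402: shrink to 32 x 32 and convert to gray scale.
        gray = np.asarray(Image.open(path).convert("L").resize((32, 32)), dtype=float)
        # s403: 32 x 32 two-dimensional DCT.
        coefficients = dct(dct(gray, axis=0, norm="ortho"), axis=1, norm="ortho")
        # s404: keep only the upper-left 8 x 8 block (the lowest frequencies).
        low_freq = coefficients[:8, :8]
        # s405/s406: compare each of the 64 retained values with their average.
        average = low_freq.mean()
        bits = (low_freq >= average).flatten()
        # s407: pack the 64 bits into a 64-bit integer (left to right, top to
        # bottom); any fixed order works as long as it is used consistently.
        value = 0
        for bit in bits:
            value = (value << 1) | int(bit)
        return value

Two such values can then be compared with the Hamming distance as described above.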
Further, the step S6 specifically includes the following steps:
s601, extracting characteristic values which are not traversed from the picture sequence characteristic value set;
s602, obtaining the similarity between the characteristic value of the alarm picture and the characteristic value which is not traversed;
s603, comparing the similarity with a first preset threshold;
s604, if the similarity is larger than or equal to a first preset threshold, judging that the characteristic value is a corresponding characteristic value, and executing a step S7;
s605, if the similarity is smaller than the first preset threshold, judging that the characteristic value is a non-corresponding characteristic value, and judging whether the picture sequence characteristic value set has been fully traversed;
s606, if not, S601 is continuously executed.
Embodiment 2
This embodiment aims to provide a frame comparison analysis method based on an alarm message queue, which specifically includes the following steps:
s0, generating an alarm message queue, which specifically comprises:
s001, receiving alarm data sent by a third-party service platform at the current moment, and storing the alarm data into a database according to a time sequence;
s002, analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attribute of the alarm data, and the second layer data is an alarm picture;
s003, calculating a metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
calculating a hash value of the first layer data;
summing the alarm data type value, the alarm data content value and the device id value of the alarm data to obtain a sum value of the first layer of data;
and carrying out Hash operation on the summation value to obtain a Hash value of the first layer of data.
Calculating a characteristic value of the second layer data;
The size is reduced. The picture is reduced to a size of 12 x 10, 120 pixels in total. The effect of this step is to remove the details of the picture and keep only basic information such as structure and light/dark distribution, discarding the picture differences caused by different sizes and proportions.
The colors are simplified. The reduced picture is converted to 64-level gray; that is, all pixels take at most 64 gray values in total.
The average value is calculated. The gray level average of all 120 pixels is calculated.
The gray levels of the pixels are compared. The gray scale of each pixel is compared to the average. Greater than or equal to the average value, noted 1; less than the average, noted as 0.
A hash value is calculated. The comparison results of the previous step are combined together to form a 120-bit integer, which is the feature value of the picture. The order of the combination is not important as long as it is guaranteed that all pictures are in the same order.
The 120-bit sequence is then split into groups of 4 bits, each group is converted into a hexadecimal digit, and the digits are concatenated in order to form the characteristic value.
And summing the hash value of the first layer of data and the characteristic value of the second layer of data to obtain a metadata value of the alarm data.
S004, obtaining the metadata value of the alarm data in the database in the optional time period, calculating the similarity between the metadata value at the current time and the metadata value of the alarm data in the optional time period, judging that the alarm data at the current time is new alarm data if the similarity is smaller than a second preset threshold, and turning to the step S005;
The similarity is the ratio of the metadata value at the current moment to the metadata value of the alarm data within the optional time period, and the second preset threshold is 85%.
S005, inserting the alarm data at the current moment into an alarm message queue.
S1, acquiring a to-be-processed alarm picture in the alarm message queue, and analyzing the alarm picture to acquire the moment T of the alarm picture;
s2, obtaining a target time period (T-X, T + X) according to the time T of the alarm picture and the optional time period X;
s3, calling a short video corresponding to a target time period (T-X, T + X) from a third-party service platform, and converting the short video into a picture sequence;
s4, acquiring a characteristic value of each picture in the picture sequence, and generating a picture sequence characteristic value set;
s5, obtaining a characteristic value of the alarm picture;
s6, judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set;
and S7, if such a characteristic value exists, pushing the short video to a processing platform.
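The new-alarm check of steps S004 and S005 above can be sketched as follows, using the ratio-based similarity with the 85% second preset threshold. Taking the ratio as the smaller metadata value divided by the larger one, so that it always lies in (0, 1], is a normalising assumption, since the text only speaks of the ratio of the two values (Python):

    # Minimal sketch of the duplicate-alarm check of steps S004/S005.
    def is_new_alarm(current_metadata, recent_metadata_values, threshold=0.85):
        for previous in recent_metadata_values:
            denominator = max(current_metadata, previous) or 1
            ratio = min(current_metadata, previous) / float(denominator)
            if ratio >= threshold:
                # An alarm within the optional time period is too similar, so the
                # current alarm data is treated as a duplicate and not enqueued.
                return False
        # No similar metadata value found: insert the alarm data into the queue.
        return True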
Further, the characteristic value of the picture is obtained by the following steps:
s401, reducing the size. The picture is reduced to a size of 12 x 10, 120 pixels in total. The effect of this step is to remove the details of the picture and keep only basic information such as structure and light/dark distribution, discarding the picture differences caused by different sizes and proportions.
s402, simplifying colors. The reduced picture is converted to 64-level gray; that is, all pixels take at most 64 gray values in total.
And S403, calculating an average value. The gray level average of all 120 pixels is calculated.
And S404, comparing the gray scales of the pixels. The gray scale of each pixel is compared to the average. Greater than or equal to the average value, noted 1; less than the average, noted as 0.
S405, calculating a hash value. The comparison results of the previous step are combined together to form a 120-bit integer, which is the feature value of the picture. The order of the combination is not important as long as it is guaranteed that all pictures are in the same order.
The 120-bit sequence is then split into groups of 4 bits, each group is converted into a hexadecimal digit, and the digits are concatenated in order to form the characteristic value.
Further, the step S6 specifically includes the following steps:
s601, extracting characteristic values which are not traversed from the picture sequence characteristic value set;
s602, obtaining the similarity between the characteristic value of the alarm picture and the characteristic value which is not traversed;
s603, comparing the similarity with a first preset threshold;
s604, if the similarity is larger than or equal to a first preset threshold, judging that the characteristic value is a corresponding characteristic value, and executing a step S7;
s605, if the similarity is smaller than the first preset threshold, judging that the characteristic value is a non-corresponding characteristic value, deleting that characteristic value, and judging whether the picture sequence characteristic value set has been fully traversed;
s606, if not, S601 is continuously executed.
Embodiment 3
An object of the present embodiment is to provide a frame comparison analysis apparatus, which specifically includes:
the alarm picture acquisition module is used for acquiring a to-be-processed alarm picture in the alarm message queue and analyzing the alarm picture to acquire the time T of the alarm picture;
the target time period module is used for obtaining a target time period (T-X, T + X) according to the moment T of the alarm picture and the optional time period X;
the picture sequence module is used for calling the short video corresponding to the target time period (T-X, T + X) from a third-party service platform and converting the short video into a picture sequence;
the characteristic value collection module is used for acquiring the characteristic value of each picture in the picture sequence and generating a picture sequence characteristic value set;
the characteristic value module is used for acquiring the characteristic value of the alarm picture;
the characteristic value judging module is used for judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set or not;
and the pushing module is used for pushing the short video to a processing platform.
Embodiment 4
The present embodiment aims to provide a frame comparison analysis apparatus based on an alarm message queue, which specifically includes:
the alarm picture acquisition module is used for acquiring a to-be-processed alarm picture in the alarm message queue and analyzing the alarm picture to acquire the time T of the alarm picture;
the target time period module is used for obtaining a target time period (T-X, T + X) according to the moment T of the alarm picture and the optional time period X;
the picture sequence module is used for calling the short video corresponding to the target time period (T-X, T + X) from a third-party service platform and converting the short video into a picture sequence;
the characteristic value collection module is used for acquiring the characteristic value of each picture in the picture sequence and generating a picture sequence characteristic value set;
the characteristic value module is used for acquiring the characteristic value of the alarm picture;
the characteristic value judging module is used for judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set or not;
and the pushing module is used for pushing the short video to a processing platform.
Further, an alarm message queue module is arranged before the alarm picture acquisition module, and the alarm message queue module specifically includes:
an input module: used for receiving the alarm data sent by the third-party service platform at the current moment and storing the alarm data into a database in time order;
an analysis module: used for analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attributes of the alarm data, and the second layer data is an alarm picture;
a metadata module: used for calculating the metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
a similarity determination module: obtaining a metadata value of the alarm data in the database within the optional time period, calculating the similarity between the metadata value at the current time and the metadata value of the alarm data within the optional time period, and judging the alarm data at the current time as new alarm data if the similarity is smaller than a second preset threshold;
a queue module: for inserting the alarm data of the current moment into the alarm message queue.
The foregoing is only a preferred embodiment of the present invention, and the present invention is not limited thereto in any way, and any simple modification, equivalent replacement and improvement made to the above embodiment within the spirit and principle of the present invention still fall within the protection scope of the present invention.

Claims (10)

1. A frame comparison analysis method, characterized by comprising the following steps:
s1, acquiring a to-be-processed alarm picture in an alarm message queue, and analyzing the alarm picture to acquire a time T of the alarm picture;
s2, obtaining a target time period (T-X, T + X) according to the time T and the optional time period X of the alarm picture;
s3, calling a short video corresponding to a target time period (T-X, T + X) from a third-party service platform, and converting the short video into a picture sequence;
s4, acquiring a characteristic value of each picture in the picture sequence, and generating a picture sequence characteristic value set;
s5, obtaining a characteristic value of the alarm picture;
s6, judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set;
and S7, if such a characteristic value exists, pushing the short video to a processing platform.
2. The frame comparison analysis method according to claim 1, wherein the characteristic value of the picture is obtained by the following steps:
s401, reducing the size, and reducing the picture to a preset size to obtain a picture with m multiplied by n pixels;
s402, simplifying colors, and converting the picture with m multiplied by n pixels into a gray picture with m multiplied by n pixels;
s403, calculating an average value, namely the gray-level average of the m × n pixels of the gray picture;
s404, comparing the gray levels of the pixels, traversing m multiplied by n pixels of the gray level picture, and comparing the gray level value of each pixel with the average value to generate a binary matrix;
s405, generating an m multiplied by n bit integer value from the binary matrix according to a preset rule, and performing hash operation on the integer value to obtain a characteristic value of the picture.
3. The frame comparison analysis method according to claim 2, wherein the step S404 specifically comprises the following steps:
comparing the gray value of each pixel in the gray picture with the average gray value respectively; if the gray value of one pixel is larger than the average gray value, setting the value of the corresponding pixel in the binary image to be 1; otherwise, setting the value of the corresponding pixel in the binary image to 0.
4. The frame comparison analysis method according to claim 1, wherein the step S6 specifically includes the following steps:
s601, extracting characteristic values which are not traversed from the picture sequence characteristic value set;
s602, obtaining the similarity between the characteristic value of the alarm picture and the characteristic value which is not traversed;
s603, comparing the similarity with a first preset threshold;
s604, if the similarity is larger than or equal to a first preset threshold, judging that the characteristic value is a corresponding characteristic value, and executing a step S7;
s605, if the similarity is smaller than the first preset threshold, judging that the characteristic value is a non-corresponding characteristic value, and judging whether the picture sequence characteristic value set has been fully traversed;
s606, if not, S601 is continuously executed.
5. The frame comparison analysis method according to claim 1, wherein the alarm message queue is generated by the following steps:
s101, receiving alarm data sent by a third-party service platform at the current moment, and storing the alarm data into a database according to a time sequence;
s102, analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attribute of the alarm data, and the second layer data is an alarm picture;
s103, calculating a metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
s104, obtaining the metadata values of the alarm data in the database within the optional time period, calculating the similarity between the metadata value at the current time and the metadata values of the alarm data within the optional time period, judging the alarm data at the current time to be new alarm data if the similarity is smaller than a second preset threshold, and turning to step S105;
and S105, inserting the alarm data at the current moment into an alarm message queue.
6. The frame comparison analysis method according to claim 5, wherein the step S103 specifically comprises the following steps:
s1031, calculating a hash value of the first layer data;
s1032, calculating a characteristic value of second-layer data;
and S1033, summing the hash value of the first layer of data and the characteristic value of the second layer of data to obtain a metadata value of the alarm data.
7. The frame comparison analysis method according to claim 6, wherein the first layer data includes an alarm data type value, an alarm data content value, and a device id value of the alarm data, and the step S1031 specifically includes the following steps:
summing the alarm data type value, the alarm data content value and the equipment id value of the alarm data to obtain a summation value of the first layer of data;
and carrying out Hash operation on the summation value to obtain a Hash value of the first layer of data.
8. The frame comparison analysis method according to claim 1, wherein analyzing the alarm picture to obtain the time T of the alarm picture specifically includes:
carrying out noise reduction processing on the alarm picture;
scanning the alarm picture after the noise reduction processing through an optical character recognition technology to obtain time information in the alarm picture, wherein the time information is the shooting time T of the alarm picture.
9. A frame comparison analysis apparatus, comprising:
the alarm picture acquisition module is used for acquiring a to-be-processed alarm picture in the alarm message queue and analyzing the alarm picture to acquire the time T of the alarm picture;
the target time period module is used for obtaining a target time period (T-X, T + X) according to the time T of the alarm picture and the optional time period X;
the picture sequence module is used for calling the short video corresponding to the target time period (T-X, T + X) from a third-party service platform and converting the short video into a picture sequence;
the characteristic value collection module is used for acquiring the characteristic value of each picture in the picture sequence and generating a picture sequence characteristic value set;
the characteristic value module is used for acquiring the characteristic value of the alarm picture;
the characteristic value judging module is used for judging whether a characteristic value corresponding to the characteristic value of the alarm picture exists in the picture sequence characteristic value set or not;
and the pushing module is used for pushing the short video to a processing platform.
10. The frame comparison analysis apparatus according to claim 9, wherein an alarm message queue module is disposed before the alarm picture acquisition module, and the alarm message queue module specifically includes:
an input module: used for receiving the alarm data sent by the third-party service platform at the current moment and storing the alarm data into a database in time order;
an analysis module: used for analyzing the alarm data to generate first layer data and second layer data, wherein the first layer data is data describing the attributes of the alarm data, and the second layer data is an alarm picture;
a metadata module: used for calculating the metadata value of the alarm data at the current moment according to the first layer data and the second layer data;
a similarity determination module: used for obtaining the metadata values of the alarm data in the database within the optional time period, calculating the similarity between the metadata value at the current moment and the metadata values of the alarm data within the optional time period, and judging the alarm data at the current moment to be new alarm data if the similarity is smaller than a second preset threshold;
a queue module: for inserting the alarm data of the current moment into the alarm message queue.
CN202111221825.2A 2021-10-20 2021-10-20 Frame comparison method analysis method and device Pending CN113971229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111221825.2A CN113971229A (en) 2021-10-20 2021-10-20 Frame comparison method analysis method and device

Publications (1)

Publication Number Publication Date
CN113971229A true CN113971229A (en) 2022-01-25

Family

ID=79588125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111221825.2A Pending CN113971229A (en) 2021-10-20 2021-10-20 Frame comparison method analysis method and device

Country Status (1)

Country Link
CN (1) CN113971229A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101551823A (en) * 2009-04-20 2009-10-07 浙江师范大学 Comprehensive multi-feature image retrieval method
CN103136243A (en) * 2011-11-29 2013-06-05 中国电信股份有限公司 File system duplicate removal method and device based on cloud storage
US20180357261A1 (en) * 2015-11-30 2018-12-13 Entit Software Llc Alignment and deduplication of time-series datasets
CN105631487A (en) * 2015-12-31 2016-06-01 北京奇艺世纪科技有限公司 Image comparison method, device, video comparison method and device
CN106557545A (en) * 2016-10-19 2017-04-05 北京小度互娱科技有限公司 Video retrieval method and device
CN109218721A (en) * 2018-11-26 2019-01-15 南京烽火星空通信发展有限公司 A kind of mutation video detecting method compared based on frame
CN110275975A (en) * 2019-06-26 2019-09-24 北京深醒科技有限公司 A kind of method for quickly retrieving of similar pictures
CN111553259A (en) * 2020-04-26 2020-08-18 北京宙心科技有限公司 Image duplicate removal method and system
CN111639212A (en) * 2020-05-27 2020-09-08 中国矿业大学 Image retrieval method in mining intelligent video analysis
CN111882536A (en) * 2020-07-24 2020-11-03 富德康(北京)科技股份有限公司 Method for monitoring quantity of bulk cargo based on picture comparison
CN112183249A (en) * 2020-09-14 2021-01-05 北京神州泰岳智能数据技术有限公司 Video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination