CN117880759B - Intelligent video short message link efficient detection method


Info

Publication number
CN117880759B
Authority
CN
China
Prior art keywords
value
candidate
feature
candidate feature
feature points
Prior art date
Legal status
Active
Application number
CN202410274939.0A
Other languages
Chinese (zh)
Other versions
CN117880759A (en)
Inventor
曾永明
黄瑞先
周颖
王金龙
Current Assignee
Shenzhen Chengliye Technology Development Co ltd
Original Assignee
Shenzhen Chengliye Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Chengliye Technology Development Co ltd filed Critical Shenzhen Chengliye Technology Development Co ltd
Priority to CN202410274939.0A priority Critical patent/CN117880759B/en
Publication of CN117880759A publication Critical patent/CN117880759A/en
Application granted granted Critical
Publication of CN117880759B publication Critical patent/CN117880759B/en

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an intelligent efficient detection method for a video short message link, which comprises the following steps: acquiring video data before and after transmission through a link; acquiring key frame images in the videos before and after link transmission by adopting a three-step search method; obtaining candidate feature points in each key frame image by adopting a corner detection algorithm; obtaining texture feature values of the candidate feature points according to the LBP information differences around the candidate feature points and based on the gray level co-occurrence matrix and neighborhood differences; screening feature points by combining the texture feature values with the gray level, gradient and distance information between adjacent candidate feature points; calculating the similarity of the videos before and after link transmission according to the gray level and motion vector differences between the feature points in the key frame images before and after link transmission; and completing the efficient detection of the video short message link based on the video similarity. The invention aims to improve the efficiency of video short message link detection.

Description

Intelligent video short message link efficient detection method
Technical Field
The application relates to the technical field of image processing, in particular to an intelligent efficient detection method for a video short message link.
Background
With the development and popularization of mobile communication technology, short messages are widely used in personal life and business activities as a convenient and rapid communication mode. The stability and reliability of the short message communication link directly affect the service experience of users and the effect of service operation. Users' quality requirements for short message services keep rising, covering delivery rate, delay time, security and other aspects. Enterprises and service providers need to ensure the efficient operation of short message communication links to meet user needs and remain competitive. By monitoring the short message communication link, the running state of the link can be known in real time, problems can be warned of in advance, and faults can be quickly located. This helps improve operation and maintenance efficiency, shorten fault recovery time, and reduce the impact of service interruption.
Therefore, short message communication link monitoring has emerged under the drive of communication technology development, rising service quality requirements, operation and maintenance efficiency requirements, service data analysis requirements and other factors, and aims to ensure the efficient, safe and reliable operation of short message communication services. When the three-step search method is used to obtain pixel point motion vectors, the number of pixels in an image is so large that considerable resources and time are wasted. Therefore, the number of pixels participating in the calculation is reduced by screening feature points, so that the detection efficiency is improved.
Disclosure of Invention
In order to solve the technical problems, the invention provides an intelligent efficient detection method for a video short message link, which aims to solve the existing problems.
The invention discloses an intelligent video short message link high-efficiency detection method which adopts the following technical scheme:
The embodiment of the invention provides an intelligent video short message link high-efficiency detection method, which comprises the following steps:
acquiring video data before and after transmission through a link;
Acquiring key frame images in videos before and after link transmission by adopting a three-step search method; obtaining candidate feature points in each key frame image by adopting a corner detection algorithm; acquiring a first texture feature value of the candidate feature point according to LBP information difference around the candidate feature point; constructing a second texture feature value of the candidate feature point based on the gray level co-occurrence matrix and the neighborhood difference thereof; texture feature values of candidate feature points are obtained according to the first texture feature value and the second texture feature value; constructing a feature suppression factor of which the candidate feature points are preferred feature points according to gray level, gradient and distance information between adjacent candidate feature points; constructing a correction characteristic value of the candidate characteristic point according to the characteristic suppression factor and the texture characteristic value of which the candidate characteristic point is the preferred characteristic point, so as to screen the characteristic point;
Calculating the similarity of the videos before and after the link transmission according to the gray scale and the motion vector difference between the feature points in the key frame images before and after the link transmission; when the video similarity is larger than a preset threshold, the video short message link detection is normal; otherwise, the video short message link detects abnormality.
Preferably, the acquiring key frame images in the video before and after the link transmission by using the three-step search method includes:
A three-step search method is adopted to obtain the motion vector of each pixel point between each pair of adjacent images in the videos before and after link transmission; for each pair of adjacent images, calculating the average value of the Euclidean distances between the motion directions and the motion distances of the motion vectors of all pixel points;
And when the Euclidean distance average value is larger than a preset vector threshold value, taking the adjacent images to which the Euclidean distance average value belongs as key frame images.
Preferably, the obtaining the first texture feature value of the candidate feature point according to the LBP information difference around the candidate feature point includes:
obtaining an LBP image of the image by adopting an LBP algorithm; clustering LBP images by adopting a DBSCAN clustering algorithm to obtain clustering clusters; acquiring the element number of a cluster where the candidate feature points are located;
Respectively calculating the absolute value of the difference value of the LBP value of the cluster center between the cluster where the candidate feature point is located and each cluster, and calculating the sum value of the absolute values of the difference values of all clusters; calculating the sum of the absolute values of the differences of LBP values between the candidate feature points and all the pixel points in the neighborhood; dividing the product of the two sum values by the number of elements to obtain a first texture feature value of the candidate feature point.
Preferably, the constructing the second texture feature value of the candidate feature point based on the gray level co-occurrence matrix and the neighborhood difference thereof includes:
Constructing a first neighborhood and a second neighborhood for each pixel point, and acquiring contrast descriptors of the first neighborhood of the candidate feature points by adopting a gray level co-occurrence matrix;
For each pixel point in the second neighborhood of the candidate feature point, acquiring a contrast descriptor of the first neighborhood of each pixel point by adopting a gray level co-occurrence matrix;
And respectively calculating the absolute value of the difference value of the contrast descriptors between each pixel point and the candidate feature point, calculating the sum value of the absolute values of the difference values of all the pixel points in the second neighborhood of the candidate feature point, and taking the product result of the sum value and the contrast descriptors of the first neighborhood of the candidate feature point as the second texture feature value of the candidate feature point.
Preferably, the obtaining the texture feature value according to the first texture feature value and the second texture feature value includes: taking the product of the first texture feature value and the second texture feature value as the texture feature value of the candidate feature point.
Preferably, the constructing the feature suppression factor of the candidate feature point as the preferred feature point according to the gray level, gradient and distance information between adjacent candidate feature points includes:
Combining gray scale, gradient and distance difference to construct a feature distance between any two candidate feature points; and calculating the sum value of feature distances between F candidate feature points nearest to the candidate feature point, and taking the difference value of the normalized value of the sum value subtracted from 1 as a feature suppression factor of the candidate feature point as the preferred feature point.
Preferably, the feature distance between any two candidate feature points is constructed by combining gray scale, gradient and distance difference, and the method comprises the following steps:
Acquiring Euclidean distance between any two candidate feature points in an image; acquiring the Euclidean distance between gray values and gradient values of any two candidate feature points; obtaining a distance coefficient between any two candidate feature points;
and taking the product of the Euclidean distance in the image, the Euclidean distance between the gray values and gradient values, and the distance coefficient as the feature distance between any two candidate feature points.
Preferably, the obtaining a distance coefficient between any two candidate feature points includes:
obtaining edges of the key frame images by adopting an edge detection algorithm; when any two candidate feature points are co-edges, setting a distance coefficient between any two candidate feature points to be 0; otherwise, the distance coefficient between any two candidate feature points is set to 1.
Preferably, the step of constructing a corrected feature value of the candidate feature point according to the feature suppression factor and the texture feature value of the candidate feature point as the preferred feature point, thereby screening the feature point includes:
calculating a difference value result of a feature suppression factor of which the candidate feature point is a preferred feature point by subtracting the candidate feature point from 1, and taking a normalized value of a product of the difference value result and the texture feature value as a correction feature value of the candidate feature point;
And marking the candidate feature points with the corrected feature values larger than a preset preferred threshold value as preferred feature points, and taking the preferred feature points as the feature points after screening.
Preferably, the calculating the similarity of the video before and after the link transmission according to the gray scale and the motion vector difference between the feature points in the key frame images in the video before and after the link transmission includes:
taking the union of the key frame images in the videos before and after link transmission and the union of the feature points in corresponding key frame images, and analyzing each key frame image and each feature point therein;
For each feature point in each key frame image, calculating the gray level difference absolute value and the motion vector difference of the feature points of a sender and a receiver, wherein the motion vector difference is the Euclidean distance between the motion direction and the motion distance of a motion vector;
and calculating the product of the gray difference absolute value and the motion vector difference, taking the opposite number of the sum of the products of all the feature points of all the key frame images as an index of an exponential function based on a natural constant, and taking the calculation result of the exponential function as the video similarity before and after link transmission.
The invention has at least the following beneficial effects:
According to the invention, videos before and after link transmission are analyzed: key frame images in the videos are obtained first, and the subsequent analysis is performed on these key frame images, so that the data analysis is more accurate. Candidate feature points in the key frame images are then obtained with a corner detection algorithm, and the feature degree of the candidate feature points is mined at different levels by analyzing their texture features in the LBP space and in the gray level co-occurrence matrix of the surrounding neighborhood, so that the analysis of the texture feature values of the candidate feature points is more comprehensive. Meanwhile, according to the suppression relation between a candidate feature point and its adjacent candidate feature points, the corrected feature value of each candidate feature point is obtained and the preferred feature points are screened out, so that the retained feature points carry high-value information and the accuracy of subsequent calculation is improved.
Finally, the similarity between the two videos before and after link transmission is calculated according to the gray level and motion vector differences between the screened feature points in the key frame images, the video received by the receiver is compared with the video sent by the sender, and the detection of the video short message link is completed. By mining the key information in the images, the invention completes the screening of feature points, reduces the time complexity of the algorithm, greatly improves the efficiency of calculating the similarity between videos, improves the accuracy of comparing the videos before and after link transmission, and thus achieves efficient detection of the video short message link.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an intelligent video short message link high-efficiency detection method provided by the invention;
Fig. 2 is a flow chart of index construction for judging the condition of a video short message link.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of an intelligent video short message link high-efficiency detection method according to the invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a specific scheme of an intelligent video short message link high-efficiency detection method, which is specifically described below with reference to the accompanying drawings.
The embodiment of the invention provides an intelligent video short message link high-efficiency detection method.
Specifically, the following method for detecting the intelligent video short message link efficiently is provided, please refer to fig. 1, and the method comprises the following steps:
In step S001, video data before and after transmission through a link is acquired.
Detecting a video short message communication link is difficult: not only the short message delivery success rate but also whether the received video information matches the sent information must be considered. Therefore, the obtained video information needs to be detected and compared, so as to obtain a more accurate picture of the video link condition. The video short message link detection method of this embodiment is as follows: a video is transmitted to the receiver through the link at regular intervals, the difference between the video received by the receiver and the video sent by the sender is compared, and the quality of the link is judged accordingly.
So far, the video data before and after transmission through the link can be acquired by the above method.
Step S002, according to the video data before and after the link transmission, obtaining the key frame image, mining the internal characteristics of the key frame image according to the characteristic points which are preliminarily obtained in the key frame image, and calculating the similarity between the videos before and after the link transmission according to the characteristic points obtained by screening, thereby completing the efficient detection of the video short message link.
The video of the receiving party and the video of the sending party are compared, wherein the comparison method is to compare the similarity of the video key frame images, namely, whether the obtained video information is consistent is judged by comparing the corresponding key frame images of the respective video information.
In the embodiment, the feature points in the key frame images are compared to construct the video similarity, but the feature points acquired based on the traditional corner detection algorithm are more, and the information of the feature points is probably not valuable information in the video, so that the embodiment finishes the screening of the feature points by mining the key information in the images, thereby reducing the time complexity of the algorithm and simultaneously improving the efficient detection of the embodiment on the video short message link.
Firstly, the video durations of the receiver video and the sender video are compared. When the durations are inconsistent, it is directly judged that the video transmitted over the link has errors, which completes the preliminary check; when the durations are consistent, each obtained video is decomposed frame by frame, and the frames are arranged in sequence to obtain the images.
For each pair of adjacent frames of the video, the motion vector of each pixel point is obtained with the three-step search method. The average motion vector of a pair of adjacent frames is the average value of the Euclidean distances between the motion directions and motion distances of the motion vectors of all pixel points of the two frames; when it is larger than a preset vector threshold, the two frames are recorded as key frame images of the video. The vector threshold is set to 10 in this embodiment, and the practitioner can set it according to the actual situation. This procedure is applied until all key frame images in the video are acquired.
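The key-frame selection above can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the block size, the SAD matching cost, and the `is_key_frame` helper name are assumptions; the vector threshold of 10 follows the embodiment.

```python
import numpy as np

def three_step_search(prev, curr, block=8, step=4):
    """Minimal three-step block-matching search: for each block of `curr`,
    find the best-matching block in `prev` by SAD, halving the step each
    round.  Returns one (dy, dx) motion vector per block."""
    h, w = curr.shape
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(np.int64)
            cy, cx, s = 0, 0, step
            while s >= 1:
                best = None
                for dy in (0, -s, s):           # (0, 0) tried first
                    for dx in (0, -s, s):
                        y, x = by + cy + dy, bx + cx + dx
                        if 0 <= y <= h - block and 0 <= x <= w - block:
                            cand = prev[y:y + block, x:x + block].astype(np.int64)
                            sad = int(np.abs(ref - cand).sum())
                            if best is None or sad < best[0]:
                                best = (sad, dy, dx)
                cy, cx = cy + best[1], cx + best[2]
                s //= 2
            vectors.append((cy, cx))
    return np.array(vectors)

def is_key_frame(vectors, threshold=10.0):
    """A frame pair is 'key' when the mean motion-vector magnitude
    exceeds the preset vector threshold (10 in the embodiment)."""
    return float(np.linalg.norm(vectors, axis=1).mean()) > threshold
```

A static frame pair yields zero vectors and is not a key frame, while a frame pair with large displacement exceeds the threshold.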
The traditional video frame image comparison method uses a shape context algorithm for matching analysis: feature pixel points are screened and then matched according to the shape context algorithm. However, such a method analyzes only the video key frame images and cannot guarantee the integrity of the video, so whether the corresponding videos are the same video is determined here from the relation between the video key frame images. Because the block matching in the three-step search method has certain limitations and the number of pixels to be matched is excessive, the feature points need to be screened from the obtained images to reduce the amount of calculation in the matching process and improve the matching efficiency.
Since feature points carry positioning and other information in an image, the corner points in the image can be used as feature points; here the corner points are acquired with the Harris corner detection algorithm. Because the parameter settings of the Harris algorithm differ between images, too many corner points may be acquired, so the acquired corner points need to be screened further: the finally valuable corner points are extracted from them by further analysis and used as feature points. The corner points acquired by the algorithm are recorded as candidate feature points. The Harris corner detection algorithm is a known technique, and this embodiment does not describe it in detail.
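The Harris candidate extraction can be illustrated with a small self-contained sketch. In practice OpenCV's `cv2.cornerHarris` would be used; the pure-NumPy response below (box-filtered structure tensor, `k = 0.04`) and the relative threshold in `candidate_points` are assumed stand-ins for illustration only.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, with the
    structure tensor M summed over a win x win window."""
    img = img.astype(np.float64)
    iy, ix = np.gradient(img)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    p = win // 2
    def box(a):
        # crude box filter via shifted copies (wrap at borders)
        out = np.zeros_like(a)
        for dy in range(-p, p + 1):
            for dx in range(-p, p + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

def candidate_points(img, rel_thresh=0.1):
    """Pixels whose Harris response exceeds rel_thresh * max response
    are kept as candidate feature points."""
    r = harris_response(img)
    ys, xs = np.where(r > rel_thresh * r.max())
    return list(zip(ys.tolist(), xs.tolist()))
```

On a bright square, the strong responses cluster at the four corners while edges and flat regions are rejected.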
The selected feature points need to reflect the image edge change information and also need to consider the image texture information, so that the embodiment uses an LBP algorithm to acquire an LBP image of the image, analyzes according to the LBP characteristics of the candidate feature points, and acquires the feature values of the candidate feature points in the texture characteristics, thereby facilitating the acquisition of the preferred feature points from the candidate feature points. The LBP algorithm is a known technique, and this embodiment is not described in detail.
And analyzing the difference between the LBP value of the candidate feature point and the LBP value of the surrounding pixel points according to the LBP image, and acquiring the first texture feature value of the candidate feature point according to the surrounding information of the pixel points.
$$T_1 = \frac{1}{n}\sum_{c=1}^{C}\left|\bar{L}-L_c\right|\cdot\sum_{i=1}^{8}\left|L_0-L_i\right|$$
wherein $T_1$ represents the first texture feature value of the candidate feature point, $C$ represents the number of clusters obtained by applying the DBSCAN clustering algorithm to the LBP image, with parameters set to minpts = 3 and r = 3 (the DBSCAN clustering algorithm is a known technique, and this embodiment does not describe it in detail), $n$ is the element number of the cluster where the candidate feature point is located, $\left|\bar{L}-L_c\right|$ represents the absolute value of the difference between the cluster-center LBP value $\bar{L}$ of the cluster where the candidate feature point is located and the cluster-center LBP value of the $c$-th cluster, and $\left|L_0-L_i\right|$ represents the absolute value of the difference between the LBP value $L_0$ of the candidate feature point and that of the $i$-th pixel point in its surrounding 3×3 neighborhood.
Namely, the larger the differences $\left|\bar{L}-L_c\right|$ between the LBP value of the cluster center where the candidate feature point is located and those of the other clusters, the smaller the element number $n$ of the cluster where the current candidate feature point is located, and the larger the differences $\left|L_0-L_i\right|$ between the LBP values of the candidate feature point and its neighboring pixel points, the larger the feature value of the current candidate feature point, namely the larger the first texture feature value $T_1$.
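The first texture feature value can be sketched as follows. This is a simplified illustration: the 8-neighbour LBP operator is a basic variant, and the precomputed `labels` and `centers` arguments stand in for the embodiment's DBSCAN clustering of the LBP image (minpts = 3, r = 3), which is omitted here.

```python
import numpy as np

def lbp_image(gray):
    """Simplified 8-neighbour LBP: each neighbour >= centre sets one bit."""
    g = gray.astype(np.int64)
    out = np.zeros_like(g)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offs):
        nb = np.roll(np.roll(g, -dy, 0), -dx, 1)   # neighbour value at each pixel
        out |= (nb >= g).astype(np.int64) << bit
    return out

def first_texture_value(lbp, labels, centers, y, x):
    """T1 = (sum_c |Lbar - L_c|) * (sum_i |L0 - L_i|) / n.

    Lbar: cluster-centre LBP value of the candidate's cluster;
    L_c:  centre of cluster c; n: size of the candidate's cluster;
    L_i:  LBP value of the i-th pixel in the 3x3 neighbourhood of (y, x).
    """
    lab = labels[y, x]
    n = int((labels == lab).sum())
    lbar = centers[lab]
    inter = sum(abs(lbar - c) for c in centers)      # cluster-centre differences
    neigh = lbp[y - 1:y + 2, x - 1:x + 2]
    intra = int(np.abs(neigh - lbp[y, x]).sum())     # centre pixel adds 0
    return inter * intra / n
```

A candidate whose cluster is small and whose LBP value differs strongly from its neighbours receives a large value, matching the behaviour described above.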
The texture features of pixel points can also be analyzed with the gray level co-occurrence matrix, so this embodiment constructs gray level co-occurrence matrices to analyze the difference between the feature values of a candidate feature point and those of other pixel points, as well as the feature degree of the candidate feature point, thereby obtaining its feature value.
Therefore, the difference between the texture of the gray level co-occurrence matrix constructed from the 3×3 neighborhood of the candidate feature point and the textures of the gray level co-occurrence matrices obtained for each pixel point in its 5×5 neighborhood is analyzed, so as to obtain the second texture feature value of the candidate feature point based on gray level co-occurrence matrix analysis. The gray level co-occurrence matrix is a known technique, and this embodiment does not describe it in detail.
$$T_2 = G_0\cdot\sum_{j=1}^{24}\left|G_0-G_j\right|$$
wherein $T_2$ represents the second texture feature value of the candidate feature point, $G_0$ represents the contrast descriptor calculated from the gray level co-occurrence matrix of the 3×3 neighborhood of the candidate feature pixel point, and $\left|G_0-G_j\right|$ represents the absolute value of the difference between $G_0$ and the contrast descriptor calculated from the gray level co-occurrence matrix of the 3×3 neighborhood of the $j$-th pixel point in the 5×5 neighborhood of the candidate feature pixel point.
Namely, the larger the contrast descriptor $G_0$ obtained from the gray level co-occurrence matrix of the 3×3 neighborhood of the candidate feature point, and the larger its differences $\left|G_0-G_j\right|$ from the contrast descriptors of the 3×3-neighborhood gray level co-occurrence matrices of the pixel points in the surrounding 5×5 neighborhood, the larger the second texture feature value $T_2$ of the current candidate feature point, and the more pronounced the feature degree of the candidate feature point.
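The second texture feature value can be sketched as below. In practice `skimage.feature.graycomatrix`/`graycoprops` would compute the contrast descriptor; the minimal horizontal-offset GLCM here, the 8-level quantisation, and the function names are assumptions made for a self-contained illustration.

```python
import numpy as np

def glcm_contrast(patch, levels=8):
    """Contrast of the grey-level co-occurrence matrix for horizontal
    neighbours: sum_{i,j} P(i,j) * (i-j)^2."""
    q = (patch.astype(np.float64) / 256.0 * levels).astype(np.int64)
    p = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        p[i, j] += 1
    p /= p.sum()
    i, j = np.indices((levels, levels))
    return float((p * (i - j) ** 2).sum())

def second_texture_value(gray, y, x):
    """T2 = G0 * sum_j |G0 - Gj| over the 24 pixels j of the 5x5
    neighbourhood, each Gj the contrast of that pixel's own 3x3 patch."""
    g0 = glcm_contrast(gray[y - 1:y + 2, x - 1:x + 2])
    total = 0.0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if dy == 0 and dx == 0:
                continue
            yy, xx = y + dy, x + dx
            total += abs(g0 - glcm_contrast(gray[yy - 1:yy + 2, xx - 1:xx + 2]))
    return g0 * total
```

A flat region yields zero, while a local intensity anomaly produces a positive value, consistent with the formula.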
Correspondingly obtaining a first texture characteristic value and a second texture characteristic value of the candidate characteristic point based on the texture information according to the analysis, thereby constructing the texture characteristic value of the candidate characteristic point as follows:
$$T = T_1\cdot T_2$$
wherein $T$ represents the texture feature value of the candidate feature point, $T_1$ represents its first texture feature value, and $T_2$ represents its second texture feature value.
Namely, when both the first texture feature value $T_1$ and the second texture feature value $T_2$ of the candidate feature point are larger, the texture feature value $T$ of the candidate feature point is larger, indicating that the texture features of the current candidate feature point calculated under both the LBP algorithm and the gray level co-occurrence matrix are more obvious, and the texture information at the candidate feature point is more complex.
The degree of mutual influence between candidate feature points is then obtained. When feature points are selected in an image, not only the corner-point situation in the image edge information but also the interrelationship between the corner points must be considered, namely that corner points should not crowd close together. Accordingly, the feature suppression factor of a candidate feature point as a preferred feature point can be obtained from the degree of influence between candidate feature points:
$$d_{ab} = s_{ab}\cdot e_{ab}\cdot k_{ab},\qquad Y_a = 1-\mathrm{Norm}\left(\sum_{b=1}^{F}d_{ab}\right)$$
wherein $d_{ab}$ represents the feature distance between candidate feature point $a$ and candidate feature point $b$, $e_{ab}$ represents the Euclidean distance between the gray values and gradient values of the two candidate feature points, $s_{ab}$ represents the Euclidean distance between candidate feature points $a$ and $b$ in the image, and $k_{ab}$ is the distance coefficient indicating whether the two candidate feature points are co-edge: when they are co-edge, $k_{ab}=0$, otherwise $k_{ab}=1$. $Y_a$ represents the feature suppression factor of candidate feature point $a$ as a preferred feature point, $\mathrm{Norm}(\cdot)$ represents a normalization function, and $F$ represents the number of candidate feature points nearest to $a$ considered in the sum, set in this embodiment; the practitioner can set it according to the actual situation. The gradients of the candidate feature points are calculated with the Sobel operator, and the edges used to judge whether two candidate feature points are co-edge are obtained with the Canny edge detection algorithm; the Sobel operator and the edge detection algorithm are known techniques, and this embodiment does not describe them in detail.
Namely, when the feature difference $e_{ab}$ obtained by combining gray values and gradient values of the two candidate feature points is smaller, the distance $s_{ab}$ between the two candidate feature points is closer, or the two candidate feature points are on the same edge line ($k_{ab}=0$), the feature distance $d_{ab}$ of the two candidate feature points is smaller. When the feature distances between the candidate feature point and the candidate feature points around it are small, since corner points should not crowd together, the probability that the current candidate feature point is the preferred feature point is smaller, and the corresponding feature suppression factor $Y_a$ is larger.
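The feature distance and suppression factor described above can be sketched as follows. The `x/(1+x)` squashing standing in for the unspecified normalization function, the default `F = 3`, the co-edge set representation, and the function names are all assumptions of this sketch.

```python
import numpy as np

def feature_distance(p, q, gray, grad, same_edge):
    """d_ab = s_ab * e_ab * k_ab; k_ab = 0 when a and b share an edge."""
    s = np.hypot(p[0] - q[0], p[1] - q[1])               # spatial distance
    e = np.hypot(gray[p] - gray[q], grad[p] - grad[q])   # gray/gradient distance
    return s * e * (0.0 if same_edge else 1.0)

def suppression_factor(idx, points, gray, grad, co_edges, F=3):
    """Y_a = 1 - norm(sum of d_ab over the F spatially nearest b)."""
    p = points[idx]
    ds = []
    for j, q in enumerate(points):
        if j == idx:
            continue
        same = frozenset((idx, j)) in co_edges
        ds.append((np.hypot(p[0] - q[0], p[1] - q[1]),
                   feature_distance(p, q, gray, grad, same)))
    ds.sort(key=lambda t: t[0])                          # nearest first
    s = sum(d for _, d in ds[:F])
    return 1.0 - s / (1.0 + s)                           # assumed normalisation
```

When the nearby candidates are indistinguishable in gray and gradient, all feature distances vanish and the suppression factor reaches its maximum of 1.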
The texture feature value of each candidate feature point is obtained as described above, and the feature values of the candidate feature points are corrected with their feature suppression factors so as to screen out the preferred feature points:
R_i = Norm( T_i × (1 − Y_i) )

wherein R_i denotes the corrected feature value of candidate feature point i, Norm() denotes a normalization function, T_i denotes the texture feature value of the candidate feature point, and Y_i denotes the feature suppression factor of the candidate feature point as a preferred feature point.
That is, the larger the texture feature value of a candidate feature point and the smaller its feature suppression factor as a preferred feature point, the larger its corrected feature value. This embodiment sets a preferred threshold: when the corrected feature value of a candidate feature point is larger than the preferred threshold, the point is recorded as a preferred feature point. The preferred threshold can be set by the practitioner according to the actual situation.
All preferred feature points are marked as feature points, completing the selection of preferred feature points.
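The correction-and-screening step can be illustrated as follows. The threshold value of 0.5 and the min-max normalization are assumed placeholders, since the embodiment leaves both to the practitioner.

```python
import numpy as np

def select_preferred(texture_vals, suppression, pref_threshold=0.5):
    """Corrected value R_i = norm(T_i * (1 - Y_i)); candidate points whose
    corrected value exceeds the preferred threshold are kept."""
    raw = np.asarray(texture_vals, dtype=float) * (1.0 - np.asarray(suppression, dtype=float))
    corrected = (raw - raw.min()) / (np.ptp(raw) + 1e-12)  # min-max normalization
    preferred = np.flatnonzero(corrected > pref_threshold)  # indices of preferred points
    return preferred, corrected
```

For example, a candidate with a high texture value and a low suppression factor yields the largest corrected value and survives the screening.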
The video frame images received by the receiver are compared with the video frame images sent by the sender, based on the gray-level and motion-vector differences between corresponding feature points in the key frame images, to obtain the video similarity. Because the key frame images in the video before and after transmission may be inconsistent, and feature points in corresponding key frames may be missing and unable to be paired, this embodiment takes the union of the key frames before and after transmission and the union of the feature points in the corresponding key frames, and analyzes the feature points in the key frames before and after link transmission over these unions:
S = exp( − Σ_{c=1..C} Σ_{o=1..O_c} |Δg_{c,o}| × Δv_{c,o} )

wherein S denotes the video similarity before and after link transmission; exp() denotes the exponential function with the natural constant e as its base; C denotes the size of the union of the numbers of key frame images before and after link transmission; O_c denotes the size of the union of the numbers of feature points in the c-th key frame image before and after link transmission; |Δg_{c,o}| denotes the absolute gray difference of the o-th feature point in the c-th key frame image between sender and receiver; Δv_{c,o} denotes the motion vector difference of the o-th feature point in the c-th key frame image between sender and receiver. The motion vectors are obtained with the three-step search method, and the motion vector difference is computed as the Euclidean distance over the motion direction and motion distance of the motion vectors.
That is, the smaller the gray-value differences and motion-vector differences of the feature points in corresponding key frame images of the two videos before and after link transmission, the more similar the two videos, i.e. the larger the video similarity before and after link transmission.
A threshold γ = 0.9 is set: when the video similarity between the video received by the receiver and the video sent by the sender is larger than the preset threshold γ, the video short message link is judged to be normal; otherwise, the video short message link is judged to be abnormal. The index construction flow chart for judging the video short message link condition is shown in fig. 2.
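Under the assumption that the sender and receiver feature points have already been paired over the unions described above, the similarity computation and threshold test can be sketched as follows. The data layout and function names are hypothetical; a motion vector is represented here as a (direction, distance) pair, as used in the motion-vector difference.

```python
import math

def video_similarity(frames):
    """frames: one list per key frame in the union; each entry is a tuple
    (gray_sent, gray_recv, mv_sent, mv_recv) for one feature point, where a
    motion vector is a (direction, distance) pair.
    Returns S = exp(-sum_c sum_o |gray diff| * motion-vector diff)."""
    total = 0.0
    for frame in frames:
        for g_s, g_r, mv_s, mv_r in frame:
            # motion-vector difference: Euclidean distance over the
            # (motion direction, motion distance) components
            dv = math.hypot(mv_s[0] - mv_r[0], mv_s[1] - mv_r[1])
            total += abs(g_s - g_r) * dv
    return math.exp(-total)

def link_is_normal(similarity, gamma=0.9):
    """Threshold test of the embodiment: similarity above gamma = 0.9
    means the video short message link is judged normal."""
    return similarity > gamma
```

A video that arrives unchanged yields a zero exponent and similarity 1.0, while any per-point gray or motion-vector discrepancy drives the similarity exponentially toward 0.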
Thus, the high-efficiency detection of the video short message link is completed.
In summary, the embodiment of the invention analyzes the videos before and after link transmission. First, the key frame images in the videos are acquired, and the subsequent analysis is performed on these key frame images, making the data analysis more accurate. Then, the embodiment acquires candidate feature points in the key frame images with a corner detection algorithm, and mines the feature degree of the candidate feature points at different levels by analyzing their texture features in LBP space and in the surrounding neighborhood via the gray level co-occurrence matrix, making the analysis of the texture feature values of the candidate feature points more comprehensive. Meanwhile, according to the suppression relationship between each candidate feature point and its adjacent candidate feature points, the corrected feature values of the candidate feature points are obtained and the preferred feature points are screened out, so that the retained feature points carry high-value information and the accuracy of subsequent calculation is improved.
Finally, according to the gray-level and motion-vector differences between the screened feature points in the key frame images, the embodiment of the invention calculates the similarity between the two videos before and after link transmission, completing the comparison between the video received by the receiver and the video sent by the sender and, with it, the detection of the video short message link. By mining the key information in the images to complete the screening of feature points, the embodiment reduces the time complexity of the algorithm, greatly improves the efficiency of calculating the similarity between videos, improves the accuracy of comparing the videos before and after link transmission, and thereby achieves efficient detection of the video short message link.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and the same or similar parts of each embodiment are referred to each other, and each embodiment mainly describes differences from other embodiments.
The above embodiments are only for illustrating the technical solutions of the present application and not for limiting them; modifications of the technical solutions described in the foregoing embodiments, or equivalent replacements of some of their technical features, that do not cause the essence of the corresponding technical solutions to deviate from the scope of the technical solutions of the embodiments of the present application, are all included in the protection scope of the present application.

Claims (8)

1. The intelligent video short message link high-efficiency detection method is characterized by comprising the following steps of:
acquiring video data before and after transmission through a link;
Acquiring key frame images in videos before and after link transmission by adopting a three-step search method; obtaining candidate feature points in each key frame image by adopting a corner detection algorithm; acquiring a first texture feature value of the candidate feature point according to LBP information difference around the candidate feature point; constructing a second texture feature value of the candidate feature point based on the gray level co-occurrence matrix and the neighborhood difference thereof; obtaining texture feature values of the candidate feature points according to the first texture feature value and the second texture feature value; constructing a feature suppression factor of which the candidate feature points are preferred feature points according to gray level, gradient and distance information between adjacent candidate feature points; constructing a correction characteristic value of the candidate characteristic point according to the characteristic suppression factor and the texture characteristic value of which the candidate characteristic point is the preferred characteristic point, so as to screen the characteristic point;
calculating the similarity of the videos before and after the link transmission according to the gray scale and the motion vector difference between the feature points in the key frame images before and after the link transmission; when the video similarity is larger than a preset threshold, the video short message link detection is normal; otherwise, detecting abnormality of the video short message link;
the obtaining the first texture feature value of the candidate feature point according to the LBP information difference around the candidate feature point comprises the following steps:
obtaining an LBP image of the image by adopting an LBP algorithm; clustering LBP images by adopting a DBSCAN clustering algorithm to obtain clustering clusters; acquiring the element number of a cluster where the candidate feature points are located;
Respectively calculating the absolute value of the difference value of the LBP value of the cluster center between the cluster where the candidate feature point is located and each cluster, and calculating the sum value of the absolute values of the difference values of all clusters; calculating the sum of the absolute values of the differences of LBP values between the candidate feature points and all the pixel points in the neighborhood; dividing the product of the two sum values by the element number to obtain a first texture feature value of the candidate feature point;
the construction of the second texture feature value of the candidate feature point based on the gray level co-occurrence matrix and the neighborhood difference thereof comprises the following steps:
Constructing a first neighborhood and a second neighborhood for each pixel point, and acquiring contrast descriptors of the first neighborhood of the candidate feature points by adopting a gray level co-occurrence matrix;
For each pixel point in the second neighborhood of the candidate feature point, acquiring a contrast descriptor of the first neighborhood of each pixel point by adopting a gray level co-occurrence matrix;
And respectively calculating the absolute value of the difference value of the contrast descriptors between each pixel point and the candidate feature point, calculating the sum value of the absolute values of the difference values of all the pixel points in the second neighborhood of the candidate feature point, and taking the product result of the sum value and the contrast descriptors of the first neighborhood of the candidate feature point as the second texture feature value of the candidate feature point.
2. The method for efficiently detecting an intelligent video short message link according to claim 1, wherein the step of obtaining key frame images in videos before and after link transmission by using a three-step search method comprises:
The three-step search method is adopted to obtain the motion vector of each pixel point between each pair of adjacent images in the video before and after link transmission; calculating the Euclidean distance average value between the motion directions and motion distances of the motion vectors of all pixel points in each pair of adjacent images;
And when the Euclidean distance average value is larger than a preset vector threshold value, taking the adjacent images to which the Euclidean distance average value belongs as key frame images.
3. The method for efficiently detecting an intelligent video sms link according to claim 1, wherein said obtaining texture feature values according to the first texture feature value and the second texture feature value comprises: taking the product of the first texture feature value and the second texture feature value as the texture feature value of the candidate feature point.
4. The method for efficiently detecting an intelligent video short message link according to claim 1, wherein the constructing the feature suppression factor with the candidate feature point as the preferred feature point according to the gray level, gradient and distance information between the neighboring candidate feature points comprises:
Combining gray scale, gradient and distance difference to construct a feature distance between any two candidate feature points; calculating the sum value of the feature distances between the candidate feature point and the F candidate feature points nearest to it, and taking the difference obtained by subtracting the normalized value of the sum value from the number 1 as the feature suppression factor of the candidate feature point as the preferred feature point.
5. The method for efficiently detecting an intelligent video short message link according to claim 4, wherein the step of constructing a feature distance between any two candidate feature points by combining gray scale, gradient and distance difference comprises the steps of:
Acquiring Euclidean distance between any two candidate feature points in an image; acquiring the Euclidean distance between gray values and gradient values of any two candidate feature points; obtaining a distance coefficient between any two candidate feature points;
and taking the product of the two Euclidean distances and the distance coefficient as the feature distance between any two candidate feature points.
6. The method for efficiently detecting an intelligent video short message link according to claim 5, wherein the step of obtaining a distance coefficient between any two candidate feature points comprises:
obtaining edges of the key frame images by adopting an edge detection algorithm; when any two candidate feature points are co-edges, setting a distance coefficient between any two candidate feature points to be 0; otherwise, the distance coefficient between any two candidate feature points is set to 1.
7. The method for high-efficiency detection of an intelligent video short message link according to claim 1, wherein the constructing a correction feature value of a candidate feature point according to a feature suppression factor and a texture feature value of the candidate feature point as a preferred feature point, thereby screening the feature point comprises:
calculating the difference result of subtracting, from the number 1, the feature suppression factor of the candidate feature point as the preferred feature point, and taking the normalized value of the product of the difference result and the texture feature value as the correction feature value of the candidate feature point;
And marking the candidate feature points with the corrected feature values larger than a preset preferred threshold value as preferred feature points, and taking the preferred feature points as the feature points after screening.
8. The method for efficiently detecting an intelligent video short message link according to claim 1, wherein the calculating the similarity of the video before and after the link transmission according to the difference of gray scale and motion vector between feature points in key frame images in the video before and after the link transmission comprises:
Analyzing each key frame image and each characteristic point in the video before and after link transmission by taking the corresponding union set;
For each feature point in each key frame image, calculating the gray level difference absolute value and the motion vector difference of the feature points of a sender and a receiver, wherein the motion vector difference is the Euclidean distance between the motion direction and the motion distance of a motion vector;
and calculating the product of the gray difference absolute value and the motion vector difference, taking the opposite number of the sum of the products of all the feature points of all the key frame images as an index of an exponential function based on a natural constant, and taking the calculation result of the exponential function as the video similarity before and after link transmission.
CN202410274939.0A 2024-03-12 2024-03-12 Intelligent video short message link efficient detection method Active CN117880759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410274939.0A CN117880759B (en) 2024-03-12 2024-03-12 Intelligent video short message link efficient detection method


Publications (2)

Publication Number Publication Date
CN117880759A CN117880759A (en) 2024-04-12
CN117880759B true CN117880759B (en) 2024-05-17

Family

ID=90595063



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103297851A (en) * 2013-05-16 2013-09-11 中国科学院自动化研究所 Method and device for quickly counting and automatically examining and verifying target contents in long video
CN107844779A (en) * 2017-11-21 2018-03-27 重庆邮电大学 A kind of video key frame extracting method
CN116246174A (en) * 2023-04-26 2023-06-09 山东金诺种业有限公司 Sweet potato variety identification method based on image processing
CN116744006A (en) * 2023-08-14 2023-09-12 光谷技术有限公司 Video monitoring data storage method based on block chain




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant