CN116935272A - Video content detection method and device, electronic equipment and storage medium - Google Patents

Video content detection method and device, electronic equipment and storage medium

Info

Publication number
CN116935272A
CN116935272A (application CN202310857526.0A)
Authority
CN
China
Prior art keywords
video
curve
distance
similarity
curve point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310857526.0A
Other languages
Chinese (zh)
Other versions
CN116935272B (en)
Inventor
王伟
陆赞信
莫锡舟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iMusic Culture and Technology Co Ltd
Original Assignee
iMusic Culture and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iMusic Culture and Technology Co Ltd filed Critical iMusic Culture and Technology Co Ltd
Priority to CN202310857526.0A priority Critical patent/CN116935272B/en
Publication of CN116935272A publication Critical patent/CN116935272A/en
Application granted granted Critical
Publication of CN116935272B publication Critical patent/CN116935272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video content detection method and apparatus, an electronic device and a storage medium. The method includes: acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video; performing video key frame extraction processing on the video to be detected to obtain video key frames; performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve; performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result; and, when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos. By performing similarity calculation processing on the motion characteristic curve, the embodiment of the invention can improve the efficiency of video content detection, and can be widely applied in the technical field of video detection.

Description

Video content detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of video detection technologies, and in particular, to a method and apparatus for detecting video content, an electronic device, and a storage medium.
Background
With the rapid development of networks, the types of videos and the ways of obtaining them have become diverse. For copyright-related videos such as film-and-television content and user-created content, the same or different search conditions may return large numbers of videos whose titles and covers differ but whose content is largely repeated. Existing video detection technology generally relies on manual review to determine whether video content is repeated, or directly compares digest (hash) values of the video files. Such detection is inefficient, and repeated content that has undergone changes in video format, resolution, color, brightness and the like is difficult to detect.
In view of the foregoing, there is a need for solving the technical problems in the related art.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, an electronic device, and a storage medium for detecting video content, so as to improve the accuracy of video detection.
In one aspect, the present invention provides a video content detection method, including:
acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
performing video key frame extraction processing on the video to be detected to obtain video key frames;
Performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos.
Optionally, the performing video key frame extraction processing on the video to be detected to obtain a video key frame includes:
carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
and selecting an image frame with the average inter-frame difference local maximum value from the average inter-frame difference image to be determined as a video key frame.
Optionally, the performing motion feature extraction processing on the video key frame to obtain a motion feature curve includes:
performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
performing distance contrast processing on the video key frames according to the object contour coordinate array to obtain a target object;
extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
And calculating the key frame position array and the key frame time array according to the smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve.
Optionally, the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result, including:
performing object contour comparison processing on the first video and the second video to obtain the same object;
performing similar distance calculation on the same object according to the motion characteristic curve to obtain a similar distance;
and carrying out video similarity calculation according to the same object and the similar distance to obtain a video similarity calculation result.
Optionally, the performing object contour comparison processing on the first video and the second video to obtain the same object includes:
object contour information extraction processing is carried out on the first video and the second video respectively, so that first video object contour information and second video object contour information are obtained;
and carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
Optionally, the calculating the similar distance to the same object according to the motion characteristic curve to obtain a similar distance includes:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
filling the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
and carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
Optionally, the performing recursive computation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
Performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
and calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
On the other hand, the embodiment of the invention also provides a video content detection device, which comprises:
The first module is used for acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
the second module is used for extracting and processing video key frames of the video to be detected to obtain video key frames;
the third module is used for extracting the motion characteristics of the video key frames to obtain a motion characteristic curve;
the fourth module is used for carrying out similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and a fifth module, configured to determine that the first video and the second video are duplicate content videos when the video similarity calculation result meets a preset condition.
Optionally, the second module is configured to perform video key frame extraction processing on the video to be detected to obtain a video key frame, and includes:
the first unit is used for carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
and a second unit, configured to select an image frame with an average inter-frame difference local maximum value from the average inter-frame difference image, and determine the image frame as a video key frame.
Optionally, the third module is configured to perform motion feature extraction processing on the video keyframe to obtain a motion feature curve, and includes:
the third unit is used for carrying out image segmentation processing on the video key frames to obtain an object contour coordinate array;
a fourth unit, configured to perform distance comparison processing on the video keyframe according to the object profile coordinate array to obtain a target object;
a fifth unit, configured to extract the position and time of the target object to obtain a key frame position array and a key frame time array;
and a sixth unit, configured to calculate the key frame position array and the key frame time array according to a smooth cubic polynomial interpolation motion curve expression, so as to obtain a motion characteristic curve.
Optionally, the fourth module is configured to perform similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result, and includes:
a seventh unit, configured to perform object contour comparison processing on the first video and the second video to obtain the same object;
an eighth unit, configured to perform similar distance calculation on the same object according to the motion characteristic curve, to obtain a similar distance;
And a ninth unit, configured to perform video similarity calculation according to the same object and the similar distance, to obtain a video similarity calculation result.
Optionally, the seventh unit is configured to perform object contour comparison processing on the first video and the second video to obtain the same object, and includes:
the first subunit is used for respectively extracting object contour information of the first video and the second video to obtain first video object contour information and second video object contour information;
and the second subunit is used for carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
Optionally, the eighth unit is configured to perform similar distance calculation on the same object according to the motion characteristic curve, to obtain a similar distance, and includes:
a third subunit, configured to acquire a motion characteristic curve of the same object in the first video, and determine the motion characteristic curve as a first curve;
a fourth subunit, configured to acquire a motion characteristic curve of the same object in the second video, and determine the motion characteristic curve as a second curve;
A fifth subunit, configured to fill the first curve and the second curve respectively, to obtain a first curve position array and a second curve position array;
and the sixth subunit is used for carrying out recursive calculation processing on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
The sixth subunit is configured to perform recursive computation on the first curve position array and the second curve position array according to a similarity distance calculation formula, to obtain a similarity distance, and includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
And calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
On the other hand, the embodiment of the invention also discloses electronic equipment, which comprises a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
In another aspect, embodiments of the present invention also disclose a computer readable storage medium storing a program for execution by a processor to implement a method as described above.
In another aspect, embodiments of the present application also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
Compared with the prior art, the technical scheme provided by the application has the following technical effects: according to the embodiment of the application, the motion characteristic curve is obtained by carrying out motion characteristic extraction processing on the video key frame, and then the similarity calculation processing is carried out on the video to be detected according to the motion characteristic curve, so that the video similarity calculation result is obtained; the video similarity can be calculated through the motion characteristic curve, and repeated contents after processing of video formats, resolution, color, brightness changes and the like can be detected.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an implementation environment of a video content detection method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for detecting video content according to an embodiment of the present application;
fig. 3 is a specific flowchart of a video content detection method according to an embodiment of the present application;
fig. 4 is a flowchart of step S302 in fig. 3;
fig. 5 is a flowchart of step S303 in fig. 3;
fig. 6 is a flowchart of step S304 in fig. 3;
FIG. 7 is a schematic diagram of similarity distance calculation according to an embodiment of the present application;
FIG. 8 is a timing diagram of one implementation provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video content detection apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the related art, video repetition detection generally relies on manual review to determine whether video content is repeated, or directly compares digest (hash) values of the video files, or extracts a frame picture from every second of video and compares the pictures by cosine similarity, hash algorithms, histograms and the like, with the video repetition degree judged from the final video similarity produced by a fixed algorithm. However, manually identifying repeated videos is inefficient; comparing video file digest values cannot detect repeats whose format, resolution, brightness and the like differ while the content is the same; and the picture-similarity approach only considers the similarity between per-second frame pictures, cannot detect repeats of pictures whose colors and the like have undergone nonlinear changes, and has low video content detection efficiency.
In order to solve the problems in the related art, embodiments of the present application provide a video content detection method, apparatus, electronic device, and storage medium, where in the video content detection method, a video to be detected is obtained, where the video to be detected includes a first video and a second video; performing video key frame extraction processing on the video to be detected to obtain video key frames; performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve; performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result; and when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos. According to the embodiment of the application, the video repeatability is obtained by extracting the object motion characteristics in the video and performing comparison calculation so as to determine whether the video content is repeated or not, so that the accuracy of video content detection is improved.
Fig. 1 is a schematic diagram of an implementation environment of a video content detection method according to an embodiment of the present application. Referring to fig. 1, the software and hardware main body of the implementation environment mainly includes a video object motion feature extractor 101 and a video repeatability calculator 102, where the video object motion feature extractor 101 is used for inputting a video file, extracting video key frames, performing image segmentation, performing the same object estimation extraction in an image, and extracting motion vector features of the same object; the video repetition calculator 102 is configured to perform feature similarity calculation on video segments according to motion vector features of different objects in the video, and perform feature similarity comprehensive calculation on different video segments.
Fig. 2 is a flowchart of a video content detection method provided in the embodiment of the application in the above implementation environment, where a video to be detected and a reference video are input, a video object motion feature extractor performs object extraction and object motion feature curve generation on the video, and a video repeatability calculator performs contour similarity comparison and contour similarity calculation on a plurality of objects in different videos according to the result of the video object motion feature extractor, and finally calculates and outputs a video similarity result to determine whether the video is repeated.
Referring to fig. 3, an embodiment of the present application provides a video content detection method, including:
s301, acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
s302, video key frame extraction processing is carried out on the video to be detected, and video key frames are obtained;
s303, performing motion feature extraction processing on the video key frames to obtain motion feature curves;
s304, performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and S305, when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos.
In the embodiment of the invention, a video to be detected is first acquired, wherein the video to be detected comprises a first video and a second video. The first video is the comparison video and the second video is the reference video; alternatively, the first video can be used as the reference video and the second video as the comparison video. Video key frame extraction processing is performed on the video to be detected to obtain the video key frame pictures, including the video key frames of the first video and the video key frames of the second video, where a video key frame is a video frame in which the video picture changes obviously. Motion characteristic extraction processing is performed on the video key frames: motion characteristics are extracted for similar objects in the video key frames of the first video and of the second video respectively, yielding motion characteristic curves that include a first video motion characteristic curve and a second video motion characteristic curve. Similarity calculation processing is performed on the video to be detected according to the motion characteristic curves, and the video similarity calculation result is obtained by calculating the similarity of the first video motion characteristic curve and the second video motion characteristic curve. When the video similarity calculation result meets a preset condition, the first video and the second video are determined to be repeated content videos.
Further as an optional embodiment, referring to fig. 4, in step S302, the performing video key frame extraction processing on the video to be detected to obtain a video key frame includes:
s401, carrying out inter-frame difference processing on two adjacent frames of images which are continuous in time in the video to be detected to obtain an average inter-frame difference image;
s402, selecting an image frame with the average inter-frame difference local maximum value from the average inter-frame difference image to be determined as a video key frame.
In the embodiment of the invention, inter-frame difference processing is performed on temporally adjacent frames of the video to be detected. Taking the first video as an example, for each pair of temporally adjacent frames in the first video, the absolute differences of the gray values of corresponding pixels are computed and averaged to obtain the average inter-frame difference image. The original frames at which the average inter-frame difference reaches a local maximum are then selected as key frames of the video, giving the video key frames.
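As a minimal sketch of this step (assuming OpenCV and NumPy are available; function and variable names are illustrative and not taken from the patent), key frames could be selected as the frames whose mean absolute gray-level difference from the previous frame is a local maximum:

```python
# Hedged sketch: key-frame selection by average inter-frame difference.
import cv2
import numpy as np

def extract_key_frames(video_path):
    cap = cv2.VideoCapture(video_path)
    frames, diffs = [], []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # average absolute gray-value difference between adjacent frames
            diffs.append(float(np.mean(cv2.absdiff(gray, prev_gray))))
        frames.append(frame)
        prev_gray = gray
    cap.release()

    # keep frames where the average inter-frame difference is a local maximum
    key_frames = []
    for i in range(1, len(diffs) - 1):
        if diffs[i] > diffs[i - 1] and diffs[i] > diffs[i + 1]:
            key_frames.append(frames[i + 1])  # frame following the i-th pair
    return key_frames
```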
Further as an optional embodiment, referring to fig. 5, in step S303, the performing motion feature extraction processing on the video key frame to obtain a motion feature curve includes:
s501, performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
S502, performing distance comparison processing on the video key frames according to the object contour coordinate array to obtain a target object;
s503, extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
s504, calculating the key frame position array and the key frame time array according to the smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve.
In the embodiment of the invention, motion feature extraction processing is performed on the video key frames of the video to be detected to obtain motion characteristic curves; the first video is taken as an example. Image segmentation processing is performed on the video key frames of the first video: a histogram-based image segmentation method is applied to each video key frame of the first video to obtain the contour of each object in the key frame, and the consecutive contour coordinates are stored in order to obtain an object contour coordinate array. Distance comparison processing is then performed on the video key frames of the first video according to the object contour coordinate arrays: by calculating the distances between the coordinates of the object contour edge coordinate arrays in consecutive video key frames, it is determined whether the contours belong to the same object, and the target object is obtained. The position and time of the target object are extracted by sequentially storing the times at which the target object appears in the video key frames, e.g. T_i = [t_0, t_1, ..., t_{n-1}, t_n], where n denotes the number of times the target object appears in the first video, t_n denotes a time at which the target object appears in the first video, and T_i denotes the key frame time array. With the lower-left corner of the picture taken as the origin, the array of center positions of the target object in the different key frame pictures is extracted and stored as P_i = [p_0, p_1, ..., p_{n-1}, p_n], where p_n denotes a position of the target object in the first video and P_i denotes the key frame position array. The speeds of the first and last points are set to v_0 = v_n = 0, and the speed of each remaining point is calculated from the distance between its position and the previous position and the corresponding time difference. The key frame position array and the key frame time array are then calculated according to the smooth cubic polynomial interpolation motion curve expression to obtain the motion characteristic curve. The smooth cubic polynomial interpolation motion curve is computed for each pair of consecutive motion coordinates p_{n-1} and p_n, and its expression is:
Q(t) = a_0 + a_1(t - t_0) + a_2(t - t_0)^2 + a_3(t - t_0)^3,  t_0 ≤ t ≤ t_n
where a_0, a_1, a_2 and a_3 are parameters to be determined. Let h = p_n - p_{n-1} and W = t_n - t_{n-1}; the curve between two adjacent points is then calculated by substituting into the above formula the parameter values determined from h, W and the point speeds. Taking the consecutive motion coordinate points p_0 and p_1 as an example, the motion points between p_0 and p_1 are obtained from the interpolation expression in the same way.
The motion points between the object positions of every other pair of consecutive key frame pictures can likewise be calculated from the smooth cubic polynomial interpolation motion curve expression with their respective parameters, so as to obtain the motion characteristic curve.
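A minimal sketch of this interpolation is given below, under the assumption that the smooth cubic described above behaves like a piecewise cubic Hermite spline constrained by the key-frame positions and the point speeds (v_0 = v_n = 0, interior speeds from finite differences); the function name and sampling density are illustrative:

```python
# Hedged sketch: motion characteristic curve from key-frame object centers
# via smooth cubic (Hermite) interpolation, then dense sampling ("filling").
import numpy as np
from scipy.interpolate import CubicHermiteSpline

def motion_curve(times, centers, samples_per_segment=20):
    t = np.asarray(times, dtype=float)        # key frame time array T_i = [t_0, ..., t_n]
    p = np.asarray(centers, dtype=float)      # key frame position array P_i, shape (n+1, 2)

    v = np.zeros_like(p)                      # v_0 = v_n = 0
    dt = np.diff(t)[:, None]                  # time differences, shape (n, 1)
    v[1:-1] = (p[1:-1] - p[:-2]) / dt[:-1]    # interior velocity: (current - previous) / dt

    spline = CubicHermiteSpline(t, p, v, axis=0)   # piecewise cubic Q(t)

    # densely sample the curve, producing the filled point position array used later
    t_dense = np.linspace(t[0], t[-1], samples_per_segment * (len(t) - 1) + 1)
    return spline(t_dense)
```

The same densification step also covers the curve "filling" described later, where interpolated points are inserted between the original key-frame positions of both videos before the similarity distance is computed.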
Further as an optional implementation manner, referring to fig. 6, in step S304, the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result includes:
s601, performing object contour comparison processing on the first video and the second video to obtain the same object;
s602, carrying out similar distance calculation on the same object according to the motion characteristic curve to obtain a similar distance;
and S603, performing video similarity calculation according to the same object and the similar distance to obtain a video similarity calculation result.
In the embodiment of the invention, object contour comparison processing is performed on the first video and the second video to obtain the same objects. It should be noted that performing distance comparison processing on the video key frames according to the object contour coordinate arrays to obtain the target object refers to finding the same object across the video key frames within the first video or the second video, whereas comparing the first video with the second video finds the objects that are the same in the first video and the second video. Similar distance calculation is performed on each such object according to the motion characteristic curves: the similar distance is obtained by comparing the motion characteristic curve of the object in the first video with the motion characteristic curve of the same object in the second video. Video similarity calculation is then performed according to the same objects and the similar distances to obtain the video similarity calculation result, which includes counting results such as the number of all objects in the first video, the number of objects with similar contours, and the similar distances between the motion characteristic curves of the similar objects. Following the principle that the more similar objects there are and the smaller the similar distances of their motion characteristic curves are, the higher the similarity of the video contents, let the number of objects extracted from the first video be a and the number of objects whose contours are similar to objects in the second video be c (where c < a), and let the similar distances of the motion characteristic curves of the similar-contour objects be [s_1, s_2, ..., s_c]; the final video similarity calculation result S is then calculated from a, c and these similar distances.
A repetition threshold of 0.7 is taken initially; when the video similarity calculation result S is greater than 0.7, the videos are judged to be repeated. The repetition threshold can be optimized according to test results.
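The exact formula for S is not reproduced in the text above, so the sketch below uses a deliberately simple placeholder aggregation (contour-match ratio scaled by a distance penalty) purely for illustration; only the 0.7 threshold comes from the description, and the function name and aggregation are assumptions:

```python
# Hedged sketch: turning the object count a, the matched-object curve distances
# [s_1, ..., s_c] and the 0.7 repetition threshold into a decision.
# The aggregation below is a placeholder, not the patent's formula for S.
def is_repeated(a, similar_distances, threshold=0.7):
    c = len(similar_distances)                 # number of similar-contour objects
    if a == 0 or c == 0:
        return 0.0, False
    # more matched objects and smaller curve distances -> larger S
    distance_penalty = sum(1.0 / (1.0 + s) for s in similar_distances) / c
    S = (c / a) * distance_penalty
    return S, S > threshold
```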
Further optionally, in step S601, the performing object contour comparison processing on the first video and the second video to obtain the same object includes:
object contour information extraction processing is carried out on the first video and the second video respectively, so that first video object contour information and second video object contour information are obtained;
and carrying out normalization processing on the first video object contour information and the second video object contour information, and then carrying out comparison processing on the first video object contour information and the second video object contour information to obtain the same object.
In the embodiment of the invention, object contour information extraction processing is performed on the first video and the second video respectively to obtain the first video object contour information and the second video object contour information. The contour information is then normalized: each object contour is rotated so that its longest side lies at the bottom and is uniformly scaled down to 32 x 32, after which the object contours in the two video object motion segments are compared and whether two contours belong to the same object is determined according to a set threshold. Specifically, the distances between the coordinates in the contour edge coordinate arrays of the objects in the first video and the second video are calculated and the resulting distance value is compared with the preset threshold; when the calculated distance value is less than or equal to the preset threshold, the two contours are determined to belong to the same object, giving the objects that are the same in the first video and the second video. The preset distance threshold can be set according to actual conditions.
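A minimal sketch of this normalization and comparison follows, assuming OpenCV-style contours; the orientation heuristic, the mask-based distance and the threshold value are illustrative assumptions, not the patent's exact procedure:

```python
# Hedged sketch: normalize a contour (orient, scale into 32 x 32) and compare
# two normalized contours by a simple distance; names are illustrative.
import cv2
import numpy as np

def normalize_contour(contour, size=32):
    pts = contour.reshape(-1, 2).astype(np.float32)
    # orient so the longest side of the minimum-area rectangle is horizontal
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    if w < h:
        angle += 90.0
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    pts = cv2.transform(pts[None, :, :], M)[0]

    # uniformly scale the rotated contour into a size x size binary mask
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    scale = (size - 1) / max(float((maxs - mins).max()), 1e-6)
    pts = ((pts - mins) * scale).astype(np.int32)
    mask = np.zeros((size, size), dtype=np.uint8)
    cv2.drawContours(mask, [pts.reshape(-1, 1, 2)], -1, color=1, thickness=-1)
    return mask

def contours_match(contour_a, contour_b, threshold=0.1):
    m_a, m_b = normalize_contour(contour_a), normalize_contour(contour_b)
    distance = float(np.mean(np.abs(m_a.astype(np.float32) - m_b.astype(np.float32))))
    return distance <= threshold, distance
```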
Further optionally, in step S602, the calculating the similar distance to the same object according to the motion characteristic curve to obtain the similar distance includes:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
filling the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
and carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
In the embodiment of the invention, the motion characteristic curve of the same object in the first video is acquired and determined as the first curve, and the motion characteristic curve of the same object in the second video is acquired and determined as the second curve. The first curve and the second curve are then filled respectively to obtain the first curve position array and the second curve position array. Specifically, starting from the original key frame position array P_i = [p_0, p_1, ..., p_{n-1}, p_n], for every two consecutive motion coordinates p_{n-1} and p_n the points on the corresponding segment of the motion characteristic curve are calculated and inserted between the corresponding points of the original position array, so that the complete motion characteristic curve is generated as the filled point position array P = [p_0, p_1, ..., p_{n-1}, p_n]. The same operation is performed on the second video, giving the corresponding point array D = [d_0, d_1, ..., d_{m-1}, d_m]. Recursive calculation is then performed on the first curve position array and the second curve position array according to the similarity distance calculation formula to obtain the similarity distance.
Further, as a preferred embodiment, the performing, according to a similarity distance calculation formula, a recursive calculation on the first curve position array and the second curve position array to obtain a similarity distance includes:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
And calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
similarity distance calculation is carried out on the second curve point and the fourth curve point, and a first distance is obtained;
similarity distance calculation is carried out on the first curve point and the fourth curve point, and a second distance is obtained;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
In the embodiment of the present invention, referring to fig. 7, a first curve point and a second curve point are obtained from the first curve position array, where the second curve point is the curve point preceding the first curve point; a third curve point and a fourth curve point are obtained from the second curve position array, where the fourth curve point is the curve point preceding the third curve point. The first curve position array and the second curve position array are traversed recursively through the first curve point and the third curve point, with the first curve point taken as the end point of the first curve and the third curve point taken as the end point of the second curve, and the similarity distance between the first curve point and the third curve point is calculated. The similarity distance calculation formula is:
T(n, m) = max( min( T(n-1, m-1), T(n, m-1), T(n-1, m) ), p_n d_m )
T(0, 0) = p_0 d_0;  T(1, 0) = p_1 d_0;  T(0, 1) = p_0 d_1
where p_n d_m denotes the straight-line distance between point p_n in the first curve and point d_m in the second curve.
The first distance is obtained by calculating the similarity distance between the second curve point and the fourth curve point; the second distance is obtained by calculating the similarity distance between the first curve point and the fourth curve point; and the third distance is obtained by calculating the similarity distance between the second curve point and the third curve point. The first distance, the second distance and the third distance are compared and the minimum distance value is selected as the first calculation result; the straight-line coordinate distance between the first curve point and the third curve point is calculated as the fourth distance; and the first calculation result is compared with the fourth distance, the maximum distance value being selected as the second calculation result. Taking point p_1 of the first curve P as an example, its similarity distance when matched against curve D is T(1, 1) = max( min( p_0 d_0, p_0 d_1, d_0 p_1 ), p_1 d_1 ): first the minimum among p_0 d_0, p_0 d_1 and d_0 p_1 is determined, and the result is then compared with p_1 d_1 and the maximum is taken; as shown in fig. 7, p_1 d_1 should be taken as the matching distance of point p_1. Finally, every point of the point position array of curve P is matched against the point position array of curve D through the recursive calculation, and the resulting value T(n, m) is the maximum difference value of the two final motion characteristic curves, i.e. the similarity distance of the motion characteristic curves.
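The recursion above is essentially a discrete Fréchet-style distance between the two filled point arrays. A minimal dynamic-programming sketch is given below; the function name is illustrative, and the handling of the first row and column follows the standard discrete-Fréchet convention, which is an assumption beyond the three base cases quoted above:

```python
# Hedged sketch: the similarity distance T(n, m) between two filled curve
# point arrays P and D, implemented with dynamic programming.
import math

def curve_similarity_distance(P, D):
    def dist(p, d):                      # straight-line distance p_n d_m
        return math.hypot(p[0] - d[0], p[1] - d[1])

    n, m = len(P), len(D)
    T = [[0.0] * m for _ in range(n)]
    T[0][0] = dist(P[0], D[0])
    for i in range(1, n):                # first column (standard Fréchet convention)
        T[i][0] = max(T[i - 1][0], dist(P[i], D[0]))
    for j in range(1, m):                # first row (standard Fréchet convention)
        T[0][j] = max(T[0][j - 1], dist(P[0], D[j]))
    for i in range(1, n):
        for j in range(1, m):
            # T(i, j) = max(min(T(i-1, j-1), T(i, j-1), T(i-1, j)), p_i d_j)
            T[i][j] = max(min(T[i - 1][j - 1], T[i][j - 1], T[i - 1][j]),
                          dist(P[i], D[j]))
    return T[n - 1][m - 1]               # similarity distance of the two curves
```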
Referring to fig. 8, one implementation of the embodiment of the invention proceeds as follows: video key frame pictures are extracted from the uploaded video to be detected and the reference video; image segmentation is performed on each key frame picture to extract object contours; the same object is estimated across consecutive key frame pictures within the same video and the object data is de-duplicated, while the array of motion points of the same object across consecutive frame pictures is acquired and the speed corresponding to each point is calculated from the times of all points; rotation, scaling and other normalization processing is applied to all objects in the two videos; it is judged whether objects with similar contours exist in the two videos; cubic polynomial interpolation motion characteristic curves are generated for the similar-contour objects in the two videos respectively; the similar distance between the cubic polynomial interpolation motion characteristic curves of each similar-contour object in the two videos is calculated; the final video similarity is calculated from the number of similar-contour objects and the similar distances of their motion characteristic curves; and the similarity is output and whether the videos are repeated is determined.
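Putting the flow of fig. 8 together, a hedged end-to-end sketch might look as follows; every function and attribute name here refers to the illustrative helpers sketched earlier in this description (or to a hypothetical extraction helper) and is an assumption, not the patent's implementation:

```python
# Hedged sketch: end-to-end flow of fig. 8 composed from the illustrative helpers
# above; extract_objects_and_curves is a hypothetical helper that returns, per
# object, its contour and the filled motion-curve point array.
def detect_repeated_video(video_a_path, video_b_path):
    objects_a = extract_objects_and_curves(video_a_path)   # hypothetical helper
    objects_b = extract_objects_and_curves(video_b_path)   # hypothetical helper

    # match objects across the two videos by normalized contour,
    # then compare the motion characteristic curves of matched objects
    similar_distances = []
    for obj_a in objects_a:
        for obj_b in objects_b:
            matched, _ = contours_match(obj_a.contour, obj_b.contour)
            if matched:
                s = curve_similarity_distance(obj_a.curve_points, obj_b.curve_points)
                similar_distances.append(s)
                break

    # aggregate into the video similarity S and apply the repetition threshold
    return is_repeated(len(objects_a), similar_distances)
```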
Referring to fig. 9, an embodiment of the present invention further provides a video content detection apparatus, including:
the first module 901 is configured to obtain a video to be detected, where the video to be detected includes a first video and a second video;
A second module 902, configured to perform video key frame extraction processing on the video to be detected to obtain a video key frame;
a third module 903, configured to perform motion feature extraction processing on the video key frame to obtain a motion feature curve;
a fourth module 904, configured to perform similarity calculation processing on the video to be detected according to the motion characteristic curve, so as to obtain a video similarity calculation result;
a fifth module 905 is configured to determine that the first video and the second video are duplicate content videos when the video similarity calculation result meets a preset condition.
Referring to fig. 10, an embodiment of the present invention further provides an electronic device including a processor 1002 and a memory 1001; the memory is used for storing programs; the processor executes the program to implement the method as described above.
Corresponding to the method of fig. 1, an embodiment of the present invention also provides a computer-readable storage medium storing a program to be executed by a processor to implement the method as described above.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In summary, the embodiment of the invention has the following advantages. By generating cubic polynomial interpolation motion characteristic curves for the video objects, a motion characteristic curve with continuous speed is generated for the same object appearing in the video key frames, and similar distance calculation is performed on the generated object motion characteristic curves, which makes it possible to calculate the similar distance between two motion characteristic curves that may not be aligned. By comprehensively calculating the video similarity from the similar distances of the motion characteristic curves of multiple similar objects in the two videos, the embodiment of the invention improves the accuracy and efficiency of video content detection.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A method for detecting video content, the method comprising:
acquiring a video to be detected, wherein the video to be detected comprises a first video and a second video;
performing video key frame extraction processing on the video to be detected to obtain video key frames;
performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and when the video similarity calculation result meets a preset condition, determining that the first video and the second video are repeated content videos.
2. The method according to claim 1, wherein the performing video key frame extraction processing on the video to be detected to obtain video key frames comprises:
performing inter-frame difference processing on temporally adjacent image frames in the video to be detected to obtain average inter-frame difference images;
and selecting, from the average inter-frame difference images, an image frame at which the average inter-frame difference reaches a local maximum as a video key frame.
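For illustration only (not part of the claims): a minimal sketch of the key-frame selection described in claim 2, reading the average inter-frame difference as the mean absolute pixel difference between consecutive grayscale frames; the function name, window size, and NumPy representation are assumptions.

```python
import numpy as np

def select_key_frames(frames, window=5):
    """Pick frames where the average inter-frame difference is a local maximum.

    frames: list of 2-D grayscale images (NumPy arrays) in temporal order.
    Returns the indices of the selected video key frames.
    """
    # Mean absolute difference between each pair of temporally adjacent frames.
    diffs = [
        np.abs(frames[i].astype(np.int32) - frames[i - 1].astype(np.int32)).mean()
        for i in range(1, len(frames))
    ]
    key_indices = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        # Keep the frame whose difference is the largest within its local window.
        if d == max(diffs[lo:hi]):
            key_indices.append(i + 1)  # diffs[i] compares frames i and i + 1
    return key_indices
```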
3. The method according to claim 1, wherein the performing motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve comprises:
performing image segmentation processing on the video key frame to obtain an object contour coordinate array;
performing distance contrast processing on the video key frames according to the object contour coordinate array to obtain a target object;
extracting the position and time of the target object to obtain a key frame position array and a key frame time array;
and calculating the key frame position array and the key frame time array according to the smooth cubic polynomial interpolation motion curve expression to obtain a motion characteristic curve.
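For illustration only: a minimal sketch of turning the key frame position and time arrays of claim 3 into a smooth motion characteristic curve. The claim's smooth cubic polynomial interpolation expression is not spelled out here, so SciPy's cubic spline is used as a stand-in; all names are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def motion_feature_curve(key_times, key_positions, num_samples=100):
    """Fit a smooth cubic curve through (time, position) samples of a tracked object.

    key_times:     1-D array of key-frame timestamps, strictly increasing (at least 4 entries).
    key_positions: array of shape (N, 2) holding the object's (x, y) position per key frame.
    Returns densely sampled times and the interpolated (x, y) positions along the curve.
    """
    t = np.asarray(key_times, dtype=float)
    p = np.asarray(key_positions, dtype=float)
    # One piecewise-cubic spline per coordinate gives a smooth curve through the samples.
    spline_x = CubicSpline(t, p[:, 0])
    spline_y = CubicSpline(t, p[:, 1])
    t_dense = np.linspace(t[0], t[-1], num_samples)
    return t_dense, np.stack([spline_x(t_dense), spline_y(t_dense)], axis=1)
```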
4. The method of claim 1, wherein the performing similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result comprises:
performing object contour comparison processing on the first video and the second video to obtain the same object;
performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance;
and performing video similarity calculation according to the same object and the similarity distance to obtain a video similarity calculation result.
5. The method of claim 4, wherein the performing object contour comparison processing on the first video and the second video to obtain the same object comprises:
performing object contour information extraction processing on the first video and the second video respectively to obtain first video object contour information and second video object contour information;
and normalizing the first video object contour information and the second video object contour information and then comparing the two to obtain the same object.
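For illustration only: one plausible reading of the normalization-then-comparison step in claim 5, normalizing each object contour for position and scale and declaring a match when the mean point-wise distance is small; the resampling length and threshold are assumptions.

```python
import numpy as np

def normalize_contour(contour, num_points=64):
    """Center a contour on its centroid, scale it to unit size, and resample it to a fixed length."""
    c = np.asarray(contour, dtype=float)
    c = c - c.mean(axis=0)                          # translation invariance
    scale = np.sqrt((c ** 2).sum(axis=1)).max()
    if scale > 0:
        c = c / scale                               # scale invariance
    idx = np.linspace(0, len(c) - 1, num_points).astype(int)
    return c[idx]                                   # fixed-length representation

def is_same_object(contour_a, contour_b, threshold=0.1):
    """Treat two contours as the same object when their normalized shapes nearly coincide."""
    a, b = normalize_contour(contour_a), normalize_contour(contour_b)
    return np.linalg.norm(a - b, axis=1).mean() < threshold
```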
6. The method of claim 4, wherein the performing similarity distance calculation on the same object according to the motion characteristic curve to obtain a similarity distance comprises:
acquiring a motion characteristic curve of the same object in the first video, and determining the motion characteristic curve as a first curve;
acquiring a motion characteristic curve of the same object in the second video, and determining the motion characteristic curve as a second curve;
filling the first curve and the second curve respectively to obtain a first curve position array and a second curve position array;
and carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance.
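For illustration only: one plausible reading of the filling step in claim 6, in which both motion characteristic curves are evaluated on a shared, normalized time grid so the two curve position arrays can be compared point by point; the grid length and helper names are assumptions.

```python
import numpy as np

def fill_curves(times_a, positions_a, times_b, positions_b, num_points=128):
    """Resample two motion curves onto a common normalized time grid.

    times_*:     1-D arrays of key-frame timestamps for each curve.
    positions_*: arrays of shape (N, 2) with (x, y) positions along each curve.
    Returns two (num_points, 2) position arrays aligned index-for-index in normalized time.
    """
    grid = np.linspace(0.0, 1.0, num_points)

    def resample(times, positions):
        t = np.asarray(times, dtype=float)
        t = (t - t[0]) / (t[-1] - t[0])             # map the time span onto [0, 1]
        p = np.asarray(positions, dtype=float)
        # Linear interpolation of each coordinate onto the shared grid.
        return np.stack([np.interp(grid, t, p[:, k]) for k in range(p.shape[1])], axis=1)

    return resample(times_a, positions_a), resample(times_b, positions_b)
```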
7. The method of claim 6, wherein the carrying out recursive calculation on the first curve position array and the second curve position array according to a similarity distance calculation formula to obtain a similarity distance comprises:
acquiring a first curve point and a second curve point from the first curve position array, wherein the second curve point is the previous curve point of the first curve point;
acquiring a third curve point and a fourth curve point from the second curve position array, wherein the fourth curve point is a previous curve point of the third curve point;
performing recursive traversal on the first curve position array and the second curve position array through the first curve point and the third curve point, and performing similarity distance calculation on the first curve point and the third curve point to obtain a similarity distance;
and calculating the similarity distance between the first curve point and the third curve point, wherein the similarity distance calculation comprises the following steps:
performing similarity distance calculation on the second curve point and the fourth curve point to obtain a first distance;
performing similarity distance calculation on the first curve point and the fourth curve point to obtain a second distance;
performing similarity distance calculation on the second curve point and the third curve point to obtain a third distance;
comparing the first distance, the second distance and the third distance, and selecting the minimum distance value as a first calculation result;
carrying out coordinate linear distance calculation on the first curve point and the third curve point to obtain a fourth distance;
and comparing the first calculation result with the fourth distance, and selecting the maximum distance value as a second calculation result.
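The recursion in claim 7 (minimum of the three previous-point distances, then maximum with the direct coordinate distance between the current points) reads like the standard dynamic-programming form of the discrete Fréchet distance between two point sequences. For illustration only, a minimal memoized sketch under that reading, suitable for short curves; the function names are assumptions.

```python
import math
from functools import lru_cache

def similarity_distance(curve_a, curve_b):
    """Discrete Fréchet-style distance between two curves given as sequences of (x, y) points."""

    def point_distance(p, q):
        return math.dist(p, q)  # straight-line distance between two curve points

    @lru_cache(maxsize=None)
    def d(i, j):
        # d(i, j): distance accumulated up to curve_a[i] ("first curve point")
        # and curve_b[j] ("third curve point").
        if i == 0 and j == 0:
            return point_distance(curve_a[0], curve_b[0])
        if i == 0:
            return max(d(0, j - 1), point_distance(curve_a[0], curve_b[j]))
        if j == 0:
            return max(d(i - 1, 0), point_distance(curve_a[i], curve_b[0]))
        # First calculation result: the minimum of the first, second and third distances
        # computed from the previous curve points.
        first_result = min(d(i - 1, j - 1), d(i, j - 1), d(i - 1, j))
        # Second calculation result: the maximum of that minimum and the fourth distance,
        # i.e. the direct coordinate distance between the current pair of points.
        return max(first_result, point_distance(curve_a[i], curve_b[j]))

    return d(len(curve_a) - 1, len(curve_b) - 1)
```

The same recursion can be written iteratively over a table when the curves are long enough to exceed the recursion limit.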
8. A video content detection apparatus, the apparatus comprising:
a first module, configured to acquire a video to be detected, wherein the video to be detected comprises a first video and a second video;
a second module, configured to perform video key frame extraction processing on the video to be detected to obtain video key frames;
a third module, configured to perform motion characteristic extraction processing on the video key frames to obtain a motion characteristic curve;
a fourth module, configured to perform similarity calculation processing on the video to be detected according to the motion characteristic curve to obtain a video similarity calculation result;
and a fifth module, configured to determine that the first video and the second video are repeated content videos when the video similarity calculation result meets a preset condition.
9. An electronic device comprising a memory and a processor;
the memory is configured to store a program;
and the processor, when executing the program, implements the method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202310857526.0A 2023-07-12 2023-07-12 Video content detection method and device, electronic equipment and storage medium Active CN116935272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310857526.0A CN116935272B (en) 2023-07-12 2023-07-12 Video content detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310857526.0A CN116935272B (en) 2023-07-12 2023-07-12 Video content detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116935272A (en) 2023-10-24
CN116935272B (en) 2024-05-28

Family

ID=88385608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310857526.0A Active CN116935272B (en) 2023-07-12 2023-07-12 Video content detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116935272B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114167A1 (en) * 2005-11-07 2012-05-10 Nanyang Technological University Repeat clip identification in video data
CN102779184A (en) * 2012-06-29 2012-11-14 中国科学院自动化研究所 Automatic positioning method of approximately repeated video clips
CN113496187A (en) * 2020-09-22 2021-10-12 华扬联众数字技术股份有限公司 Video matching method and device based on video fingerprints
CN115346145A (en) * 2021-05-13 2022-11-15 北京字跳网络技术有限公司 Method, device, storage medium and computer program product for identifying repeated video
CN113313065A (en) * 2021-06-23 2021-08-27 北京奇艺世纪科技有限公司 Video processing method and device, electronic equipment and readable storage medium
CN115471772A (en) * 2022-09-16 2022-12-13 中国农业银行股份有限公司 Method, device, equipment and medium for extracting key frame
CN116188815A (en) * 2022-12-12 2023-05-30 北京数美时代科技有限公司 Video similarity detection method, system, storage medium and electronic equipment
CN116343080A (en) * 2023-02-20 2023-06-27 华南理工大学 Dynamic sparse key frame video target detection method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYEWON CHOI ET AL: "Effective fake news video detection using domain knowledge and multimodal data fusion on youtube", Pattern Recognition Letters, pages 44-52 *
YUAN ZHIYUN: "Research on content-based video structuring method", China Master's Theses Full-text Database (Electronic Journal), vol. 2017, no. 02 *

Also Published As

Publication number Publication date
CN116935272B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US9396569B2 (en) Digital image manipulation
CN108256394B (en) Target tracking method based on contour gradient
US20160086048A1 (en) Device and Method for Analyzing the Correlation Between an Image and Another Image or Between an Image and a Video
CN110599486A (en) Method and system for detecting video plagiarism
KR20070068408A (en) Video content understanding through real time video motion analysis
US11145080B2 (en) Method and apparatus for three-dimensional object pose estimation, device and storage medium
Kordelas et al. Content-based guided image filtering, weighted semi-global optimization, and efficient disparity refinement for fast and accurate disparity estimation
CN106169173B (en) Image interpolation method
Arbel et al. Texture-preserving shadow removal in color images containing curved surfaces
JP2015520467A (en) Apparatus and method for color harmonization of images
Fu et al. Quality assessment of retargeted images using hand-crafted and deep-learned features
CN113411582A (en) Video coding method, system, device and medium based on active contour
CN108960012B (en) Feature point detection method and device and electronic equipment
EP2536123B1 (en) Image processing method and image processing apparatus
Mukherjee et al. A hybrid algorithm for disparity calculation from sparse disparity estimates based on stereo vision
CN116935272B (en) Video content detection method and device, electronic equipment and storage medium
CN107704864A (en) Well-marked target detection method based on image object Semantic detection
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
Le et al. SpatioTemporal utilization of deep features for video saliency detection
US7386169B2 (en) Method for edge detection and contour stroke generation
CN112085683B (en) Depth map credibility detection method in saliency detection
Wong et al. Recognition of fish based on generalized color fourier descriptor
CN115210758A (en) Motion blur robust image feature matching
Izquierdo et al. Nonlinear Gaussian filtering approach for object segmentation
Abhayadev et al. Efficient retargeting of shadow images using improved CRIST

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant