CN115330711B - Image video content management method and system based on data processing - Google Patents

Image video content management method and system based on data processing

Info

Publication number
CN115330711B
Authority
CN
China
Prior art keywords
video
image
representative
frame
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210954101.7A
Other languages
Chinese (zh)
Other versions
CN115330711A (en)
Inventor
陈伟锋
杨毅伟
马三兵
罗耀忠
郑泽标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Youhaoxi Network Technology Co ltd
Original Assignee
Guangzhou Youhaoxi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Youhaoxi Network Technology Co ltd filed Critical Guangzhou Youhaoxi Network Technology Co ltd
Priority to CN202210954101.7A priority Critical patent/CN115330711B/en
Publication of CN115330711A publication Critical patent/CN115330711A/en
Application granted granted Critical
Publication of CN115330711B publication Critical patent/CN115330711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image video content management method and system based on data processing, comprising the following steps: acquiring first feature maps of a copyrighted video, performing scene segmentation of the copyrighted video according to the first feature maps, acquiring a background image of each scene video by an optical flow method, acquiring a representative map of the background image by wavelet transformation, acquiring second representative maps of a target video, and determining whether the target video infringes according to the first representative maps and the second representative maps. The scheme addresses the difficulty of identifying whether a UGC video infringes copyright.

Description

Image video content management method and system based on data processing
Technical Field
The invention relates to the field of video management, and in particular to an image video content management method and system based on data processing.
Background
With the development of UGC (User Generated Content) websites such as Bilibili and Youku, more and more users upload self-made videos to these sites. UGC content has greatly enriched the Internet, but UGC creators often have limited legal awareness and easily incorporate copyrighted videos when making their own. The resulting infringement harms the rights of copyright owners and exposes the UGC websites to legal risk, so these websites urgently need the ability to perform copyright identification on user-created videos.
Video infringement identification technologies already exist. For example, CN113569719A discloses an infringing-video determination method, device, storage medium, and electronic device. It extracts feature information from the video to be judged and checks whether matching feature information exists in a pre-established copyrighted-video feature information base; when a match is found, the video to be judged is determined to infringe. The system can thus automatically determine whether an obtained video is infringing, with higher efficiency than manual review; and because the feature base stores the feature information of all copyrighted videos, checking the video to be judged against it yields a comparatively accurate infringement result. However, that scheme requires the fingerprint information of the copyrighted videos, which is usually held only by the copyright owners, so a UGC platform can hardly obtain the fingerprint information of all copyrighted videos.
In the scheme disclosed in CN113435391A, a video to be detected is determined first and its first representative image is obtained; the similarity between that first representative image and a pre-stored second representative image of the copyrighted video is then computed, and when the similarity meets a requirement the video to be detected is flagged as a suspected infringing video and subjected to infringement identification to decide whether it infringes the copyright of the copyrighted video. However, in that scheme the representative image is a preview, cover, poster, still, or the like of the video to be detected, whereas a UGC video usually excerpts only part of the copyrighted video. For example, a user making a historical commentary video may clip only a battlefield scene from a historical war movie; such a video cannot be identified by that method.
Disclosure of Invention
To solve the difficulty of identifying whether a UGC video infringes copyright, the invention provides an image video content management method and system based on data processing.
In one aspect of the present invention, an image video content management method based on data processing is provided, the method comprising:
Step S1, acquiring a first copyrighted video, and cropping four first feature maps from each frame image of the first copyrighted video, where cropping the four first feature maps comprises: cropping an image of a first preset size at each of the four corners of the rectangular frame image to serve as a first feature map.
Step S2, calculating the difference between the tone values of the four first feature maps of the i-th frame of the first copyrighted video and those of the corresponding (i+1)-th frame, and when the difference is greater than a second preset value, splitting the first copyrighted video between the i-th and (i+1)-th frames to obtain a plurality of scene videos; where i is the frame index of the copyrighted video, ranging from 1 to the maximum frame count of the first copyrighted video minus one.
Step S3, extracting a background image frame by frame in each scene video by an optical flow method to obtain a plurality of background images per scene video, and taking the background image containing the most background pixels as the representative background image of the corresponding scene video.
Step S4, performing a wavelet transform on the representative background image and extracting its LL band as the first representative map of the corresponding scene video.
Step S5, repeating steps S1 to S4 for all copyrighted videos to obtain a plurality of scene videos per copyrighted video and a first representative map per scene video, yielding a first representative map set.
Step S6, acquiring a target video and cropping four first feature maps from each frame image of the target video; performing a wavelet transform on each of the four first feature maps of the target video and extracting their LL bands, obtaining four second representative maps per frame.
Step S7, searching for the four second representative maps of each frame of the target video within each first representative map in the first representative map set, and prompting that the target video may infringe when at least two second representative maps of the same frame can be found within some first representative map.
Further, the first preset size is: l = L × 0.15, h = H × 0.15, where l is the length of the first-preset-size image, h is its height, L is the length of the original image, and H is its height.
Further, the tone value is calculated as follows: take the arithmetic mean of the pixels of each of the four first feature maps; the four resulting means are the four tone values of the four first feature maps.
Further, whether the difference between tone values is greater than the second preset value is determined as follows: compare the four first feature maps of the two adjacent frames in one-to-one correspondence; when the tone-value difference of at least one first feature map exceeds 20%, the difference is deemed greater than the second preset value.
Further, the background image is obtained as follows: after a moving object is detected, subtract it from the original picture to obtain the stationary background, and fill all holes left in the background image after removal with 0 to obtain the background image.
Further, the background image containing the most background pixels is obtained as follows: compute the proportion of 0-valued pixels in each of the plurality of background images, and take the background image with the smallest 0-pixel proportion as the one containing the most background pixels.
Further, after prompting that the target video may infringe, it is further indicated which specific copyrighted video is infringed and at which minute of that copyrighted video the infringement occurs.
The invention also discloses an image video content management system based on data processing, comprising a memory and at least one processor, the memory storing program code implementing the above method and the at least one processor executing the program code.
According to the technical scheme, first feature maps are acquired from the copyrighted video, the copyrighted video is scene-segmented according to the first feature maps, the background image of each scene video is acquired by an optical flow method, the representative map of the background image is acquired by wavelet transformation, the second representative maps of the target video are acquired, and whether the target video infringes is determined from the first and second representative maps. The scheme addresses the difficulty of identifying whether a UGC video infringes copyright.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The following drawings show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 shows a correspondence between copyrighted video and representative pictures;
FIG. 3 is a correspondence between the target video and the representative graphs.
Detailed Description
The invention is described in detail with reference to the following drawings and detailed description.
As shown in fig. 1, in an embodiment, the present invention discloses a method for managing video content based on data processing, which specifically includes the following steps:
Step S1, acquiring a first copyrighted video, and cropping four first feature maps from each frame image of the first copyrighted video, where cropping the four first feature maps comprises: cropping an image of a first preset size at each of the four corners of the rectangular frame image to serve as a first feature map.
The copyrighted video described in this application refers to copyrighted video works, movie and television works, and recorded works; using or distributing such a video, in whole or in part, without the copyright owner's permission infringes the owner's copyright. The invention aims to identify whether a user's UGC content includes copyrighted video, so as to spare UGC creators and video websites unnecessary trouble.
A video usually consists of frames at a fixed frame rate: a movie is typically 24 frames per second, and some videos reach 60 frames per second or more. Video frames can be captured by reading a local video file or acquired from streaming media as a stream; the specific acquisition method is not limited by the present invention.
In film and television works, the moving part, such as a person, is usually kept in the middle of the frame to improve the viewing experience. In the present invention, to improve comparison efficiency, the immovable background is used as the feature, so an image of the first preset size is extracted only at each of the four corners of the frame.
The first preset size may be set empirically; preferably l = L × 0.15, h = H × 0.15, where l is the length of the first-preset-size image, h is its height, L is the length of the original image, and H is its height.
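As an illustrative sketch (not part of the patent text), the corner cropping of the preferred embodiment can be expressed in a few lines of Python with NumPy; the function name and the array conventions are assumptions of this sketch.

```python
import numpy as np

def corner_feature_maps(frame: np.ndarray, ratio: float = 0.15):
    """Crop the four corner patches of a frame (H x W x C array).

    `ratio` is the fraction of the frame's height/width used for each
    patch; 0.15 follows the preferred embodiment's l = L * 0.15 rule.
    """
    H, W = frame.shape[:2]
    h, w = int(H * ratio), int(W * ratio)
    return [
        frame[:h, :w],          # upper-left corner
        frame[:h, W - w:],      # upper-right corner
        frame[H - h:, :w],      # lower-left corner
        frame[H - h:, W - w:],  # lower-right corner
    ]
```

For a 1080 x 1920 frame this yields four 162 x 288 patches, one per corner.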
S2, calculating the difference between the tone values of the four first feature maps of the i-th frame of the first copyrighted video and those of the corresponding (i+1)-th frame, and when the difference is greater than a second preset value, splitting the first copyrighted video between the i-th and (i+1)-th frames to obtain a plurality of scene videos; where i is the frame index of the copyrighted video, ranging from 1 to the maximum frame count of the first copyrighted video minus one.
the movie works are often transferred, for example, from indoor to outdoor, and from daytime to night, and meanwhile, the movie works also have different near and far scenes, for example, the proportion of characters in the video image is large in the near scene, and the proportion of characters in the video image is small in the far scene. For the same scene, such as indoors, the indoor lighting is stable, and the background is blurred to some extent by modern photography, so that the color tone of the background of the person is similar whether the person looks from a close view or a distant view. However, when the indoor is transferred to the outdoor, the outdoor light is sufficient, and the color tone of the background can be obviously changed; the invention starts from the principle, and compares the color tones of the background of the front frame and the back frame to judge whether the transition exists.
Preferably, the tone value of a first feature map is obtained by taking the arithmetic mean of its pixels. Since the four first feature maps are cropped from the four corners of the frame, computing the arithmetic mean of the pixels of each of the four images yields four arithmetic means, that is, four tone values.
Preferably, the difference between the tone values of the four first feature maps of the i-th frame and those of the corresponding (i+1)-th frame is computed as follows: compare the four first feature maps of the two frames in one-to-one correspondence (upper-left to upper-left, lower-left to lower-left, upper-right to upper-right, lower-right to lower-right). When at least one pair of first feature maps has a tone-value difference greater than 20%, i.e. |C_i - C_{i+1}| / C_{i+1} > 0.2, where C_i is the tone value of the i-th frame and C_{i+1} that of the (i+1)-th frame, a transition is considered to have occurred between the two frames; otherwise no transition is considered to have occurred.
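A minimal sketch of this tone comparison, assuming grayscale NumPy arrays for the corner patches; the function names and the use of an absolute relative difference are choices of this sketch, not mandated by the patent.

```python
import numpy as np

def hue_values(patches):
    """Tone value of each corner patch: the arithmetic mean of its pixels."""
    return [float(np.mean(p)) for p in patches]

def is_transition(prev_patches, next_patches, threshold=0.2):
    """Compare corresponding corner patches of frames i and i+1.

    A transition is flagged when at least one patch pair satisfies
    |C_i - C_{i+1}| / C_{i+1} > threshold (20% by default).
    """
    for c_i, c_next in zip(hue_values(prev_patches), hue_values(next_patches)):
        if c_next != 0 and abs(c_i - c_next) / c_next > threshold:
            return True
    return False
```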
When a transition occurs, the background of the video has changed substantially, and the video can be cut there so that different scenes are processed separately. Since the transition occurs between the i-th and (i+1)-th frames, the cut is made between them. Processing every frame of the first copyrighted video in this way divides the copyrighted video into several short segments, each representing one scene. (It should be noted that the transition is identified only programmatically: in some videos the frames before and after a detected transition may both be indoors, but after the shot changes the background color changes greatly, so the program identifies a transition. Such cases still fall within the scope of the present invention and can be regarded as a special case of it.)
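The scene-splitting rule above can be sketched as follows; for brevity this version compares whole-frame mean tones rather than the four corner patches the patent prescribes, as noted in the code.

```python
import numpy as np

def split_scenes(frames, threshold=0.2):
    """Split a list of frames (NumPy arrays) into scene segments.

    A cut is made between frame i and i+1 when the mean tone changes by
    more than `threshold`. Simplification: the mean is taken over the
    whole frame here, whereas the patent compares the four corner
    patches pairwise.
    """
    scenes, current = [], [frames[0]]
    for i in range(len(frames) - 1):
        c_i = float(np.mean(frames[i]))
        c_next = float(np.mean(frames[i + 1]))
        if c_next != 0 and abs(c_i - c_next) / c_next > threshold:
            scenes.append(current)   # close the current scene at the cut
            current = []
        current.append(frames[i + 1])
    scenes.append(current)
    return scenes
```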
S3, extracting a background image frame by frame in each scene video by adopting an optical flow method to obtain a plurality of background images corresponding to each scene video, and taking the background image containing the most background pixels in the plurality of background images as a representative background image of the corresponding scene video;
the optical flow method is mainly used for motion detection at present, is one of the most efficient motion detection methods at present, and is selected in the invention in order to improve the efficiency of a program. In contrast to motion detection, after a moving object is detected, a stationary background is obtained by subtracting the moving object from the original picture. After the moving object is removed, a hole (namely, a gap of a person exists after the person is removed) is probably formed in the background image, and all pixels at the hole are filled with 0; since a plurality of frames are included in the scene video, a plurality of background images may be recognized from a plurality of frames of the first scene video.
Because a scene video contains multiple frames, multiple background images are obtained. The proportion of 0-valued pixels in each background image is computed, and the image with the smallest 0-pixel proportion is used as the background image of the scene video. A small 0-pixel proportion means that movable elements such as characters occupy little of the frame, the background pixels dominate, and the shot is pulled back, so the image reflects the background of the current scene well. The background image containing the most background pixels is therefore taken as the representative background image of the corresponding scene video.
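The representative-background selection reduces to picking the image with the smallest 0-pixel proportion; a minimal NumPy sketch with illustrative names:

```python
import numpy as np

def most_complete_background(backgrounds):
    """Pick the background image with the smallest fraction of 0-filled
    (hole) pixels, i.e. the one containing the most background pixels."""
    def zero_ratio(img):
        return np.count_nonzero(img == 0) / img.size
    return min(backgrounds, key=zero_ratio)
```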
And S4, performing wavelet transformation on the representative background image, and extracting an LL band in the representative background image to be used as a first representative image of the corresponding scene video.
The original background image still retains too much detail. Comparing it directly would be particularly resource-intensive, and since the resolution of UGC content may be very low, the rich detail of the original background could produce very large errors (many details cannot be rendered at low resolution and so fail to match the original background), preventing normal matching. The background image therefore needs further processing.
Performing a wavelet transform on an image is a conventional operation in the field. The transform yields four components, HH, HL, LH, and LL: the LL band collects the low-frequency information of the original image, the LH and HL bands represent the high-frequency edge information in the vertical and horizontal directions respectively, and the HH band reflects the high-frequency edge information along the diagonals. HH mainly carries contour information, from which the outline of every element in the image can easily be seen, so it accurately reflects the objects in the image. LL is the low-frequency stationary information, reflecting the distribution of the image's chromaticity, including that of a blurred background, so the LL component is taken as the background representative of the scene video. After wavelet processing, keeping only the LL part increases the blur of the background image and yields a balanced signal: the pixel variance becomes small, the comparison range shrinks during computation, and processing speeds up. At the same time, contour information is removed, preventing overly precise elements from being compared against other images and causing a matching image to be identified as non-matching.
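A self-contained sketch of extracting the LL band with a single-level 2-D Haar wavelet, implemented directly in NumPy so no wavelet library is assumed; a production system might instead use a standard wavelet package and a different wavelet basis.

```python
import numpy as np

def haar_ll(img: np.ndarray) -> np.ndarray:
    """Single-level 2-D Haar wavelet transform, keeping only the LL band.

    For an orthonormal Haar basis the LL coefficient of each 2x2 block
    is (a + b + c + d) / 2. The result is a half-resolution, blurred
    summary of the image, as the text describes for the LL band.
    """
    H, W = img.shape[:2]
    img = img[:H - H % 2, :W - W % 2].astype(float)  # force even dimensions
    ll = (img[0::2, 0::2] + img[0::2, 1::2] +
          img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
    return ll
```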
And S5, repeating the steps S1-S4 for all the copyright videos to obtain a plurality of scene videos corresponding to each copyright video and a first representative graph corresponding to each scene video to obtain a first representative graph set.
After all known copyrighted videos are processed in the same way, the scene videos of every copyrighted video and the representative map of each scene video are obtained; the relationship among copyrighted videos, scene videos, and representative maps is shown in fig. 2. Each copyrighted video may correspond to several scenes, each scene corresponds to one representative map, and from one representative map the corresponding scene video, and hence the unique copyrighted video, can be found.
S6, acquiring a target video, and cropping four first feature maps from each frame image of the target video; performing a wavelet transform on each of the four first feature maps of the target video and extracting their LL bands to obtain four second representative maps per frame.
The target video may be a video made by a UGC user; after the user finishes making it, the video is uploaded to the website platform for review. After the platform acquires the target video, it crops the first feature maps from each frame using the same method as in step S1, obtaining four first feature maps per frame image.
As with the copyrighted video, each of the four first feature maps is wavelet-transformed independently and its LL band is extracted, yielding the second representative maps of the target video. Like the first representative maps, these have their contour information removed and can therefore approximately represent the background information of the corresponding frame. The relationship between target videos and second representative maps is shown in fig. 3: each target video is divided into frames, and each frame has four second representative maps.
And S7, searching for the four second representative maps of each frame of the target video within each first representative map in the first representative map set, and prompting that the target video may infringe when at least two second representative maps of the same frame can be found within some first representative map.
The four second representative maps of a frame of the target video are partial information of that frame's background, while a first representative map is the full background information, so searching for a second representative map inside a first representative map determines whether the background represented by the first representative map is the same as that represented by the second. Accordingly, when at least two second representative maps of the same frame can be found within some first representative map, the corresponding frame of the target video may exist in the copyrighted video, so the system prompts that the target video may infringe.
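The search of a second representative map inside a first representative map can be sketched as a brute-force sliding-window comparison; the tolerance parameter and the mean-absolute-error criterion are assumptions of this sketch, since the patent does not specify the matching metric.

```python
import numpy as np

def contains_patch(rep: np.ndarray, patch: np.ndarray, tol: float = 1.0) -> bool:
    """Slide `patch` over every position of `rep` and report whether
    some window matches within a mean-absolute-error tolerance.

    A brute-force stand-in for searching a second representative map
    (corner LL patch) inside a first representative map (full LL image).
    """
    H, W = rep.shape
    h, w = patch.shape
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            if np.mean(np.abs(rep[y:y + h, x:x + w] - patch)) <= tol:
                return True
    return False
```

The full S7 check would call this for each of a frame's four second representative maps and prompt possible infringement when at least two of them are found in the same first representative map.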
Further, since a first representative map corresponds to one scene video and a scene video corresponds to one copyrighted video, it is possible to further indicate which copyrighted video is infringed and at which minute of that copyrighted video the infringement occurs.
The invention also discloses an image video content management system based on data processing, comprising a memory and at least one processor, the memory storing program code implementing the above image video content management method based on data processing, and the at least one processor executing the program code.
Finally, it should be noted that the above embodiments only illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalents may be made to the embodiments without departing from the spirit and scope of the invention, which is covered by the claims.
The present invention is not limited to the specific module configuration described in the related art. The prior art mentioned in the background section and the detailed description section can be used as part of the invention to understand the meaning of some technical features or parameters. The scope of the present invention is defined by the claims.

Claims (8)

1. A method for managing video content based on data processing, the method comprising:
step S1, acquiring a first copyright video, and intercepting four first characteristic diagrams for each frame image of the first copyright video, wherein the intercepting of the four first characteristic diagrams comprises the following steps: intercepting an image with a first preset value size at each corner of four corners of the rectangular image to serve as the first feature map;
s2, calculating the difference between the tone values of the four first feature maps of the i-th frame of the first copyrighted video and those of the corresponding (i+1)-th frame, and when the difference is greater than a second preset value, splitting the first copyrighted video between the i-th and (i+1)-th frames to obtain a plurality of scene videos; wherein i is the frame index of the copyrighted video, ranging from 1 to the maximum frame count of the first copyrighted video minus one;
s3, extracting a background image frame by frame in each scene video by adopting an optical flow method to obtain a plurality of background images corresponding to each scene video, and taking the background image containing the most background pixels in the plurality of background images as a representative background image of the corresponding scene video;
s4, performing wavelet transformation on the representative background image, and extracting an LL frequency band in the representative background image to be used as a first representative image of a corresponding scene video;
s5, repeating the steps S1-S4 for all the copyright videos to obtain a plurality of scene videos corresponding to each copyright video and a first representative image corresponding to each scene video to obtain a first representative image set;
s6, acquiring a target video, and intercepting four first characteristic maps of each frame image of the target video; respectively performing wavelet transformation on the four first characteristic graphs of the target video, and respectively extracting LL frequency bands in the four first characteristic graphs to obtain four second representative graphs of each frame;
and S7, searching for the four second representative graphs of each frame of the target video within each first representative graph in the first representative graph set, and prompting that the target video may infringe when at least two second representative graphs of the same frame can be found within one first representative graph.
2. The method as claimed in claim 1, wherein the first preset size is: l = L × 0.15, h = H × 0.15, where l is the length of the first-preset-size image, h is its height, L is the length of the original image, and H is its height.
3. The method as claimed in claim 1, wherein the hue is calculated by: and respectively carrying out arithmetic mean on the pixels of each image in the four first feature maps, wherein the obtained four arithmetic mean values are four tone values of the four first feature maps.
4. A method for managing video content of an image based on data processing as claimed in claim 3, wherein whether the difference between tone values is greater than the second preset value is determined by: comparing the four first feature maps of the two adjacent frames in one-to-one correspondence, and when the tone-value difference of at least one first feature map exceeds 20%, deeming the difference greater than the second preset value.
5. The method as claimed in claim 1, wherein the background image is obtained as follows: after a moving object is detected, the moving object is subtracted from the original picture to obtain the stationary background, and all holes left in the background after the moving object is removed are filled with 0 to obtain the background image.
6. The method as claimed in claim 5, wherein the background image containing the most background pixels is obtained as follows: the proportion of 0-valued pixels in each of the plurality of background images is calculated, and the background image with the smallest proportion of 0-valued pixels is taken as the background image containing the most background pixels.
7. The method as claimed in claim 1, wherein after prompting that the target video may be infringing, the method further prompts which copyrighted video is infringed and at which minute of the copyrighted video the infringement occurs.
8. A system for managing image and video content based on data processing, characterized in that the system comprises a memory and at least one processor, the memory storing program code implementing the method according to any one of claims 1 to 7, and the program code being executable by the at least one processor.
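Step S6 reduces each cropped feature map to its low-frequency content by keeping only the LL band of a wavelet transform. As an illustrative sketch only (the claims do not name a wavelet family), a one-level 2D Haar transform's LL band can be computed directly, since each orthonormal-Haar LL coefficient is simply the sum of a 2×2 pixel block divided by 2:

```python
import numpy as np

def haar_ll(img):
    """LL band of a one-level 2D Haar wavelet transform.

    For the orthonormal Haar basis, each LL coefficient is the
    sum of a 2x2 pixel block divided by 2.
    """
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2].astype(float)  # trim to even dims
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

frame = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_ll(frame)
print(ll.shape)  # half the resolution in each dimension
```

Keeping only the LL band halves each dimension while preserving the coarse appearance of the feature map, which is what makes the second representative maps cheap to compare in step S7.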
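Step S7's decision rule — flag a frame when at least two of its four second representative maps are found inside the same first representative image — can be sketched as below. The similarity test `matches` is a placeholder for whatever comparison is used in practice (e.g. template matching above a threshold); the claims do not specify it:

```python
def flag_possible_infringement(target_frames, first_rep_set, matches):
    """target_frames: per-frame 4-tuples of second representative maps.
    first_rep_set: the first representative images of all scene videos.
    matches(second_map, first_rep): assumed boolean similarity test.
    Returns (frame_index, first_rep_index) pairs that trigger the prompt."""
    flagged = []
    for frame_idx, second_maps in enumerate(target_frames):
        for rep_idx, first_rep in enumerate(first_rep_set):
            hits = sum(1 for m in second_maps if matches(m, first_rep))
            if hits >= 2:  # two maps of the SAME frame in ONE representative image
                flagged.append((frame_idx, rep_idx))
                break  # this frame already counts as possibly infringing
    return flagged
```

Requiring two hits within a single first representative image, rather than one, is what the claim relies on to reduce false positives from a single coincidental region match.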
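Claims 3 and 4 define the tone value as the arithmetic mean of a feature map's pixels, and compare adjacent frames' four feature maps pairwise against a 20% threshold. A minimal sketch, under our assumption that "differ by more than 20%" means a relative difference against the earlier frame's tone value:

```python
import numpy as np

def tone_value(feature_map):
    """Tone value of a feature map: the arithmetic mean of its pixels."""
    return float(np.mean(feature_map))

def tone_shift_exceeds(prev_maps, curr_maps, threshold=0.20):
    """Compare the four feature maps of two adjacent frames one-to-one;
    True when at least one pair's tone values differ by more than
    `threshold` relative to the earlier frame's tone value."""
    for prev, curr in zip(prev_maps, curr_maps):
        tp = tone_value(prev)
        if tp != 0 and abs(tone_value(curr) - tp) / abs(tp) > threshold:
            return True
    return False
```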
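Claims 5 and 6 build each candidate background by zeroing out moving-object pixels and then keep the candidate with the fewest 0-valued holes. A sketch under the assumption that the moving-object detector yields a boolean mask over the frame:

```python
import numpy as np

def background_from_frame(frame, moving_mask):
    """Subtract the detected moving object: pixels under the mask
    become 0-valued holes (claim 5)."""
    background = frame.copy()
    background[moving_mask] = 0
    return background

def most_complete_background(backgrounds):
    """Among candidate backgrounds, pick the one with the smallest
    proportion of 0-valued pixels (claim 6)."""
    def zero_ratio(img):
        return np.count_nonzero(img == 0) / img.size
    return min(backgrounds, key=zero_ratio)
```

One property of this zero-fill convention is that legitimately black pixels also count as holes; the claims share that property, since 0 serves as both fill value and hole marker.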
CN202210954101.7A 2022-08-09 2022-08-09 Image video content management method and system based on data processing Active CN115330711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210954101.7A CN115330711B (en) 2022-08-09 2022-08-09 Image video content management method and system based on data processing

Publications (2)

Publication Number Publication Date
CN115330711A CN115330711A (en) 2022-11-11
CN115330711B true CN115330711B (en) 2023-03-10

Family

ID=83922342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210954101.7A Active CN115330711B (en) 2022-08-09 2022-08-09 Image video content management method and system based on data processing

Country Status (1)

Country Link
CN (1) CN115330711B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564597A (en) * 2018-03-05 2018-09-21 华南理工大学 Video foreground object extraction method fusing a Gaussian mixture model and the Horn-Schunck (H-S) optical flow method
CN113435391A (en) * 2021-07-09 2021-09-24 支付宝(杭州)信息技术有限公司 Method and device for identifying infringement video
CN113569719A (en) * 2021-07-26 2021-10-29 上海艾策通讯科技股份有限公司 Video infringement judgment method and device, storage medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050078873A1 (en) * 2003-01-31 2005-04-14 Cetin Ahmet Enis Movement detection and estimation in wavelet compressed video

Also Published As

Publication number Publication date
CN115330711A (en) 2022-11-11

Similar Documents

Publication Publication Date Title
US8254677B2 (en) Detection apparatus, detection method, and computer program
US9036977B2 (en) Automatic detection, removal, replacement and tagging of flash frames in a video
US9226048B2 (en) Video delivery and control by overwriting video data
US7123769B2 (en) Shot boundary detection
Niu et al. What makes a professional video? A computational aesthetics approach
JP5305557B2 (en) Method for viewing audiovisual records at a receiver and receiver for viewing such records
US20110075924A1 (en) Color adjustment
JP2004512595A (en) Method for automatically or semi-automatically converting digital image data to provide a desired image appearance
CN107430780B (en) Method for output creation based on video content characteristics
CA3039239C (en) Conformance of media content to original camera source using optical character recognition
KR20070112130A (en) Method and electronic device for detecting a graphical object
CA2727397C (en) System and method for marking a stereoscopic film
CN106960211B (en) Key frame acquisition method and device
WO2013036086A2 (en) Apparatus and method for robust low-complexity video fingerprinting
CN107636728B (en) Method and apparatus for determining a depth map for an image
JP3649468B2 (en) Electronic album system with shooting function
CN113312949B (en) Video data processing method, video data processing device and electronic equipment
CN115330711B (en) Image video content management method and system based on data processing
CN113255423A (en) Method and device for extracting color scheme from video
KR102136716B1 (en) Apparatus for Improving Image Quality and Computer-Readable Recording Medium with Program Therefor
CN112991419B (en) Parallax data generation method, parallax data generation device, computer equipment and storage medium
US8600151B2 (en) Producing stereoscopic image
Ekin et al. Spatial detection of TV channel logos as outliers from the content
CN112399250A (en) Movie and television program poster generation method and device based on image recognition
US20150370875A1 (en) Content creation method, content registration method, devices and corresponding programs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant