CN104954892A - Method and device for showing video subject content - Google Patents


Info

Publication number
CN104954892A
CN104954892A
Authority
CN
China
Prior art keywords
video image
image subsequence
treatment
subsequence
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510329798.9A
Other languages
Chinese (zh)
Other versions
CN104954892B (en)
Inventor
高同庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd filed Critical Hisense Group Co Ltd
Priority to CN201510329798.9A priority Critical patent/CN104954892B/en
Publication of CN104954892A publication Critical patent/CN104954892A/en
Application granted granted Critical
Publication of CN104954892B publication Critical patent/CN104954892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for showing video subject content; the method and the device are used for automatically generating an animation file capable of dynamically showing the subject content of a video, so that a user can know the subject content of the video within a short time according to the animation file. The method comprises the following steps: analyzing a video file to acquire a video image sequence forming the video file; determining color difference of every two adjacent frames of video images in the acquired video image sequence, and dividing the video image sequence into multiple first video image subsequences according to the color differences; respectively extracting one frame of video image from each first video image subsequence, and generating the animation file capable of dynamically showing the subject content of the video according to the extracted video images on the basis of the time order of the multiple first video image subsequences in the video file.

Description

Method and device for showing video subject content
Technical field
The present invention relates to the field of video processing technologies, and in particular to a method and a device for showing video subject content.
Background art
Video plays an increasingly important role in modern media. At present, introductions to video content mainly take the form of pictures combined with text, so a user cannot grasp the subject content of a video quickly and intuitively.
At present, some video files have theme videos. The theme video of such a video file is produced specially by professionals, requires dedicated video production techniques and sometimes even dedicated shooting, and therefore consumes considerable human and material resources; a film trailer is one example.
For the existing mass of video files, news videos in particular, there is often neither the time nor the labor budget to produce a corresponding theme video. Having professionals produce a theme video for every video file would take an enormous amount of time and is difficult to achieve.
Summary of the invention
Embodiments of the present invention provide a method and a device for showing video subject content, which automatically generate an animation file capable of dynamically showing the subject content of a video, so that a user can learn the subject content of the video within a short time from the animation file.
The specific technical solutions provided by the embodiments of the present invention are as follows.
In a first aspect, a method for showing video subject content is provided, comprising:
parsing a video file to obtain a video image sequence forming the video file;
determining the color difference between every two adjacent frames of video images in the obtained video image sequence, and dividing the video image sequence into multiple first video image subsequences according to the color differences;
extracting one frame of video image from each first video image subsequence, and generating, from the extracted video images and according to the chronological order of the multiple first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
In an implementation, determining the color difference between every two adjacent frames of video images in the obtained video image sequence and dividing the video image sequence into multiple first video image subsequences according to the color differences comprises:
determining the dominant RGB of each frame of video image according to the red (R), green (G) and blue (B) components of each pixel in the frame, where the dominant RGB of a video image is the value that occurs most often among the average RGB values of the pixels in the video image, and the average RGB of a pixel is the rounded mean of its R, G and B components;
evenly dividing the video image sequence into a first set number of second video image subsequences;
merging the second video image subsequences in turn, according to their chronological order in the video file, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the second video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than a first preset value, the currently processed second video image subsequence and the adjacent unprocessed second video image subsequence are merged into a new second video image subsequence, the merged subsequence is taken as the second video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed second video image subsequence is not less than the first preset value, at which point the currently processed second video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value, the currently processed second video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, when the second video image subsequences are merged, the method further comprises:
if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, evenly dividing the currently processed second video image subsequence into a second set number of third video image subsequences;
merging the third video image subsequences in turn, according to their chronological order, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the third video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, the currently processed third video image subsequence and the adjacent unprocessed third video image subsequence are merged into a new third video image subsequence, the merged subsequence is taken as the third video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, at which point the currently processed third video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, the currently processed third video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, dividing the video image sequence into multiple first video image subsequences according to the color differences further comprises:
if the playing duration corresponding to a first video image subsequence is greater than a first duration, determining the dominant HSV of each frame of video image in the first video image subsequence according to the hue (H), saturation (S) and value (V, brightness) components of each pixel in the frame, where the dominant HSV of a video image is the value that occurs most often among the average HSV values of the pixels in the video image, and the average HSV of a pixel is the rounded mean of its H, S and V components;
evenly dividing the first video image subsequence into a third set number of fourth video image subsequences;
merging the fourth video image subsequences in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fourth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than a third preset value, the currently processed fourth video image subsequence and the adjacent unprocessed fourth video image subsequence are merged, the resulting subsequence is taken as the fourth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fourth video image subsequence is not less than the third preset value, at which point the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value, the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, when the fourth video image subsequences are merged, the method further comprises:
if the second standard deviation is not less than the third preset value and is greater than a fourth preset value, the fourth preset value being greater than the third preset value, evenly dividing the currently processed fourth video image subsequence into a fourth set number of fifth video image subsequences;
merging the fifth video image subsequences in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fifth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than the third preset value, the currently processed fifth video image subsequence and the adjacent unprocessed fifth video image subsequence are merged, the resulting subsequence is taken as the fifth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fifth video image subsequence is not less than the third preset value and not greater than the fourth preset value, at which point the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value and not greater than the fourth preset value, the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, dividing the video image sequence into multiple first video image subsequences according to the color differences further comprises:
if the playing duration corresponding to a first video image subsequence is greater than the first duration, dividing the first video image subsequence into multiple video image groups, each group containing the number of video images covered by a preset playing duration;
calculating, for each frame of video image in each video image group, the number of motion pixels relative to the previous frame of video image, and taking the mean of the motion pixel counts of the video images in each group as the motion pixel value of that video image group;
dividing the first video image subsequence into a fifth set number of sixth video image subsequences, each sixth video image subsequence containing an integral number of video image groups;
merging the sixth video image subsequences in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the sixth video image subsequence currently being processed is determined according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than a fifth preset value, the currently processed sixth video image subsequence and the adjacent unprocessed sixth video image subsequence are merged, the resulting subsequence is taken as the sixth video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed sixth video image subsequence is not less than the fifth preset value, at which point the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value, the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, when the sixth video image subsequences are merged, the method further comprises:
if the third standard deviation is not less than the fifth preset value and is greater than a sixth preset value, the sixth preset value being greater than the fifth preset value, evenly dividing the currently processed sixth video image subsequence into a sixth set number of seventh video image subsequences;
merging the seventh video image subsequences in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the seventh video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the third standard deviation is less than the fifth preset value, the currently processed seventh video image subsequence and the adjacent unprocessed seventh video image subsequence are merged, the resulting subsequence is taken as the seventh video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed seventh video image subsequence is not less than the fifth preset value and not greater than the sixth preset value, at which point the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value and not greater than the sixth preset value, the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence.
In a second aspect, a device for showing video subject content is provided, comprising:
a parsing module, configured to parse a video file to obtain a video image sequence forming the video file;
a dividing module, configured to determine the color difference between every two adjacent frames of video images in the obtained video image sequence and divide the video image sequence into multiple first video image subsequences according to the color differences;
a generating module, configured to extract one frame of video image from each first video image subsequence and generate, from the extracted video images and according to the chronological order of the multiple first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
In an implementation, the dividing module is specifically configured to:
determine the dominant RGB of each frame of video image according to the red (R), green (G) and blue (B) components of each pixel in the frame, where the dominant RGB of a video image is the value that occurs most often among the average RGB values of the pixels in the video image, and the average RGB of a pixel is the rounded mean of its R, G and B components;
evenly divide the video image sequence into a first set number of second video image subsequences;
merge the second video image subsequences in turn, according to their chronological order in the video file, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the second video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than a first preset value, the currently processed second video image subsequence and the adjacent unprocessed second video image subsequence are merged into a new second video image subsequence, the merged subsequence is taken as the second video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed second video image subsequence is not less than the first preset value, at which point the currently processed second video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value, the currently processed second video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, the dividing module is specifically configured to:
when the second video image subsequences are merged, if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, evenly divide the currently processed second video image subsequence into a second set number of third video image subsequences;
merge the third video image subsequences in turn, according to their chronological order, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the third video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, the currently processed third video image subsequence and the adjacent unprocessed third video image subsequence are merged into a new third video image subsequence, the merged subsequence is taken as the third video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, at which point the currently processed third video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, the currently processed third video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, the dividing module is further configured to:
if the playing duration corresponding to a first video image subsequence is greater than a first duration, determine the dominant HSV of each frame of video image in the first video image subsequence according to the hue (H), saturation (S) and value (V, brightness) components of each pixel in the frame, where the dominant HSV of a video image is the value that occurs most often among the average HSV values of the pixels in the video image, and the average HSV of a pixel is the rounded mean of its H, S and V components;
evenly divide the first video image subsequence into a third set number of fourth video image subsequences;
merge the fourth video image subsequences in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fourth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than a third preset value, the currently processed fourth video image subsequence and the adjacent unprocessed fourth video image subsequence are merged, the resulting subsequence is taken as the fourth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fourth video image subsequence is not less than the third preset value, at which point the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value, the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, the dividing module is further configured to:
when the fourth video image subsequences are merged,
if the second standard deviation is not less than the third preset value and is greater than a fourth preset value, the fourth preset value being greater than the third preset value, evenly divide the currently processed fourth video image subsequence into a fourth set number of fifth video image subsequences;
merge the fifth video image subsequences in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fifth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than the third preset value, the currently processed fifth video image subsequence and the adjacent unprocessed fifth video image subsequence are merged, the resulting subsequence is taken as the fifth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fifth video image subsequence is not less than the third preset value and not greater than the fourth preset value, at which point the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value and not greater than the fourth preset value, the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, the dividing module is further configured to:
if the playing duration corresponding to a first video image subsequence is greater than the first duration, divide the first video image subsequence into multiple video image groups, each group containing the number of video images covered by a preset playing duration;
calculate, for each frame of video image in each video image group, the number of motion pixels relative to the previous frame of video image, and take the mean of the motion pixel counts of the video images in each group as the motion pixel value of that video image group;
divide the first video image subsequence into a fifth set number of sixth video image subsequences, each sixth video image subsequence containing an integral number of video image groups;
merge the sixth video image subsequences in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the sixth video image subsequence currently being processed is determined according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than a fifth preset value, the currently processed sixth video image subsequence and the adjacent unprocessed sixth video image subsequence are merged, the resulting subsequence is taken as the sixth video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed sixth video image subsequence is not less than the fifth preset value, at which point the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value, the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence.
In an implementation, the dividing module is further configured to:
if the third standard deviation is not less than the fifth preset value and is greater than a sixth preset value, the sixth preset value being greater than the fifth preset value, evenly divide the currently processed sixth video image subsequence into a sixth set number of seventh video image subsequences;
merge the seventh video image subsequences in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the seventh video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the third standard deviation is less than the fifth preset value, the currently processed seventh video image subsequence and the adjacent unprocessed seventh video image subsequence are merged, the resulting subsequence is taken as the seventh video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed seventh video image subsequence is not less than the fifth preset value and not greater than the sixth preset value, at which point the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value and not greater than the sixth preset value, the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence.
Based on the above technical solutions, in the embodiments of the present invention, the color difference between every two adjacent frames of video images in the video image sequence forming a video file is determined, the video image sequence is divided into multiple first video image subsequences according to the color differences, one frame of video image is extracted from each first video image subsequence, and an animation file capable of dynamically showing the subject content of the video is generated from the extracted video images according to the chronological order of the multiple first video image subsequences in the video file. An animation file that dynamically shows the subject content of a video can thus be generated automatically, so that a user can learn the subject content of the video within a short time from the animation file.
Brief description of the drawings
Fig. 1 is a schematic diagram of the process of generating an animation file that shows video subject content in an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a device for showing video subject content in an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Herein, a theme video is defined as a video that shows the theme a given video is to describe, that is, a video composed of part of the images of that video.
In an embodiment of the present invention, as shown in Fig. 1, the process of generating an animation file capable of dynamically showing video subject content is as follows.
Step 101: parse a video file to obtain the video image sequence forming the video file.
In implementation, the present invention does not limit the way in which the video file is parsed.
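For example, the video file may be decoded with a general-purpose library. The following sketch assumes OpenCV; the library, the optional frame step and the helper name parse_video are illustrative assumptions only, not requirements of the invention.

# Illustrative sketch only: OpenCV is assumed as one possible decoder.
import cv2

def parse_video(path, step=1):
    """Decode the video file at `path` and return its frames in playback order.
    `step` optionally keeps every step-th frame to reduce memory use."""
    capture = cv2.VideoCapture(path)
    frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:                      # end of stream or decode failure
            break
        if index % step == 0:
            frames.append(frame)        # each frame is an H x W x 3 BGR image
        index += 1
    capture.release()
    return frames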
Step 102: determine the color difference between every two adjacent frames of video images in the obtained video image sequence, and divide the video image sequence into multiple first video image subsequences according to the color differences.
When a scene change occurs in a video, the dominant color characteristics of the video images usually change as well. The present invention exploits this property: different scenes are distinguished by the color differences between video images, and the video is divided into multiple time slices, each time slice corresponding to one first video image subsequence, so that an animation file composed of video images extracted from each time slice can reflect the subject content of the video.
In a specific implementation, the dominant RGB of each frame of video image is determined according to the red (R), green (G) and blue (B) components of each pixel in the frame, where the dominant RGB of a video image is the value that occurs most often among the average RGB values of the pixels in the video image, and the average RGB of a pixel is the rounded mean of its R, G and B components;
the video image sequence is evenly divided into a first set number of second video image subsequences;
the second video image subsequences are merged in turn, according to their chronological order in the video file, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the second video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than a first preset value, the currently processed second video image subsequence and the adjacent unprocessed second video image subsequence are merged into a new second video image subsequence, the merged subsequence is taken as the second video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed second video image subsequence is not less than the first preset value, at which point the currently processed second video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value, the currently processed second video image subsequence is taken as a corresponding first video image subsequence.
Suppose video A contains N frames of video images, the color of each pixel in a video image consists of red (R), green (G) and blue (B) components, and each of the R, G and B components takes a value in the range 0 to 255. The dominant RGB of a video image is computed as follows: for each pixel, the mean of its R, G and B components is calculated and rounded; the rounded value is called the average RGB of the pixel. The number of pixels in the video image sharing each average RGB value is then counted, giving the correspondence between average RGB values and pixel counts shown in Table 1; the average RGB value with the largest pixel count is taken as the dominant RGB of the frame.
Table 1
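As an illustration of this computation, a minimal sketch follows; it assumes each video image is available as an H x W x 3 array of 8-bit RGB values, and the NumPy-based helper dominant_rgb is provided for demonstration only, not as the claimed implementation.

# Illustrative sketch: `image` is assumed to be an H x W x 3 uint8 array (RGB order).
import numpy as np

def dominant_rgb(image):
    # Average RGB of each pixel: the rounded mean of its R, G and B components.
    average_rgb = np.rint(image.reshape(-1, 3).mean(axis=1)).astype(np.int64)
    # Count how many pixels share each average RGB value (the Table 1 correspondence).
    counts = np.bincount(average_rgb, minlength=256)
    # The average RGB value occurring most often is the dominant RGB of the frame.
    return int(counts.argmax())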
The first standard deviation can be expressed as
\delta_1 = \sqrt{\frac{1}{Y} \sum_{i=1}^{Y} \left( RGB_i - \overline{RGB} \right)^2}
where \delta_1 is the first standard deviation, Y is the number of video images in the video image subsequence, RGB_i is the dominant RGB of the i-th video image, and \overline{RGB} is the mean of the dominant RGB values of the video images in the subsequence.
In implementation, because the RGB computation is simple and fast, processing resources can be saved.
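As an illustration of the equal division and merging described above, a minimal sketch follows; the first set number (n_segments) and the first preset value (first_preset) used here are arbitrary example values, not values prescribed by the invention.

# Illustrative sketch: `dominant_rgbs` is the list of per-frame dominant RGB values.
import numpy as np

def split_and_merge(dominant_rgbs, n_segments=10, first_preset=12.0):
    """Evenly split the frame indices into second video image subsequences, then merge
    each currently processed subsequence with its adjacent unprocessed neighbour while
    its first standard deviation stays below the first preset value."""
    bounds = np.linspace(0, len(dominant_rgbs), n_segments + 1, dtype=int)
    pending = [list(range(bounds[i], bounds[i + 1])) for i in range(n_segments)]

    first_subsequences = []
    current = pending.pop(0)
    while pending:
        if np.std([dominant_rgbs[i] for i in current]) < first_preset:
            current = current + pending.pop(0)   # merge with the adjacent unprocessed subsequence
        else:
            first_subsequences.append(current)   # deviation no longer below the preset value
            current = pending.pop(0)
    first_subsequences.append(current)
    return first_subsequences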
Optionally, in this specific implementation, when the second video image subsequences are merged, if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, the currently processed second video image subsequence is evenly divided into a second set number of third video image subsequences;
the third video image subsequences are merged in turn, according to their chronological order, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the third video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, the currently processed third video image subsequence and the adjacent unprocessed third video image subsequence are merged into a new third video image subsequence, the merged subsequence is taken as the third video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, at which point the currently processed third video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, the currently processed third video image subsequence is taken as a corresponding first video image subsequence.
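A sketch of this refinement is given below; the second set number (n_split) and the two preset values are again arbitrary example values, and the helper name refine_segment is an assumption for demonstration only.

# Illustrative sketch: first_preset < second_preset are example thresholds.
import numpy as np

def refine_segment(indices, dominant_rgbs, n_split=4, first_preset=12.0, second_preset=30.0):
    """Re-split an over-dispersed subsequence into third video image subsequences and
    re-merge them, closing a segment once its standard deviation reaches the range
    [first_preset, second_preset]."""
    if np.std([dominant_rgbs[i] for i in indices]) <= second_preset:
        return [indices]                         # dispersion acceptable, keep as is

    bounds = np.linspace(0, len(indices), n_split + 1, dtype=int)
    pending = [indices[bounds[i]:bounds[i + 1]] for i in range(n_split)]

    refined, current = [], pending.pop(0)
    while pending:
        if np.std([dominant_rgbs[i] for i in current]) < first_preset:
            current = current + pending.pop(0)   # keep merging adjacent pieces
        else:
            refined.append(current)              # deviation within the target range
            current = pending.pop(0)
    refined.append(current)
    return refined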
Optionally, in implementation, if the playing duration corresponding to a first video image subsequence is greater than a first duration, the dominant HSV of each frame of video image in the first video image subsequence is determined according to the hue (H), saturation (S) and value (V, brightness) components of each pixel in the frame, where the dominant HSV of a video image is the value that occurs most often among the average HSV values of the pixels in the video image, and the average HSV of a pixel is the rounded mean of its H, S and V components;
the first video image subsequence is evenly divided into a third set number of fourth video image subsequences;
the fourth video image subsequences are merged in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fourth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than a third preset value, the currently processed fourth video image subsequence and the adjacent unprocessed fourth video image subsequence are merged, the resulting subsequence is taken as the fourth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fourth video image subsequence is not less than the third preset value, at which point the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value, the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence.
Here, suppose video A contains N frames of video images and the color of each pixel in a video image consists of red (R), green (G) and blue (B) components. The H, S and V components of a pixel can be determined from its R, G and B components; the specific conversion can be performed in an existing manner and is not repeated here. The dominant HSV of a video image is computed as follows: for each pixel, the mean of its H, S and V components is calculated and rounded, giving the average HSV of the pixel; the number of pixels in the video image sharing each average HSV value is then counted, and the average HSV value with the largest pixel count is taken as the dominant HSV of the frame.
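As an illustration, a minimal sketch of the dominant HSV computation follows; it assumes OpenCV's BGR-to-HSV conversion as one existing RGB-to-HSV method, and the helper name dominant_hsv is an assumption for demonstration only.

# Illustrative sketch: `frame` is assumed to be an H x W x 3 uint8 BGR image
# (e.g. as returned by OpenCV).
import cv2
import numpy as np

def dominant_hsv(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Average HSV of each pixel: the rounded mean of its H, S and V components.
    average_hsv = np.rint(hsv.reshape(-1, 3).mean(axis=1)).astype(np.int64)
    # The average HSV value occurring most often is the dominant HSV of the frame.
    return int(np.bincount(average_hsv, minlength=256).argmax())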
The second standard deviation can be expressed as
\delta_2 = \sqrt{\frac{1}{Y} \sum_{i=1}^{Y} \left( HSV_i - \overline{HSV} \right)^2}
where \delta_2 is the second standard deviation, Y is the number of video images in the video image subsequence, HSV_i is the dominant HSV of the i-th video image, and \overline{HSV} is the mean of the dominant HSV values of the video images in the subsequence.
Optionally, when the fourth video image subsequences are merged, if the second standard deviation is not less than the third preset value and is greater than a fourth preset value, the fourth preset value being greater than the third preset value, the currently processed fourth video image subsequence is evenly divided into a fourth set number of fifth video image subsequences;
the fifth video image subsequences are merged in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fifth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than the third preset value, the currently processed fifth video image subsequence and the adjacent unprocessed fifth video image subsequence are merged, the resulting subsequence is taken as the fifth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fifth video image subsequence is not less than the third preset value and not greater than the fourth preset value, at which point the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value and not greater than the fourth preset value, the currently processed fifth video image subsequence is taken as a corresponding first video image subsequence.
Optionally, if the playing duration corresponding to a first video image subsequence is greater than the first duration, the first video image subsequence is divided into multiple video image groups, each group containing the number of video images covered by a preset playing duration;
for each frame of video image in each video image group, the number of motion pixels relative to the previous frame of video image is calculated, and the mean of the motion pixel counts of the video images in each group is taken as the motion pixel value of that video image group;
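As an illustration of the motion pixel computation, a minimal sketch follows; how a motion pixel is detected is not fixed by the invention, so the grayscale-difference criterion and threshold used here, and the helper names, are assumptions for demonstration only.

# Illustrative sketch: a pixel is treated as a motion pixel when its grayscale value
# changes by more than diff_threshold between consecutive frames (an example criterion).
import cv2
import numpy as np

def motion_pixel_count(prev_frame, frame, diff_threshold=25):
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return int(np.count_nonzero(cv2.absdiff(gray, prev_gray) > diff_threshold))

def group_motion_value(group):
    """Motion pixel value of a video image group: the mean motion pixel count of its frames."""
    counts = [motion_pixel_count(a, b) for a, b in zip(group, group[1:])]
    return float(np.mean(counts)) if counts else 0.0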
the first video image subsequence is divided into a fifth set number of sixth video image subsequences, each sixth video image subsequence containing an integral number of video image groups;
the sixth video image subsequences are merged in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the sixth video image subsequence currently being processed is determined according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than a fifth preset value, the currently processed sixth video image subsequence and the adjacent unprocessed sixth video image subsequence are merged, the resulting subsequence is taken as the sixth video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed sixth video image subsequence is not less than the fifth preset value, at which point the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value, the currently processed sixth video image subsequence is taken as a corresponding first video image subsequence.
Optionally, when the sixth video image subsequences are merged, if the third standard deviation is not less than the fifth preset value and is greater than a sixth preset value, the sixth preset value being greater than the fifth preset value, the currently processed sixth video image subsequence is evenly divided into a sixth set number of seventh video image subsequences;
the seventh video image subsequences are merged in turn, according to their chronological order, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the third standard deviation of the seventh video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the third standard deviation is less than the fifth preset value, the currently processed seventh video image subsequence and the adjacent unprocessed seventh video image subsequence are merged, the resulting subsequence is taken as the seventh video image subsequence currently being processed, and the process is repeated until the third standard deviation of the currently processed seventh video image subsequence is not less than the fifth preset value and not greater than the sixth preset value, at which point the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value and not greater than the sixth preset value, the currently processed seventh video image subsequence is taken as a corresponding first video image subsequence.
Step 103: extract one frame of video image from each first video image subsequence, and generate, from the extracted video images and according to the chronological order of the multiple first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
In one specific implementation, one frame of video image is extracted arbitrarily from each first video image subsequence, and an animated GIF file capable of dynamically showing the subject content of the video is generated from the extracted video images. In this implementation, the animated GIF file occupies little storage space, so when animated GIF files are generated for a massive number of video files, a large amount of storage space can be saved.
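As an illustration of assembling the extracted video images into an animated GIF, a minimal sketch follows; it assumes the Pillow library, and the per-frame duration, looping setting and helper name save_animation are example choices, not limitations of the invention.

# Illustrative sketch: `selected_frames` holds one BGR frame per first video image
# subsequence, in chronological order.
import cv2
from PIL import Image

def save_animation(selected_frames, path="summary.gif", frame_ms=500):
    images = [Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB)) for f in selected_frames]
    images[0].save(path, save_all=True, append_images=images[1:],
                   duration=frame_ms, loop=0)    # loop=0 repeats indefinitely
    return path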
Based on the same inventive concept, an embodiment of the present invention further provides a device for showing video subject content. For the specific implementation of the device, refer to the description of the method above. The device can be applied to equipment such as a television set, a mobile phone or a tablet computer; for example, it can be applied to the video-on-demand module of a television set. As shown in Fig. 2, the device mainly comprises:
a parsing module 201, configured to parse a video file to obtain a video image sequence forming the video file;
a dividing module 202, configured to determine the color difference between every two adjacent frames of video images in the obtained video image sequence and divide the video image sequence into multiple first video image subsequences according to the color differences;
a generating module 203, configured to extract one frame of video image from each first video image subsequence and generate, from the extracted video images and according to the chronological order of the multiple first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
In an implementation, the dividing module 202 is specifically configured to:
determine the dominant RGB of each frame of video image according to the red (R), green (G) and blue (B) components of each pixel in the frame, where the dominant RGB of a video image is the value that occurs most often among the average RGB values of the pixels in the video image, and the average RGB of a pixel is the rounded mean of its R, G and B components;
evenly divide the video image sequence into a first set number of second video image subsequences;
merge the second video image subsequences in turn, according to their chronological order in the video file, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the second video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than a first preset value, the currently processed second video image subsequence and the adjacent unprocessed second video image subsequence are merged into a new second video image subsequence, the merged subsequence is taken as the second video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed second video image subsequence is not less than the first preset value, at which point the currently processed second video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value, the currently processed second video image subsequence is taken as a corresponding first video image subsequence.
Further, the dividing module 202 is specifically configured to:
when the second video image subsequences are merged, if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, evenly divide the currently processed second video image subsequence into a second set number of third video image subsequences;
merge the third video image subsequences in turn, according to their chronological order, to obtain the multiple first video image subsequences, where a single round of processing is as follows:
the first standard deviation of the third video image subsequence currently being processed is determined according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, the currently processed third video image subsequence and the adjacent unprocessed third video image subsequence are merged into a new third video image subsequence, the merged subsequence is taken as the third video image subsequence currently being processed, and the process is repeated until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, at which point the currently processed third video image subsequence is taken as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, the currently processed third video image subsequence is taken as a corresponding first video image subsequence.
Optionally, the dividing module 202 is further configured to:
if the playing duration corresponding to a first video image subsequence is greater than a first duration, determine the dominant HSV of each frame of video image in the first video image subsequence according to the hue (H), saturation (S) and value (V, brightness) components of each pixel in the frame, where the dominant HSV of a video image is the value that occurs most often among the average HSV values of the pixels in the video image, and the average HSV of a pixel is the rounded mean of its H, S and V components;
evenly divide the first video image subsequence into a third set number of fourth video image subsequences;
merge the fourth video image subsequences in turn, according to their chronological order in the video file, to obtain multiple first video image subsequences, where a single round of processing is as follows:
the second standard deviation of the fourth video image subsequence currently being processed is determined according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than a third preset value, the currently processed fourth video image subsequence and the adjacent unprocessed fourth video image subsequence are merged, the resulting subsequence is taken as the fourth video image subsequence currently being processed, and the process is repeated until the second standard deviation of the currently processed fourth video image subsequence is not less than the third preset value, at which point the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value, the currently processed fourth video image subsequence is taken as a corresponding first video image subsequence.
The division module is further configured to:
when performing merging processing on the fourth video image subsequences,
if the second standard deviation is not less than the third preset value and is greater than a fourth preset value, the fourth preset value being greater than the third preset value, divide the currently processed fourth video image subsequence evenly into a fourth set number of fifth video image subsequences;
perform merging processing on the fifth video image subsequences one by one in chronological order to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determine the second standard deviation of the currently processed fifth video image subsequence according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than the third preset value, merge the currently processed fifth video image subsequence with the adjacent unprocessed fifth video image subsequence, take the merged subsequence as the currently processed fifth video image subsequence, and repeat the above processing until the second standard deviation of the currently processed fifth video image subsequence is not less than the third preset value and not greater than the fourth preset value, and then take the currently processed fifth video image subsequence as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value and not greater than the fourth preset value, take the currently processed fifth video image subsequence as a corresponding first video image subsequence.
Optionally, the division module 202 is further configured to:
if the playing duration corresponding to a first video image subsequence is greater than the first duration, divide the first video image subsequence into a plurality of video image groups, each group containing the number of video images corresponding to a default playing duration;
calculate, for each frame of video image in each video image group, the number of motion pixels relative to the preceding frame, and take the mean of the motion pixel counts of the video images in each group as the motion pixel value of that group (this computation is sketched in code after this configuration);
divide the first video image subsequence into a fifth set number of sixth video image subsequences, the length of each sixth video image subsequence being an integral multiple of a video image group;
perform merging processing on the sixth video image subsequences one by one in their chronological order in the video file to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determine the third standard deviation of the currently processed sixth video image subsequence according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than a fifth preset value, merge the currently processed sixth video image subsequence with the adjacent unprocessed sixth video image subsequence, take the merged subsequence as the currently processed sixth video image subsequence, and repeat the above processing until the third standard deviation of the currently processed sixth video image subsequence is not less than the fifth preset value, and then take the currently processed sixth video image subsequence as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value, take the currently processed sixth video image subsequence as a corresponding first video image subsequence.
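The motion-pixel computation referenced above can be sketched as follows; the per-pixel difference threshold and the comparison of a group's first frame only within its own group are assumptions made for the illustration:

    import numpy as np

    def motion_pixel_count(prev_frame, curr_frame, diff_threshold=30):
        # Count the pixels of curr_frame whose largest per-channel change
        # relative to prev_frame exceeds diff_threshold.
        diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)).max(axis=-1)
        return int((diff > diff_threshold).sum())

    def group_motion_values(frames, group_size):
        # Split the frames into video image groups of group_size frames and
        # return the mean motion-pixel count of each group, i.e. the group's
        # motion pixel value.
        values = []
        for start in range(0, len(frames), group_size):
            group = frames[start:start + group_size]
            counts = [motion_pixel_count(group[i - 1], group[i])
                      for i in range(1, len(group))]
            values.append(float(np.mean(counts)) if counts else 0.0)
        return values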
The division module is further configured to:
if the third standard deviation is not less than the fifth preset value and is greater than a sixth preset value, the sixth preset value being greater than the fifth preset value, divide the currently processed sixth video image subsequence evenly into a sixth set number of seventh video image subsequences;
perform merging processing on the seventh video image subsequences one by one in chronological order to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determine the third standard deviation of the currently processed seventh video image subsequence according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than the fifth preset value, merge the currently processed seventh video image subsequence with the adjacent unprocessed seventh video image subsequence, take the merged subsequence as the currently processed seventh video image subsequence, and repeat the above processing until the third standard deviation of the currently processed seventh video image subsequence is not less than the fifth preset value and not greater than the sixth preset value, and then take the currently processed seventh video image subsequence as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value and not greater than the sixth preset value, take the currently processed seventh video image subsequence as a corresponding first video image subsequence.
Based on the above technical solution, in the embodiments of the present invention the color difference between every two adjacent frames of video images in the video image sequence that makes up a video file is determined, the video image sequence is divided into a plurality of first video image subsequences according to the color differences, one frame of video image is extracted from each first video image subsequence, and, according to the chronological order of the plurality of first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video is generated from the extracted video images. An animation file that dynamically shows the subject content of a video can thus be generated automatically, so that a user can grasp the subject content of the video in a short time from the animation file.
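As an end-to-end illustration of this scheme, the sketch below decodes a video with OpenCV, hands the frame sequence to a caller-supplied splitter standing in for the division logic described above, extracts one representative frame per first video image subsequence, and writes an animated GIF with imageio; the library choices, the use of the middle frame, and the per-frame display time are assumptions of the sketch, not requirements of the embodiment:

    import cv2
    import imageio

    def build_summary_animation(video_path, split_into_first_subsequences,
                                gif_path="summary.gif", frame_seconds=0.5):
        # Parse the video file into its video image sequence.
        cap = cv2.VideoCapture(video_path)
        frames = []
        while True:
            ok, bgr = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        cap.release()

        # Divide the sequence into first video image subsequences.
        subsequences = split_into_first_subsequences(frames)

        # Extract one frame per subsequence, in chronological order; the
        # middle frame is used here purely for illustration.
        key_frames = [sub[len(sub) // 2] for sub in subsequences]

        # Write the animation file that dynamically shows the subject content.
        imageio.mimsave(gif_path, key_frames, duration=frame_seconds)
        return gif_path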
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If such changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (10)

1. A method for showing video subject content, characterized by comprising:
parsing a video file to obtain the video image sequence that makes up the video file;
determining the color difference between every two adjacent frames of video images in the obtained video image sequence, and dividing the video image sequence into a plurality of first video image subsequences according to the color differences;
extracting one frame of video image from each first video image subsequence, and generating, from the extracted video images and according to the chronological order of the plurality of first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
2. the method for claim 1, is characterized in that, determines the color distortion of adjacent two frame video images in the sequence of video images obtained, according to described color distortion, described sequence of video images is divided into multiple first video image subsequence, comprises:
According to the red R component of pixel each in every frame video image, green G component and blue B component, determine the leading RGB of video image described in every frame respectively, the leading RGB of described video image is the value that in the average RGB of each pixel in described video image, occurrence number is maximum, and the average RGB of described pixel is the R component of described pixel, the mean value of G component and B component rounds gained;
Described sequence of video images is on average divided into the second video image subsequence of the first setting number;
According to the time order and function order of each described second video image subsequence in described video file, carry out merging treatment to described second video image subsequence successively and obtain multiple described first video image subsequence, single treatment process is as follows:
According to the leading RGB when frame video image every in the described second video image subsequence of pre-treatment, determine first standard deviation of the described second video image subsequence when pre-treatment; If described first standard deviation is less than the first preset value, the described second video image subsequence and adjacent untreated second video image subsequence of working as pre-treatment are merged into the second new video image subsequence, using the second video image subsequence after merging as the second video image subsequence when pre-treatment, repeat described processing procedure, until when first standard deviation of the second video image subsequence of pre-treatment is not less than described first preset value, the first video image subsequence of the second video image subsequence as correspondence of pre-treatment will be worked as; If described first standard deviation is not less than the first preset value, the first video image subsequence of the second video image subsequence as correspondence of pre-treatment will be worked as.
3. The method of claim 2, characterized in that, when merging processing is performed on the second video image subsequences, the method further comprises:
if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, dividing the currently processed second video image subsequence evenly into a second set number of third video image subsequences;
performing merging processing on the third video image subsequences one by one in chronological order to obtain the plurality of first video image subsequences, a single round of processing being as follows:
determining the first standard deviation of the currently processed third video image subsequence according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, merging the currently processed third video image subsequence with the adjacent unprocessed third video image subsequence into a new third video image subsequence, taking the merged subsequence as the currently processed third video image subsequence, and repeating the above processing until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, and then taking the currently processed third video image subsequence as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, taking the currently processed third video image subsequence as a corresponding first video image subsequence.
4. The method of claim 2 or 3, characterized in that dividing the video image sequence into a plurality of first video image subsequences according to the color differences further comprises:
if the playing duration corresponding to a first video image subsequence is greater than a first duration, determining the dominant HSV of each frame of video image in the first video image subsequence according to the hue H component, saturation S component and brightness V component of each pixel in the frame, the dominant HSV of a video image being the value that occurs most often among the average HSV values of the pixels in the video image, and the average HSV of a pixel being obtained by rounding the mean of that pixel's H, S and V components;
dividing the first video image subsequence evenly into a third set number of fourth video image subsequences;
performing merging processing on the fourth video image subsequences one by one in their chronological order in the video file to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determining the second standard deviation of the currently processed fourth video image subsequence according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than a third preset value, merging the currently processed fourth video image subsequence with the adjacent unprocessed fourth video image subsequence, taking the merged subsequence as the currently processed fourth video image subsequence, and repeating the above processing until the second standard deviation of the currently processed fourth video image subsequence is not less than the third preset value, and then taking the currently processed fourth video image subsequence as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value, taking the currently processed fourth video image subsequence as a corresponding first video image subsequence.
5. The method of claim 4, characterized in that, when merging processing is performed on the fourth video image subsequences, the method further comprises:
if the second standard deviation is not less than the third preset value and is greater than a fourth preset value, the fourth preset value being greater than the third preset value, dividing the currently processed fourth video image subsequence evenly into a fourth set number of fifth video image subsequences;
performing merging processing on the fifth video image subsequences one by one in chronological order to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determining the second standard deviation of the currently processed fifth video image subsequence according to the dominant HSV of each frame of video image in that subsequence; if the second standard deviation is less than the third preset value, merging the currently processed fifth video image subsequence with the adjacent unprocessed fifth video image subsequence, taking the merged subsequence as the currently processed fifth video image subsequence, and repeating the above processing until the second standard deviation of the currently processed fifth video image subsequence is not less than the third preset value and not greater than the fourth preset value, and then taking the currently processed fifth video image subsequence as a corresponding first video image subsequence; if the second standard deviation is not less than the third preset value and not greater than the fourth preset value, taking the currently processed fifth video image subsequence as a corresponding first video image subsequence.
6. The method of claim 2 or 3, characterized in that dividing the video image sequence into a plurality of first video image subsequences according to the color differences further comprises:
if the playing duration corresponding to a first video image subsequence is greater than a first duration, dividing the first video image subsequence into a plurality of video image groups, each group containing the number of video images corresponding to a default playing duration;
calculating, for each frame of video image in each video image group, the number of motion pixels relative to the preceding frame, and taking the mean of the motion pixel counts of the video images in each group as the motion pixel value of that group;
dividing the first video image subsequence into a fifth set number of sixth video image subsequences, the length of each sixth video image subsequence being an integral multiple of a video image group;
performing merging processing on the sixth video image subsequences one by one in their chronological order in the video file to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determining the third standard deviation of the currently processed sixth video image subsequence according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than a fifth preset value, merging the currently processed sixth video image subsequence with the adjacent unprocessed sixth video image subsequence, taking the merged subsequence as the currently processed sixth video image subsequence, and repeating the above processing until the third standard deviation of the currently processed sixth video image subsequence is not less than the fifth preset value, and then taking the currently processed sixth video image subsequence as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value, taking the currently processed sixth video image subsequence as a corresponding first video image subsequence.
7. The method of claim 6, characterized in that, when merging processing is performed on the sixth video image subsequences, the method further comprises:
if the third standard deviation is not less than the fifth preset value and is greater than a sixth preset value, the sixth preset value being greater than the fifth preset value, dividing the currently processed sixth video image subsequence evenly into a sixth set number of seventh video image subsequences;
performing merging processing on the seventh video image subsequences one by one in chronological order to obtain a plurality of first video image subsequences, a single round of processing being as follows:
determining the third standard deviation of the currently processed seventh video image subsequence according to the motion pixel values of the video image groups in that subsequence; if the third standard deviation is less than the fifth preset value, merging the currently processed seventh video image subsequence with the adjacent unprocessed seventh video image subsequence, taking the merged subsequence as the currently processed seventh video image subsequence, and repeating the above processing until the third standard deviation of the currently processed seventh video image subsequence is not less than the fifth preset value and not greater than the sixth preset value, and then taking the currently processed seventh video image subsequence as a corresponding first video image subsequence; if the third standard deviation is not less than the fifth preset value and not greater than the sixth preset value, taking the currently processed seventh video image subsequence as a corresponding first video image subsequence.
8. A device for showing video subject content, characterized by comprising:
a parsing module, configured to parse a video file and obtain the video image sequence that makes up the video file;
a division module, configured to determine the color difference between every two adjacent frames of video images in the obtained video image sequence and to divide the video image sequence into a plurality of first video image subsequences according to the color differences;
a generation module, configured to extract one frame of video image from each first video image subsequence and to generate, from the extracted video images and according to the chronological order of the plurality of first video image subsequences in the video file, an animation file capable of dynamically showing the subject content of the video.
9. The device of claim 8, characterized in that the division module is specifically configured to:
determine the dominant RGB of each frame of video image according to the red R component, green G component and blue B component of each pixel in the frame, the dominant RGB of a video image being the value that occurs most often among the average RGB values of the pixels in the video image, and the average RGB of a pixel being obtained by rounding the mean of that pixel's R, G and B components;
divide the video image sequence evenly into a first set number of second video image subsequences;
perform merging processing on the second video image subsequences one by one in their chronological order in the video file to obtain the plurality of first video image subsequences, a single round of processing being as follows:
determine the first standard deviation of the currently processed second video image subsequence according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than a first preset value, merge the currently processed second video image subsequence with the adjacent unprocessed second video image subsequence into a new second video image subsequence, take the merged subsequence as the currently processed second video image subsequence, and repeat the above processing until the first standard deviation of the currently processed second video image subsequence is not less than the first preset value, and then take the currently processed second video image subsequence as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value, take the currently processed second video image subsequence as a corresponding first video image subsequence.
10. The device of claim 9, characterized in that the division module is specifically configured to:
when performing merging processing on the second video image subsequences, if the first standard deviation is not less than the first preset value and is greater than a second preset value, the second preset value being greater than the first preset value, divide the currently processed second video image subsequence evenly into a second set number of third video image subsequences;
perform merging processing on the third video image subsequences one by one in chronological order to obtain the plurality of first video image subsequences, a single round of processing being as follows:
determine the first standard deviation of the currently processed third video image subsequence according to the dominant RGB of each frame of video image in that subsequence; if the first standard deviation is less than the first preset value, merge the currently processed third video image subsequence with the adjacent unprocessed third video image subsequence into a new third video image subsequence, take the merged subsequence as the currently processed third video image subsequence, and repeat the above processing until the first standard deviation of the currently processed third video image subsequence is not less than the first preset value and not greater than the second preset value, and then take the currently processed third video image subsequence as a corresponding first video image subsequence; if the first standard deviation is not less than the first preset value and not greater than the second preset value, take the currently processed third video image subsequence as a corresponding first video image subsequence.
CN201510329798.9A 2015-06-15 2015-06-15 A kind of method and device showing video subject content Active CN104954892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510329798.9A CN104954892B (en) 2015-06-15 2015-06-15 A kind of method and device showing video subject content

Publications (2)

Publication Number Publication Date
CN104954892A true CN104954892A (en) 2015-09-30
CN104954892B CN104954892B (en) 2018-12-18

Family

ID=54169176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510329798.9A Active CN104954892B (en) 2015-06-15 2015-06-15 A kind of method and device showing video subject content

Country Status (1)

Country Link
CN (1) CN104954892B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130094831A1 (en) * 2011-10-18 2013-04-18 Sony Corporation Image processing apparatus, image processing method, and program
CN103345764A (en) * 2013-07-12 2013-10-09 西安电子科技大学 Dual-layer surveillance video abstraction generating method based on object content
CN104320670A (en) * 2014-11-17 2015-01-28 东方网力科技股份有限公司 Summary information extracting method and system for network video

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104702914A (en) * 2015-01-14 2015-06-10 汉柏科技有限公司 Monitored video data processing method and system
CN106777114A (en) * 2016-12-15 2017-05-31 北京奇艺世纪科技有限公司 A kind of video classification methods and system
CN106777114B (en) * 2016-12-15 2023-05-19 北京奇艺世纪科技有限公司 Video classification method and system

Also Published As

Publication number Publication date
CN104954892B (en) 2018-12-18

Similar Documents

Publication Publication Date Title
US20180084292A1 (en) Web-based live broadcast
US10728510B2 (en) Dynamic chroma key for video background replacement
CN103440674B (en) A kind of rapid generation of digital picture wax crayon specially good effect
CN110248115B (en) Image processing method, device and storage medium
US20140078170A1 (en) Image processing apparatus and method, and program
CN113225606B (en) Video barrage processing method and device
CN105282622A (en) Scene switching method and device
US9407835B2 (en) Image obtaining method and electronic device
CN112037160A (en) Image processing method, device and equipment
CN114071223A (en) Optical flow-based video interpolation frame generation method, storage medium and terminal equipment
CN111787240B (en) Video generation method, apparatus and computer readable storage medium
CN104954892A (en) Method and device for showing video subject content
EP3206387B1 (en) Image dynamic range adjustment method, terminal, and storage media
CN112419218A (en) Image processing method and device and electronic equipment
CN116614716A (en) Image processing method, image processing device, storage medium, and electronic apparatus
CN108876866B (en) Media data processing method, device and storage medium
CN110582021A (en) Information processing method and device, electronic equipment and storage medium
CN105306961B (en) A kind of method and device for taking out frame
JP2013055489A (en) Image processing apparatus and program
CN109949377B (en) Image processing method and device and electronic equipment
CN112488972A (en) Method and device for synthesizing green screen image and virtual image in real time
CN111415367A (en) Method and device for removing image background
CN108335292B (en) Method for inserting picture in scene switching
KR101831138B1 (en) Method and apparatus for manufacturing animation sticker using video
CN113115109B (en) Video processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant