CN109831680A - Method and apparatus for evaluating video definition - Google Patents

Publication number: CN109831680A
Application number: CN201910203182.5A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 姚亮, 冯巍, 蒋紫东
Current and original assignee: Beijing QIYI Century Science and Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application filed by Beijing QIYI Century Science and Technology Co., Ltd.
Priority application: CN201910203182.5A
Prior art keywords: frame, image frame, video, clarity
Classification (landscape): Image Analysis
Abstract

This application provides a method and apparatus for evaluating video definition (clarity). The image frame sequence of a video to be evaluated is divided into subsequences according to structural similarity, image frames are extracted from each subsequence, the definition of each extracted frame is evaluated, and the definition of the video is determined from the definition of those frames. The definition of the video is thus evaluated automatically. Moreover, within any subsequence the difference in structural similarity between adjacent frames does not exceed a preset threshold, so frames whose scenes differ greatly fall into different subsequences, and extracting frames from every subsequence makes the extracted frames cover the different scenes in the video. Evaluating the video's definition from the definition of such frames therefore improves the accuracy of the evaluation.

Description

Method and apparatus for evaluating video definition
Technical field
This application relates to the field of electronic information, and in particular to a method and apparatus for evaluating video definition.
Background art
Major video companies are currently developing their short-video business vigorously. A short video is a video shot (and possibly edited) by a user and uploaded by that user. Short-video services make the types and content of videos on a video website richer, but video quality is uneven, and low-quality videos, i.e., videos with poor definition, degrade the user's viewing experience.
Therefore, when a video website displays search results and recommends videos, it should lower the priority of these low-quality videos so that users preferentially see high-quality videos. Evaluating the definition of a video has thus become an indispensable technique.
Short-video services generate a large number of short videos every day. At present, short videos with high definition are evaluated and selected only by manual review, which is not only inefficient but also subject to human subjectivity, so its accuracy is low.
Summary of the invention
This application provides a method and apparatus for evaluating video definition, aiming to solve the problem of how to evaluate the definition of a video accurately and efficiently.
To achieve the above goal, this application provides the following technical solutions:
A method for evaluating video definition, comprising:
dividing the image frame sequence of a video to be evaluated into subsequences according to structural similarity, wherein, in any subsequence, the difference in structural similarity between adjacent image frames is not greater than a preset threshold;
extracting image frames from each subsequence;
evaluating the definition of each extracted image frame; and
determining the definition of the video to be evaluated according to the definition of each image frame.
Optionally, dividing the image frame sequence of the video to be evaluated into subsequences according to structural similarity comprises:
taking the image frames in the image frame sequence, in order, one after another as the current image frame until every image frame in the sequence has been traversed, and performing the following step for each current image frame:
if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, assigning the next image frame to the same subsequence as the current image frame; otherwise, assigning the next image frame to a different subsequence from the current image frame.
Optionally, extracting image frames from each subsequence comprises:
for any subsequence, extracting image frames from the subsequence at a preset frame interval.
Optionally, the process of evaluating the definition of any image frame comprises:
inputting the image frame into a pre-trained definition detection model to obtain the definition score of the image frame;
wherein the definition detection model extracts features from the input image frame and, according to the extracted features, outputs the probability that the image frame is a clear image, the probability being the definition score.
Optionally, determining the definition of the video to be evaluated according to the definition of each image frame comprises:
sorting the definition scores of all the image frames to obtain a score sequence;
selecting target image frames, a target image frame being an image frame with a target definition score, where a target definition score is a score that falls within a preset position range of the score sequence; and
determining the definition score of the video to be evaluated according to the definition scores of the target image frames.
An apparatus for evaluating video definition, comprising:
a partitioning module, configured to divide the image frame sequence of a video to be evaluated into subsequences according to structural similarity, wherein, in any subsequence, the difference in structural similarity between adjacent image frames is not greater than a preset threshold;
an extraction module, configured to extract image frames from each subsequence;
an evaluation module, configured to evaluate the definition of each extracted image frame; and
a determination module, configured to determine the definition of the video to be evaluated according to the definition of each image frame.
Optionally, the partitioning module is specifically configured to: take the image frames in the image frame sequence, in order, one after another as the current image frame until every image frame in the sequence has been traversed, and perform the following step for each current image frame: if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, assign the next image frame to the same subsequence as the current image frame; otherwise, assign the next image frame to a different subsequence from the current image frame.
Optionally, the extraction module is specifically configured to: for any subsequence, extract image frames from the subsequence at a preset frame interval.
Optionally, the evaluation module is specifically configured to: input any image frame into a pre-trained definition detection model to obtain the definition score of the image frame, wherein the definition detection model extracts features from the input image frame and, according to the extracted features, outputs the probability that the image frame is a clear image, the probability being the definition score.
Optionally, the determination module is specifically configured to: sort the definition scores of all the image frames to obtain a score sequence; select target image frames, a target image frame being an image frame with a target definition score, where a target definition score is a score that falls within a preset position range of the score sequence; and determine the definition score of the video to be evaluated according to the definition scores of the target image frames.
In the video definition evaluation method and apparatus described herein, the image frame sequence of a video to be evaluated is divided into subsequences according to structural similarity, image frames are extracted from each subsequence, the definition of each extracted frame is evaluated, and the definition of the video is determined from the definition of those frames. The definition of the video is thus evaluated automatically from the definition of image frames. Moreover, within any subsequence the difference in structural similarity between adjacent frames is not greater than a preset threshold. Because the difference in structural similarity reflects the difference between the scenes of two image frames, a difference not exceeding the threshold means the scenes differ little; frames whose scenes differ greatly are therefore assigned to different subsequences, and extracting frames from every subsequence makes the extracted frames cover the different scenes in the video. Consequently, when the definition of the video is evaluated from the definition of image frames, the accuracy of the evaluation is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a video definition evaluation method disclosed in an embodiment of the present application;
Fig. 2 is a flowchart of another video definition evaluation method disclosed in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a video definition evaluation apparatus disclosed in an embodiment of the present application.
Specific embodiments
The applicant found in the course of research that most image frames in a high-definition video are clear, and most image frames in a blurry video are blurry; that is, the definition of most of the image frames in a video determines the definition of the video.
Based on this finding, embodiments of the present application disclose a video definition evaluation method that can be applied on a video website to evaluate the definition of image frames in videos uploaded to the website, thereby obtaining an evaluation result for the definition of each video.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the protection scope of this application.
Fig. 1 shows a video definition evaluation method disclosed in an embodiment of the present application, comprising the following steps:
S101: obtaining the video to be evaluated.
In this embodiment, the video to be evaluated is a video uploaded by a user; more specifically, a video shot (and possibly edited) by that user.
Specifically, the video to be evaluated can be obtained from the user-uploaded video library of an existing video application, or the video uploaded by the user can be received directly.
It should be noted that this embodiment takes user-shot uploaded videos only as an example; the method described in this application is applicable to all videos.
S102: dividing the image frame sequence of the video to be evaluated into subsequences according to structural similarity.
A video consists of multiple image frames arranged in time, and the applicant found during research that the definition of most of the image frames in a video is consistent with the definition of the entire video. Therefore, in this embodiment, the definition of the entire video is calculated from the definition of multiple image frames.
Structural similarity (SSIM) reflects the structural attributes of objects from the perspective of image composition. The larger the SSIM between two image frames, the more similar their scenes. When the SSIM-based difference between two frames is greater than a threshold G, the frames belong to different scenes; when it is not greater than G, they belong to the same scene. Here, a scene is the content of an image frame; for example, two people talking in an office is one scene, and two people running outdoors is another.
In this embodiment, image frames whose SSIM difference is not greater than a preset threshold are assigned to the same subsequence, and image frames whose SSIM difference is greater than the preset threshold are assigned to different subsequences. The threshold can be set empirically.
In this embodiment, the SSIM between two image frames X and Y is calculated using the following formula:

SSIM(X, Y) = ((2·μx·μy + C1)(2·σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2))

where μx denotes the mean of image X, μy the mean of image Y, σx the standard deviation of image X, σy the standard deviation of image Y, and σxy the covariance of X and Y. C1 and C2 are constants, typically taken as 6.5 and 58.5 respectively (approximately (0.01·255)² and (0.03·255)² for 8-bit images).
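As a minimal sketch of this formula, the following pure-Python function computes a single global SSIM value between two equal-length lists of grayscale pixel values, using the constants given above. Real implementations compute SSIM over local sliding windows and average the results; that windowing is omitted here, and the function name is illustrative.

```python
def ssim(x, y, c1=6.5, c2=58.5):
    """Global SSIM between two images given as flat lists of pixel values."""
    n = len(x)
    mu_x = sum(x) / n                                  # mean of X
    mu_y = sum(y) / n                                  # mean of Y
    var_x = sum((p - mu_x) ** 2 for p in x) / n        # variance of X
    var_y = sum((q - mu_y) ** 2 for q in y) / n        # variance of Y
    cov = sum((p - mu_x) * (q - mu_y) for p, q in zip(x, y)) / n  # covariance
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical frames score exactly 1.0; a brightness shift lowers the score through the luminance term, as expected from the formula.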
Specifically, the image frames in the image frame sequence are taken in order, one after another, as the current image frame, until every image frame in the sequence has been traversed, and the following step is performed for each current image frame: if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, the next image frame is assigned to the same subsequence as the current image frame; otherwise, the next image frame is assigned to a different subsequence from the current image frame.
For example, suppose the image frame sequence contains five image frames, numbered 1, 2, 3, 4 and 5 in order of appearance. First, frame 1 (the first frame of the sequence) is taken as the current image frame, and the SSIM difference between it and the next frame (frame 2) is calculated. Assuming the difference is not greater than the threshold, frame 2 is assigned to the same subsequence as frame 1.
Then frame 2 is taken as the current image frame and the SSIM difference between it and the next frame (frame 3) is calculated. Assuming the difference is not greater than the threshold, frame 3 is assigned to the same subsequence as frame 2; since frame 2 is already in the same subsequence as frame 1, frames 1, 2 and 3 form one subsequence.
Next, frame 3 is taken as the current image frame and the SSIM difference between it and the next frame (frame 4) is calculated. Assuming the difference is greater than the threshold, frame 4 is assigned to a different subsequence from frame 3; since frames 1, 2 and 3 already form one subsequence, frame 4 starts a new subsequence.
Finally, frame 4 is taken as the current image frame and the SSIM difference between it and the next frame (frame 5) is calculated. Assuming the difference is greater than the threshold, frame 5 is assigned to a different subsequence from frame 4.
At this point the image frame sequence has been divided into three subsequences: the first contains frames 1, 2 and 3, the second contains frame 4, and the third contains frame 5.
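The traversal just described can be sketched in Python. Here `dissimilarity` stands in for the SSIM-based difference between two adjacent frames (supplied as a lookup table so the five-frame example can be reproduced); the function name, the lookup values, and the threshold of 0.5 are all illustrative assumptions.

```python
def split_into_subsequences(frames, dissimilarity, threshold):
    """Walk the frame sequence once; start a new subsequence whenever the
    SSIM-based difference between a frame and the next one exceeds threshold."""
    subsequences = [[frames[0]]]
    for current, following in zip(frames, frames[1:]):
        if dissimilarity(current, following) <= threshold:
            subsequences[-1].append(following)   # same scene: same subsequence
        else:
            subsequences.append([following])     # scene change: new subsequence
    return subsequences

# Reproducing the worked example: frames 1..5, with a scene change after
# frame 3 and after frame 4 (illustrative dissimilarity values).
diffs = {(1, 2): 0.1, (2, 3): 0.2, (3, 4): 0.9, (4, 5): 0.8}
groups = split_into_subsequences([1, 2, 3, 4, 5],
                                 lambda a, b: diffs[(a, b)], threshold=0.5)
# → [[1, 2, 3], [4], [5]]
```

Each frame is compared only with its successor, so the whole partition costs one SSIM evaluation per adjacent pair.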
S103: extracting image frames from each subsequence.
Specifically, the number of image frames extracted from different subsequences may be the same or different.
Optionally, in this embodiment, the total number of extracted image frames can be set empirically. The more frames are extracted, the closer the resulting video definition is to the actual quality of the video, i.e., the more accurate the result, but the greater the consumption of resources such as memory, time and CPU. The number of frames to extract therefore needs to balance accuracy against resource consumption; that is, it is determined according to the required evaluation accuracy and the available computing resources.
A first value serves as the upper limit on the number of extracted frames, and this first value can be determined experimentally. For example, manual definition results (high-definition, generally clear, or blurry) are provided for 100 videos, and the method of this embodiment is then run to obtain its definition results for the same 100 videos. For each video, the manual result and the method's result are compared and recorded as 1 if identical and 0 if different; the proportion of 1s among all results is calculated, and the first value is adjusted until a satisfactory proportion (e.g., 90%) is reached while the memory and CPU usage needed to reach that proportion also meet the requirements (e.g., neither exceeds 50%).
S104: evaluating the definition of each extracted image frame.
Definition is a parameter characterizing how clear an image is. Specifically, in this embodiment definition can be a numeric score (called the definition score); by calculating the definition score of each image frame, the definition of each frame is evaluated.
In this embodiment, the definition score of each frame is calculated as follows:
1. Input an image frame into the pre-trained definition detection model to obtain the frame's definition score.
2. Perform step 1 for each extracted image frame to obtain the definition scores of all frames.
The definition detection model is a binary classification model; its structure can be an existing one, and the definition detection model is obtained by training the binary classifier with samples of a preset kind.
The training process of the definition detection model is as follows: collect a set of clear images and a set of blurred images as training samples; for example, the samples can be 50,000 clear images and 50,000 blurred images (each image square) provided by the video platform. A binary-classification MobileNet deep learning network is trained on these samples to obtain the definition detection model. Specifically, the definition score of each clear sample image is labeled 1 and that of each blurred sample image is labeled 0. For any sample image, the model extracts the image's features and determines from them the probability that the image is clear, i.e., its definition score. A preset loss function then measures the difference between the computed probability and the sample label, and the model's parameters are adjusted until the difference is minimized. The specific training procedure follows the prior art and is not repeated here.
The trained definition detection model, i.e., the model whose parameter adjustment is complete, has "learned" the features of clear and blurred image frames. Therefore, after receiving an image frame to be evaluated, it extracts the frame's features and determines from them the probability that the frame is a clear image (the probability lies in [0, 1]).
Optionally, to obtain a more accurate definition score, each extracted image frame can be converted to the same size and/or format as the sample images before being input into the definition detection model. Taking the sample images above as an example, the conversion takes the shorter of the original frame's height and width as the side length, crops a square of that size from the original frame, scales the square to the sample size, e.g., 224×224, and finally normalizes the image; one example of normalization is converting the image into a binary image.
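The square-crop step of this conversion can be sketched as follows, with the image represented as a 2-D list of pixel rows. The patent does not say where in the frame the square is taken, so centering it is an assumption, and the function name is illustrative; the resize to the sample size (e.g., 224×224) and the normalization would use an image library such as Pillow, so they are only noted in a comment.

```python
def square_crop(image):
    """Crop the largest square from a 2-D list of pixel rows, using the
    shorter of height/width as the side length (centered by assumption)."""
    h, w = len(image), len(image[0])
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return [row[left:left + side] for row in image[top:top + side]]

# After cropping, the square would be resized to the sample size
# (e.g. 224x224) and normalized before being fed to the detection model.
```

For a 4×6 frame, the crop keeps all 4 rows and the centered 4 columns.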
S105: determining the definition of the video to be evaluated according to the definition of each extracted image frame.
Specifically, the mean of the definition scores of the extracted image frames can be used as the video's definition score.
Because the definition score is the probability that a frame is clear, a higher score indicates a clearer video: if the video's definition score is greater than a first threshold, the video is determined to be high-definition; if the score lies between a second threshold and the first threshold, the video is generally clear; and if the score is less than the second threshold, the video is blurry. The first and second thresholds can be tuned through repeated experiments.
It should be noted that the above correspondence between the score's relation to the first and second thresholds and the definition result is only an example. In practice, definition could instead be divided into just two classes, blurry and high-definition; and if the model's discrimination rule changes, the condition for determining a high-definition video might become, say, a definition score below a third threshold (or above a fourth threshold, etc.). In short, the above example does not limit how the video's definition is determined from the definition of the extracted frames.
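Reading the definition score as the probability that a frame is clear (so a higher video score means a clearer video), the three-way mapping can be sketched as below. The threshold values 0.8 and 0.5 are purely illustrative assumptions; the patent only says the thresholds are tuned experimentally, and notes the mapping itself may vary.

```python
def classify_video(score, hd_threshold=0.8, clear_threshold=0.5):
    """Map a video definition score in [0, 1] to a coarse label.
    Threshold values are illustrative, not from the patent."""
    if score > hd_threshold:
        return "high-definition"
    if score > clear_threshold:
        return "generally clear"
    return "blurry"
```

A two-class variant would simply drop the middle branch.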
In the method shown in Fig. 1, the definition of the video is determined from the definition of multiple image frames in the video. Compared with manual judgment of definition, this not only improves efficiency but also applies a uniform evaluation criterion and avoids the subjectivity of human judgment; so, while improving efficiency, it also offers higher objectivity and credibility.
Moreover, the image frames of the video are divided according to SSIM into subsequences corresponding to different scenes, and frames are extracted from every subsequence, so the extracted frames can cover each scene in the video. When the video's definition is evaluated from the definition of those frames, a more accurate evaluation result can therefore be obtained.
Fig. 2 shows another video definition evaluation method disclosed in an embodiment of the present application. Compared with Fig. 1, it adds steps that can further improve the accuracy of the result. The method shown in Fig. 2 comprises the following steps:
S201: training the definition detection model.
S202: reading a segment of video and checking whether the read failed or the video is empty; if so, re-executing S202 or issuing an error prompt; if not, executing S203.
S203: dividing the image frame sequence of the video to be evaluated into subsequences according to structural similarity.
S204: extracting image frames from each subsequence at a preset frame interval (e.g., an interval of 30 frames, i.e., one frame extracted every 30 frames).
It should be noted that when the number of image frames in a subsequence is greater than the preset interval, frames are extracted at the preset interval (i.e., with a preset step); when the number of frames in a subsequence is not greater than the preset interval, all frames of the subsequence can be extracted, or frames can be selected randomly. For example, if a subsequence contains only one image frame, only that frame is extracted.
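The interval-based extraction, including the short-subsequence fallback just described, can be sketched as follows. Taking every frame of a short subsequence is used here rather than random selection, and the defaults mirror the example values in the text (an interval of 30 frames and a cap of 600 frames in total); the function name is illustrative.

```python
def sample_frames(subsequences, interval=30, max_total=600):
    """Extract frames from each subsequence at a fixed step; subsequences
    shorter than the interval contribute all of their frames."""
    sampled = []
    for seq in subsequences:
        if len(seq) <= interval:
            sampled.extend(seq)              # short subsequence: keep every frame
        else:
            sampled.extend(seq[::interval])  # one frame every `interval` frames
    return sampled[:max_total]               # cap the total number of frames
```

A 61-frame subsequence with an interval of 30 yields its frames at offsets 0, 30 and 60, while a single-frame subsequence contributes its only frame.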
In this embodiment, to balance processing time against accuracy, the maximum total number of extracted frames is set to the first value, e.g., 600.
S205: converting each extracted image frame to the same size and/or format as the sample images.
S206: inputting each image frame processed by S205 into the trained definition detection model to obtain the definition score of each frame.
S207: sorting all the definition scores to obtain a score sequence.
Specifically, the scores can be sorted in ascending or descending order.
S208: selecting target image frames.
A target image frame is an image frame with a target definition score, and a target definition score is a score that falls within a preset position range of the score sequence.
For example, excluding the top 30% and bottom 30% of the score sequence, the scores in the middle 40% are the target definition scores.
Selecting the frames whose definition scores lie in the middle range to evaluate the definition of the entire video, while discarding the frames with the highest and lowest scores, is motivated by probability distributions: values in the middle range of a data sequence are more reliable, so the video definition obtained from the frames whose scores lie in the middle range is more accurate.
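The selection of the middle 40% of the sorted scores, followed by the mean taken in S209, can be sketched as follows. The keep fraction is a parameter, and the fallback to all scores when too few frames exist is an added assumption not spelled out in the text; the function name is illustrative.

```python
def video_definition_score(frame_scores, keep_fraction=0.4):
    """Sort the per-frame scores, drop (1 - keep_fraction)/2 at each end
    (e.g. the top 30% and bottom 30%), and average the middle range."""
    ordered = sorted(frame_scores)
    n = len(ordered)
    drop = int(n * (1 - keep_fraction) / 2)     # frames discarded at each end
    middle = ordered[drop:n - drop] or ordered  # fallback for tiny inputs
    return sum(middle) / len(middle)
```

With ten scores 0.0 through 0.9, three are dropped at each end and the mean of the remaining four (0.3 to 0.6) is 0.45.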
S209: calculating the mean of the definition scores of the target image frames as the video's definition score.
S210: determining the definition of the video according to its definition score and preset definition threshold ranges.
In the procedure shown in Fig. 2, extracting frames at a preset interval spreads the extracted frames more evenly across the video, so that the definition of the extracted frames is more representative of the video's definition; and determining the video's definition from the frames whose definition scores rank in the target position range (e.g., the middle range) helps obtain a more accurate definition result.
The procedure shown in Fig. 2 is illustrated below with the application scenario of a video app:
The app receives and stores the short videos uploaded by users and limits their duration; within the permitted duration, one short video contains at most 2,000 image frames. Using the method shown in Fig. 2, a short video is obtained from the short-video library as the video to be evaluated; this video contains 2,000 frames. According to SSIM, the video is divided into three subsequences, and image frames are extracted from each subsequence at a preset frame interval; assume 600 frames are extracted in total. The 600 frames are input into the definition evaluation model one by one, yielding 600 definition scores. The scores are sorted, the 240 scores in the middle 40% are selected, and their mean is taken as the short video's definition score. According to the score ranges corresponding to high-definition, generally clear and blurry, the short video's definition is determined to be high-definition, generally clear or blurry.
Through the above process, the definition of every short video can be determined. The app allows users to play short videos whose definition is high-definition or generally clear, and does not allow blurry short videos to be played, thereby improving the user experience.
Fig. 3 is a kind of evaluating apparatus of video definition disclosed in the embodiment of the present application, comprising: categorization module extracts mould Block, evaluation module and determining module.
Wherein, categorization module is used to the image frame sequence in video to be evaluated being divided into sub- sequence according to structural similarity Column, wherein in any one subsequence, the difference of the structural similarity of adjacent picture frame is not more than preset threshold.Extract mould Block is for the abstract image frame from each subsequence respectively.Evaluation module is used to evaluate each the described image frame extracted Clarity.Determining module determines the clarity of the video to be evaluated for the clarity according to each described image frame.
Specifically, the classification module divides the image frame sequence of the video to be evaluated into subsequences according to structural similarity as follows: in the order of the image frame sequence, each image frame is taken in turn as the current image frame until every image frame in the sequence has been traversed, and after each update of the current image frame the following step is performed: if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, the next image frame is divided into the same subsequence as the current image frame; otherwise, the next image frame is divided into a different subsequence from the current image frame.
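A minimal sketch of the partitioning pass above, together with the fixed-interval sampling described next; the `ssim_diff` callable (and the toy integer "frames" in the test values) are assumptions standing in for a real structural-similarity computation such as `skimage.metrics.structural_similarity` on decoded frames:

```python
def split_into_subsequences(frames, ssim_diff, threshold):
    """Walk the frame sequence once; whenever the structural-similarity
    difference between the current frame and the next frame exceeds the
    threshold, start a new subsequence (a likely scene change)."""
    if not frames:
        return []
    subsequences = [[frames[0]]]
    for current, nxt in zip(frames, frames[1:]):
        if ssim_diff(current, nxt) <= threshold:
            subsequences[-1].append(nxt)   # same scene: same subsequence
        else:
            subsequences.append([nxt])     # scene change: new subsequence
    return subsequences


def sample_frames(subsequences, interval):
    """Extract frames from every subsequence at a preset frame interval."""
    return [frame for seq in subsequences for frame in seq[::interval]]
```

Because each subsequence groups frames of similar structure, sampling every subsequence guarantees that the extracted frames cover every scene of the video, which is the stated reason the evaluation is more accurate.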
Specifically, the extraction module extracts image frames from each subsequence as follows: for any one subsequence, image frames are extracted from the subsequence at a preset frame interval.
Specifically, the evaluation module evaluates the clarity of any one image frame as follows: the image frame is input into a clarity detection model obtained by training in advance, and a clarity score of the image frame is obtained. The clarity detection model is configured to extract features from the input image frame and, according to the extracted features, output the probability that the image frame is a clear image; this probability is the clarity score.
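The embodiment uses a trained classifier whose architecture is not disclosed. Purely as a stand-in for illustration, the sketch below scores a grayscale frame with a classical no-reference sharpness proxy, the variance of a 4-neighbour discrete Laplacian; this heuristic is an assumption and is not the patented model:

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour discrete Laplacian over the interior
    pixels of a grayscale image (list of rows). Blurry images have weak
    edges and thus a low Laplacian variance; sharp images score high."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A perfectly flat frame scores 0, while a frame containing a hard vertical edge scores well above it; in the claimed method this role is played by the trained model's clear-image probability instead.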
Specifically, the determination module determines the clarity of the video to be evaluated according to the clarity of each image frame as follows: the clarity scores of all the image frames are sorted to obtain a score sequence; target image frames are selected, a target image frame being an image frame with a target clarity score, the target clarity score being a score that falls within a preset position range of the score sequence; and the clarity score of the video to be evaluated is determined according to the clarity scores of the target image frames.
The apparatus shown in Fig. 3 can obtain the clarity score of a video efficiently and accurately while further reducing resource consumption.
If the functions described in the method embodiments of the present application are implemented in the form of software functional units and sold or used as independent products, they may be stored in a storage medium readable by a computing device. Based on this understanding, the part of the technical solution of the embodiments of the present application that contributes to the prior art may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the application. Therefore, the application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for evaluating video clarity, comprising:
dividing an image frame sequence of a video to be evaluated into subsequences according to structural similarity, wherein within any one subsequence, the difference in structural similarity between adjacent image frames is not greater than a preset threshold;
extracting image frames from each subsequence;
evaluating the clarity of each extracted image frame; and
determining the clarity of the video to be evaluated according to the clarity of each image frame.
2. The method according to claim 1, wherein dividing the image frame sequence of the video to be evaluated into subsequences according to structural similarity comprises:
in the order of the image frame sequence, taking each image frame in the image frame sequence in turn as a current image frame until every image frame in the image frame sequence has been traversed, and for each current image frame, performing the following step:
if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, dividing the next image frame into the same subsequence as the current image frame; otherwise, dividing the next image frame into a different subsequence from the current image frame.
3. The method according to claim 1 or 2, wherein extracting image frames from each subsequence comprises:
for any one subsequence, extracting image frames from the subsequence at a preset frame interval.
4. The method according to claim 1, wherein evaluating the clarity of any one image frame comprises:
inputting the image frame into a clarity detection model obtained by training in advance, to obtain a clarity score of the image frame;
wherein the clarity detection model is configured to extract features from the input image frame and, according to the extracted features, output the probability that the image frame is a clear image, the probability being the clarity score.
5. The method according to claim 1 or 4, wherein determining the clarity of the video to be evaluated according to the clarity of each image frame comprises:
sorting the clarity scores of all the image frames to obtain a score sequence;
selecting target image frames, a target image frame being an image frame with a target clarity score, the target clarity score being a score that falls within a preset position range of the score sequence; and
determining the clarity score of the video to be evaluated according to the clarity scores of the target image frames.
6. An apparatus for evaluating video clarity, comprising:
a classification module, configured to divide an image frame sequence of a video to be evaluated into subsequences according to structural similarity, wherein within any one subsequence, the difference in structural similarity between adjacent image frames is not greater than a preset threshold;
an extraction module, configured to extract image frames from each subsequence;
an evaluation module, configured to evaluate the clarity of each extracted image frame; and
a determination module, configured to determine the clarity of the video to be evaluated according to the clarity of each image frame.
7. The apparatus according to claim 6, wherein the classification module is specifically configured to:
in the order of the image frame sequence, take each image frame in the image frame sequence in turn as a current image frame until every image frame in the image frame sequence has been traversed, and for each current image frame, perform the following step:
if the difference in structural similarity between the current image frame and the next image frame is not greater than the threshold, divide the next image frame into the same subsequence as the current image frame; otherwise, divide the next image frame into a different subsequence from the current image frame.
8. The apparatus according to claim 6 or 7, wherein the extraction module is specifically configured to:
for any one subsequence, extract image frames from the subsequence at a preset frame interval.
9. The apparatus according to claim 6, wherein the evaluation module is specifically configured to:
input any one image frame into a clarity detection model obtained by training in advance, to obtain a clarity score of the image frame;
wherein the clarity detection model is configured to extract features from the input image frame and, according to the extracted features, output the probability that the image frame is a clear image, the probability being the clarity score.
10. The apparatus according to claim 6 or 9, wherein the determination module is specifically configured to:
sort the clarity scores of all the image frames to obtain a score sequence;
select target image frames, a target image frame being an image frame with a target clarity score, the target clarity score being a score that falls within a preset position range of the score sequence; and
determine the clarity score of the video to be evaluated according to the clarity scores of the target image frames.
CN201910203182.5A 2019-03-18 2019-03-18 A kind of evaluation method and device of video definition Pending CN109831680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910203182.5A CN109831680A (en) 2019-03-18 2019-03-18 A kind of evaluation method and device of video definition


Publications (1)

Publication Number Publication Date
CN109831680A (en) 2019-05-31

Family

ID=66870342




Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070088516A1 (en) * 2005-10-14 2007-04-19 Stephen Wolf Low bandwidth reduced reference video quality measurement method and apparatus
CN105761263A (en) * 2016-02-19 2016-07-13 浙江大学 Video key frame extraction method based on shot boundary detection and clustering
CN106412567A (en) * 2016-09-19 2017-02-15 北京小度互娱科技有限公司 Method and system for determining video definition
CN108154103A (en) * 2017-12-21 2018-06-12 百度在线网络技术(北京)有限公司 Detect method, apparatus, equipment and the computer storage media of promotion message conspicuousness
CN108305240A (en) * 2017-05-22 2018-07-20 腾讯科技(深圳)有限公司 Picture quality detection method and device


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111061895A (en) * 2019-07-12 2020-04-24 北京达佳互联信息技术有限公司 Image recommendation method and device, electronic equipment and storage medium
CN110490845A (en) * 2019-07-26 2019-11-22 北京大米科技有限公司 A kind of image characteristic extracting method, device, storage medium and electronic equipment
CN110781740A (en) * 2019-09-20 2020-02-11 网宿科技股份有限公司 Video image quality identification method, system and equipment
CN110781740B (en) * 2019-09-20 2023-04-07 网宿科技股份有限公司 Video image quality identification method, system and equipment
WO2021129435A1 (en) * 2019-12-27 2021-07-01 百果园技术(新加坡)有限公司 Method for training video definition evaluation model, video recommendation method, and related device
CN111163338A (en) * 2019-12-27 2020-05-15 广州市百果园网络科技有限公司 Video definition evaluation model training method, video recommendation method and related device
CN111242205A (en) * 2020-01-07 2020-06-05 北京小米移动软件有限公司 Image definition detection method, device and storage medium
CN111242205B (en) * 2020-01-07 2023-11-28 北京小米移动软件有限公司 Image definition detection method, device and storage medium
CN111314733A (en) * 2020-01-20 2020-06-19 北京百度网讯科技有限公司 Method and apparatus for evaluating video sharpness
CN111696078A (en) * 2020-05-14 2020-09-22 国家广播电视总局广播电视规划院 Ultrahigh-definition video detection method and system
CN111836073A (en) * 2020-07-10 2020-10-27 腾讯科技(深圳)有限公司 Method, device and equipment for determining video definition and storage medium
CN112135140A (en) * 2020-09-17 2020-12-25 上海连尚网络科技有限公司 Video definition recognition method, electronic device and storage medium
WO2022057789A1 (en) * 2020-09-17 2022-03-24 上海连尚网络科技有限公司 Video definition identification method, electronic device, and storage medium
CN112135140B (en) * 2020-09-17 2023-11-28 上海连尚网络科技有限公司 Video definition identification method, electronic device and storage medium
CN112233075A (en) * 2020-09-30 2021-01-15 腾讯科技(深圳)有限公司 Video definition evaluation method and device, storage medium and electronic equipment
CN112233075B (en) * 2020-09-30 2024-02-20 腾讯科技(深圳)有限公司 Video definition evaluation method and device, storage medium and electronic equipment
CN112365447A (en) * 2020-10-20 2021-02-12 四川长虹电器股份有限公司 Multidimensional movie and television scoring method
CN112818737A (en) * 2020-12-18 2021-05-18 广州视源电子科技股份有限公司 Video identification method and device, storage medium and terminal
CN112818737B (en) * 2020-12-18 2024-02-02 广州视源电子科技股份有限公司 Video identification method, device, storage medium and terminal
CN114095722A (en) * 2021-10-08 2022-02-25 钉钉(中国)信息技术有限公司 Definition determining method, device and equipment
WO2023056896A1 (en) * 2021-10-08 2023-04-13 钉钉(中国)信息技术有限公司 Definition determination method and apparatus, and device
CN114449343A (en) * 2022-01-28 2022-05-06 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN117041625A (en) * 2023-08-02 2023-11-10 成都梵辰科技有限公司 Method and system for constructing ultra-high definition video image quality detection network
CN117041625B (en) * 2023-08-02 2024-04-19 成都梵辰科技有限公司 Method and system for constructing ultra-high definition video image quality detection network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190531