CN109462751B - Evaluation method and device for a prediction model - Google Patents

Evaluation method and device for a prediction model

Info

Publication number
CN109462751B
CN109462751B (application CN201811224095.XA)
Authority
CN
China
Prior art keywords
segment
highlight
prediction
video
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811224095.XA
Other languages
Chinese (zh)
Other versions
CN109462751A (en)
Inventor
Ma Longfei (马龙飞)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201811224095.XA
Publication of CN109462751A
Application granted
Publication of CN109462751B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/004 Diagnosis, testing or measuring for television systems or their details for digital television systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an evaluation method and device for a prediction model, aiming to solve the problems that existing evaluation methods are complex and their results inaccurate. The method comprises the following steps: acquiring a plurality of videos for evaluating a prediction model; for each video, acquiring at least one highlight segment extracted from the current video; predicting the current video by using the prediction model to obtain a prediction curve corresponding to the current video, and extracting at least one high-score segment from the prediction curve; determining prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments; and calculating an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video. The invention greatly reduces the workload of the annotators, simplifies the evaluation process, improves evaluation efficiency, and makes the evaluation process more objective, so that the evaluation result is more accurate.

Description

Evaluation method and device for a prediction model
Technical Field
The present invention relates to the field of model evaluation technologies, and in particular, to a prediction model evaluation method and a prediction model evaluation device.
Background
With the rapid development of internet technology, people increasingly rely on the internet for all kinds of information. For example, a large number of video websites have emerged to meet users' viewing needs. A video website is an online service that, backed by a mature technical platform, lets internet users publish, browse, and share videos.
A video website generally runs various analyses on its online videos: whether a video is liked by the audience, which parts of the video the audience likes, and so on, in order to guide targeted promotion and operations and improve the user experience. After a video website has operated for some time, it accumulates a large amount of viewing data, such as playback data, seek (drag) data, and bullet-screen (danmaku) comment data, and the online videos can be analyzed through these user data. From such analyses the website computes a score for each second of a video; plotted as a curve, the scores reflect how audience interest changes over the video. The website can then analyze why the curve rises and where its peaks come from, and likewise why it falls and where its valleys come from, so that targeted decisions can be made.
A prediction model for videos can be trained on the curves of a large number of videos, and the curve corresponding to a new video can then be predicted by the model. Accurate and effective targeted promotion requires that the prediction model be accurate; otherwise the recommendations it produces are useless. The accuracy of the prediction model can be determined by evaluating it. In the prior art, a commonly adopted evaluation method is to collect a batch of videos, have annotators watch each video in advance and mark a score for each second, treat the marked scores as the true scores of the video, predict a score for each second with the prediction model, compute the error between the predicted and true scores, and finally obtain the accuracy of the prediction model.
However, the biggest problem with this evaluation is that manually scoring every second of a video is hard to do well. An annotator must first watch the whole video to gain an overall impression, then watch it again to judge each second and assign a score, which is a very large workload. Moreover, the per-second scores are relative judgments of how exciting each moment is; because videos are long, annotators tend to forget earlier parts by the time they reach later ones, and once they can no longer compare the later parts of a video against the earlier parts they cannot score objectively. A prediction model therefore cannot be accurately evaluated on the basis of such annotations. In short, the existing evaluation methods are rather cumbersome and their evaluations inaccurate.
Disclosure of Invention
The embodiment of the invention provides a prediction model evaluation method and a prediction model evaluation device, aiming to solve the problems that existing evaluation methods are complex and their results inaccurate.
In order to solve the above technical problem, an embodiment of the present invention provides an evaluation method of a prediction model, including:
acquiring a plurality of videos for evaluating a prediction model;
for each video, acquiring at least one highlight segment extracted from the current video;
predicting the current video by using the prediction model to obtain a prediction curve corresponding to the current video, and extracting at least one high-score segment from the prediction curve;
determining prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments;
and calculating an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video.
Preferably, the step of determining the prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments includes: for each highlight segment, judging whether the current highlight segment is located in a high-score segment; if yes, determining the current highlight segment to be an accurately predicted highlight segment; and calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments.
Preferably, the step of judging whether the current highlight segment is located in a high-score segment includes: judging whether the current highlight segment has an intersection with a high-score segment according to their corresponding start-stop times; if an intersection exists, calculating the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection; and if the intersection-over-union ratio is larger than a preset threshold value, determining that the current highlight segment is located in the high-score segment.
Preferably, the step of calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments includes: calculating the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy; and/or calculating the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
On the other hand, an embodiment of the present invention further provides an evaluation apparatus for a prediction model, including:
an obtaining module, configured to obtain a plurality of videos for evaluating a prediction model;
the first extraction module is used for acquiring, for each video, at least one highlight segment extracted from the current video;
the second extraction module is used for predicting the current video by using the prediction model to obtain a prediction curve corresponding to the current video and extracting at least one high-score segment from the prediction curve;
the determining module is used for determining prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments;
and the calculation module is used for calculating an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video.
Preferably, the determining module comprises: the segment judgment submodule, used for judging, for each highlight segment, whether the current highlight segment is located in a high-score segment; the segment determining submodule, used for determining the current highlight segment to be an accurately predicted highlight segment if the segment judgment submodule judges yes; and the parameter calculation submodule, used for calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments.
Preferably, the segment judgment submodule includes: the intersection judging unit, used for judging whether the current highlight segment has an intersection with a high-score segment according to their corresponding start-stop times; the ratio calculation unit, used for calculating the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection if the intersection judging unit judges that an intersection exists; and the comparison determining unit, used for determining that the current highlight segment is located in the high-score segment if the intersection-over-union ratio is greater than a preset threshold value.
Preferably, the prediction parameters include accuracy and/or recall, and the parameter calculation submodule includes: the first calculating unit, used for calculating the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy; and/or the second calculating unit, used for calculating the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
In the embodiment of the invention, a plurality of videos for evaluating a prediction model are obtained first; then, for each video, at least one highlight segment extracted from the current video is acquired, the current video is predicted by the prediction model to obtain a corresponding prediction curve, at least one high-score segment is extracted from the prediction curve, and the prediction parameters of the prediction model for the current video are determined according to the highlight segments and the high-score segments; finally, the evaluation index of the prediction model is calculated according to the prediction parameters of the prediction model for each video. The annotator therefore does not need to score every second of a video: highlight segments are simply extracted from the video, and the segments themselves do not need to be scored. This greatly reduces the annotators' workload, simplifies the evaluation process, improves evaluation efficiency, and makes the evaluation more objective, so that the evaluation result is more accurate.
Drawings
FIG. 1 is a flow chart of the steps of a method of evaluating a predictive model according to an embodiment of the invention;
FIG. 2 is a schematic diagram of trend and discretization of a prediction curve according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the distribution of highlight segments and highlight segments after time-axis sorting according to an embodiment of the present invention;
FIG. 4 is a block diagram of a prediction model evaluation apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flow chart illustrating steps of a method for evaluating a predictive model according to an embodiment of the present invention is shown.
The evaluation method of the prediction model of the embodiment of the invention comprises the following steps:
Step 101, a plurality of videos for evaluating a prediction model are obtained.
In practical application, a large number of videos on a video website can be analyzed in advance to obtain curves corresponding to the videos, and the curves are used for training to obtain a prediction model for video prediction. In order to ensure the accuracy of the prediction model, the prediction model can be evaluated, and the method for evaluating the prediction model is described in the embodiment of the invention.
In the embodiment of the invention, a plurality of videos can be acquired from the video website; these videos are used to evaluate the prediction model. As for the specific number of videos obtained, those skilled in the art may select any suitable value according to practical experience, and the embodiment of the present invention is not limited thereto. The greater the number of videos acquired, the more accurate the assessment. For example, 100 videos may be obtained from a video website for evaluating the prediction model.
Step 102, for each video, at least one highlight segment extracted from the current video is obtained.
In the embodiment of the invention, after the videos for evaluating the prediction model are acquired, each video can be labeled by annotators so as to extract at least one highlight segment from it. For example, a labeling rule may be preset to specify what kind of segment counts as a highlight segment, and annotators are then organized to label the highlight segments in each video according to that rule, i.e., to extract the highlight segments of each video. The annotators therefore do not need to give a specific score for every second of the video; after watching a video they only need to pick out the several most exciting segments based on the relative quality of its different parts, and the segments themselves need not be scored, nor does any specific value between 0 and 100 need to be given. This greatly reduces the annotators' workload, saves a great deal of time, and keeps the labeling cost very low.
Each video used for evaluating the prediction model is analyzed separately so as to obtain the prediction parameters of the prediction model for that video. The analysis process is the same for every video, so the embodiment of the present invention is mainly described by taking the analysis of one video as an example.
For the current video, at least one highlight segment extracted from it is acquired. The highlight segments are the segments of the current video that the annotators marked as the more exciting ones.
Here the current video is the video being analyzed. For example, if the 1st of the 100 videos is being analyzed, the current video is the 1st video; if the 2nd video is being analyzed, the current video is the 2nd video, and so on.
Step 103, the current video is predicted by using the prediction model to obtain a prediction curve corresponding to the current video, and at least one high-score segment is extracted from the prediction curve.
The current video is input into the prediction model, which predicts it and outputs a prediction curve corresponding to the current video; this prediction curve is the prediction model's prediction result for the current video.
On the prediction curve, the horizontal axis represents time and the vertical axis represents the highlight value, so the height of the curve at each moment gives the predicted highlight value at that moment. In the embodiment of the invention, at least one high-score segment can be extracted from the prediction curve corresponding to the current video. A high-score segment is a segment of the current video that the prediction model predicts to have a relatively high highlight value.
In a specific implementation manner, the high-score segments can be extracted by discretizing the prediction curve. The overall average value of the prediction curve can be used as the reference standard: this average is the mean highlight value of the video, and the parts of the prediction curve above the average are the parts the prediction model considers exciting. The average thus divides the prediction curve into two parts: the part above the average and the part below it. Segments above the average are called high-score segments and segments below the average are called low-score segments.
Specifically, the average of all highlight values on the prediction curve is calculated, and the segments of the prediction curve whose highlight values are above that average are extracted; the extracted segments are the discretized high-score segments, i.e. the parts the prediction model considers high-scoring.
Referring to fig. 2, a schematic diagram of the trend and discretization of a prediction curve according to an embodiment of the present invention is shown. In fig. 2, the horizontal axis represents time and the vertical axis represents the highlight value; the curve shows the trend of the predicted highlight values, the horizontal line between highlight values 60 and 80 is the average of all highlight values, and the block parts mark the high-score segments of the prediction curve whose highlight values are above the average.
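For illustration only, this discretization step can be sketched in a few lines of Python. The function name extract_high_score_segments and the representation of the prediction curve as a per-second list of scores are assumptions of ours, not part of the patent:

from typing import List, Tuple

def extract_high_score_segments(scores: List[float]) -> List[Tuple[int, int]]:
    """Return (start, end) second indices of the segments whose
    highlight scores lie above the overall mean of the prediction curve."""
    mean = sum(scores) / len(scores)       # reference standard: overall average
    segments, start = [], None
    for t, s in enumerate(scores):
        if s > mean and start is None:
            start = t                      # curve rises above the mean: a segment opens
        elif s <= mean and start is not None:
            segments.append((start, t))    # curve falls back below: the segment closes
            start = None
    if start is not None:                  # curve ends while still above the mean
        segments.append((start, len(scores)))
    return segments

# e.g. extract_high_score_segments([50, 90, 95, 40, 70, 85]) -> [(1, 3), (5, 6)]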
Step 104, the prediction parameters of the prediction model for the current video are determined according to the highlight segments and the high-score segments.
Through steps 102 and 103 above, the highlight segments and the high-score segments corresponding to the current video are obtained, and the prediction parameters of the prediction model for the current video can be determined from them.
In one specific implementation, the step 104 may include:
A1, determining the accurately predicted highlight segments according to the highlight segments and the high-score segments.
A high-score segment is a segment that the prediction model predicts to have a high highlight value. Whether the prediction model's prediction is accurate for each highlight segment is determined from the highlight segments and the high-score segments.
The idea underlying the embodiment of the invention is this: in the prediction curve that the model produces for the current video, the peak parts of the curve (i.e. the high-score segments) should match the highlight segments; if they do, the model's prediction of those highlight segments can be considered to meet expectations. That is, the prediction of a highlight segment by the prediction model can be considered accurate as long as the highlight segment is located in a high-score segment.
Therefore, the step A1 may specifically include: for each highlight segment, judging whether the current highlight segment is located in a high-score segment; and if so, determining the current highlight segment to be an accurately predicted highlight segment.
Whether the current highlight segment is located in a high-score segment can be judged from the degree of overlap between the two. For example, the overlap can be measured by the Intersection-over-Union (IoU) of the current highlight segment and the high-score segment. The step of judging whether the current highlight segment is located in a high-score segment may therefore include:
A11, judging whether the current highlight segment has an intersection with a high-score segment according to their corresponding start-stop times.
First, intersections between the high-score segments and the highlight segments are found. When an annotator extracts a highlight segment from a video, its start-stop time can be annotated, e.g. [20-120, 230-350, … ], indicating that the first highlight segment runs from second 20 to second 120, the second from second 230 to second 350, and so on. When a high-score segment is extracted from the prediction curve, its start-stop time can be read off the curve's horizontal axis.
In the embodiment of the invention, whether the current highlight segment intersects a high-score segment can then be judged from the corresponding start-stop times.
In a specific implementation, the highlight segments and the high-score segments may be sorted on the time axis according to their respective start-stop times. Fig. 3 is a schematic diagram illustrating the distribution of highlight segments and high-score segments after time-axis sorting according to an embodiment of the present invention. In fig. 3, the curve is the prediction curve corresponding to the current video; a, b and c are the 3 high-score segments extracted from the prediction curve, and A, B, C and D are the 4 highlight segments extracted from the current video. As can be seen from fig. 3, highlight segment A intersects high-score segment a, highlight segment B intersects high-score segment b, and highlight segment D intersects high-score segment c.
A12, if an intersection exists, calculating the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection.
The numerator of the intersection-over-union ratio is the intersection, i.e. the length of the overlap between the high-score segment and the highlight segment; the denominator is the union, i.e. the length of the combined span of the two segments; the ratio of the two is the intersection-over-union ratio.
Thus, the step of calculating the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection may include: determining the intersection duration of the current highlight segment and the intersecting high-score segment, and determining the union duration of the two; and calculating the ratio of the intersection duration to the union duration as their intersection-over-union ratio.
For example, if the start-stop time of highlight segment A is [30, 120] and that of high-score segment a is [20, 40], their intersection duration is 10 and their union duration is 100, so the intersection-over-union ratio of A and a is 0.1. The start-stop time of highlight segment B is [160, 250] and that of high-score segment b is [170, 260]; the intersection duration is 80 and the union duration is 100, so the ratio of B and b is 0.8. The start-stop time of highlight segment D is [450, 530] and that of high-score segment c is [460, 550]; the intersection duration is 70 and the union duration is 100, so the ratio of D and c is 0.7.
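A minimal sketch of this intersection-over-union computation, under the same assumed representation as the earlier snippet (a segment as a (start, end) pair of seconds; the name segment_iou is ours):

from typing import Tuple

def segment_iou(seg_a: Tuple[int, int], seg_b: Tuple[int, int]) -> float:
    """Intersection-over-union of two time segments given as (start, end) seconds."""
    inter = max(0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

# Reproduces the worked example above:
assert abs(segment_iou((30, 120), (20, 40)) - 0.1) < 1e-9     # A vs a
assert abs(segment_iou((160, 250), (170, 260)) - 0.8) < 1e-9  # B vs b
assert abs(segment_iou((450, 530), (460, 550)) - 0.7) < 1e-9  # D vs c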
A13, if the intersection-over-union ratio is larger than a preset threshold value, determining that the current highlight segment is located in the high-score segment.
The larger the intersection-over-union ratio, the higher the degree of coincidence between the high-score segment and the highlight segment, i.e. the more consistent the model's predicted high-score segment is with the real highlight segment, and hence the more accurate the prediction model. Therefore, a threshold value can be preset in the embodiment of the present invention: if the intersection-over-union ratio corresponding to the current highlight segment is greater than the preset threshold, the current highlight segment can be determined to be located in the high-score segment, i.e. to be an accurately predicted highlight segment.
For the specific value of the preset threshold, a person skilled in the art may select any suitable value according to practical experience, and the embodiment of the present invention is not limited thereto.
For example, with a preset threshold of 0.5, an intersection-over-union ratio of 0.1 for highlight segment A and high-score segment a, 0.8 for B and b, and 0.7 for D and c, highlight segments B and D are determined to be accurately predicted highlight segments.
A2, calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments.
In the embodiment of the invention, the prediction parameters of the prediction model for the current video may include the accuracy acc and/or the recall rate recall, where "and/or" means that either or both may be included.
Therefore, the step of calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments may include:
A21, calculating the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy.
and/or,
A22, calculating the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
For example, continuing the example above: there are 4 highlight segments A, B, C and D and 3 high-score segments a, b and c, and highlight segments B and D are the accurately predicted ones, i.e. the number of accurately predicted highlight segments is 2. Therefore the accuracy of the prediction model on the current video is 2/3 ≈ 66.7%, and the recall rate is 2/4 = 50%.
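The per-video computation can be sketched as follows, reusing the hypothetical segment_iou helper above; the function name video_prediction_params, the default threshold of 0.5, and the placement of segment C (whose start-stop time the text does not give) are our assumptions:

from typing import List, Tuple

def video_prediction_params(
    highlights: List[Tuple[int, int]],   # annotated highlight segments
    high_scores: List[Tuple[int, int]],  # high-score segments from the curve
    threshold: float = 0.5,
) -> Tuple[float, float]:
    """Accuracy and recall for one video: a highlight segment counts as
    accurately predicted when its IoU with some high-score segment
    exceeds the threshold."""
    hits = sum(
        1 for h in highlights
        if any(segment_iou(h, p) > threshold for p in high_scores)
    )
    accuracy = hits / len(high_scores) if high_scores else 0.0
    recall = hits / len(highlights) if highlights else 0.0
    return accuracy, recall

# Running example (segment C placed arbitrarily so it overlaps nothing):
acc, rec = video_prediction_params(
    [(30, 120), (160, 250), (300, 360), (450, 530)],  # A, B, C, D
    [(20, 40), (170, 260), (460, 550)],               # a, b, c
)
# acc == 2/3, rec == 2/4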
Step 105, the evaluation index of the prediction model is calculated according to the prediction parameters of the prediction model for each video.
Following the processes of steps 102 to 104, the prediction parameters of the prediction model for each video can be calculated, and the evaluation index of the prediction model is then calculated from those prediction parameters. The prediction performance of the prediction model can be read off its evaluation index.
In an embodiment of the present invention, the evaluation index of the prediction model may include at least one of: accuracy, recall, F1 score.
The accuracy rate acc of the prediction model is the average of its accuracies over all videos, and its recall rate recall is the average of its recall rates over all videos. The F1 score of the prediction model is F1-score = 2 * acc * recall / (acc + recall).
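Finally, the aggregation over all videos admits an equally short sketch (the name evaluation_indexes is ours; it assumes per-video (accuracy, recall) pairs computed as above):

from typing import List, Tuple

def evaluation_indexes(per_video: List[Tuple[float, float]]) -> Tuple[float, float, float]:
    """Average the per-video (accuracy, recall) pairs and derive
    F1 = 2 * acc * recall / (acc + recall)."""
    acc = sum(a for a, _ in per_video) / len(per_video)
    recall = sum(r for _, r in per_video) / len(per_video)
    f1 = 2 * acc * recall / (acc + recall) if acc + recall > 0 else 0.0
    return acc, recall, f1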
In the embodiment of the invention, annotators are not required to score every second of a video; highlight segments are simply extracted from the video, and the segments themselves need not be scored. This greatly reduces the annotators' workload, simplifies the evaluation process, improves evaluation efficiency, and makes the evaluation more objective, so that the evaluation result is more accurate.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of an evaluation apparatus for a prediction model according to an embodiment of the present invention is shown.
The evaluation device of the prediction model of the embodiment of the invention comprises the following modules:
an obtaining module 401, configured to obtain a plurality of videos for evaluating a prediction model;
a first extraction module 402, configured to acquire, for each video, at least one highlight segment extracted from the current video;
a second extraction module 403, configured to predict the current video by using the prediction model to obtain a prediction curve corresponding to the current video, and extract at least one high-score segment from the prediction curve;
a determining module 404, configured to determine prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments;
a calculating module 405, configured to calculate an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video.
In a preferred embodiment, the determining module comprises: a segment judgment submodule, configured to judge, for each highlight segment, whether the current highlight segment is located in a high-score segment; a segment determining submodule, configured to determine the current highlight segment to be an accurately predicted highlight segment if the segment judgment submodule judges yes; and a parameter calculation submodule, configured to calculate the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments.
In a preferred embodiment, each highlight segment corresponds to a start-stop time and each high-score segment corresponds to a start-stop time. The segment judgment submodule includes: an intersection judging unit, configured to judge whether the current highlight segment has an intersection with a high-score segment according to the corresponding start-stop times; a ratio calculation unit, configured to calculate the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection if the intersection judging unit judges that an intersection exists; and a comparison determining unit, configured to determine that the current highlight segment is located in the high-score segment if the intersection-over-union ratio is greater than a preset threshold value.
In a preferred embodiment, the ratio calculation unit is specifically configured to determine the intersection duration of the current highlight segment and the intersecting high-score segment, determine the union duration of the two, and calculate the ratio of the intersection duration to the union duration as their intersection-over-union ratio.
In a preferred embodiment, the prediction parameters include accuracy and/or recall. The parameter calculation submodule includes: a first calculating unit, configured to calculate the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy; and/or a second calculating unit, configured to calculate the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
In a preferred embodiment, the horizontal axis of the prediction curve represents time and the vertical axis represents the highlight value, and the second extraction module includes: a score calculation submodule, configured to calculate the average of all highlight values on the prediction curve; and a segment extraction submodule, configured to extract the segments of the prediction curve whose highlight values are above the average as the high-score segments.
In the embodiment of the invention, annotators are not required to score every second of a video; highlight segments are simply extracted from the video, and the segments themselves need not be scored. This greatly reduces the annotators' workload, simplifies the evaluation process, improves evaluation efficiency, and makes the evaluation more objective, so that the evaluation result is more accurate.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above detailed description is provided for the evaluation method of a prediction model and the evaluation device of a prediction model, and the principle and the implementation of the present invention are explained by applying specific examples, and the description of the above examples is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (4)

1. A method for evaluating a prediction model, comprising:
acquiring a plurality of videos for evaluating a prediction model;
for each video, acquiring at least one highlight segment extracted from the current video; the highlight segments are the more exciting segments of the current video marked by annotators;
predicting the current video by using the prediction model to obtain a prediction curve corresponding to the current video, and extracting at least one high-score segment from the prediction curve; the prediction model is a model that takes the whole video as input and outputs the prediction curve corresponding to the video;
determining prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments;
and calculating an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video;
wherein the step of determining the prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments comprises:
for each highlight segment, judging whether the current highlight segment is located in a high-score segment;
if yes, determining the current highlight segment to be an accurately predicted highlight segment;
and calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments;
wherein the step of judging whether the current highlight segment is located in a high-score segment comprises:
judging whether the current highlight segment has an intersection with a high-score segment according to the corresponding start-stop times;
if an intersection exists, calculating the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection;
and if the intersection-over-union ratio is larger than a preset threshold value, determining that the current highlight segment is located in the high-score segment.
2. The method according to claim 1, wherein the prediction parameters include accuracy and/or recall, and the step of calculating the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments comprises:
calculating the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy;
and/or,
calculating the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
3. An apparatus for evaluating a prediction model, comprising:
an obtaining module, configured to obtain a plurality of videos for evaluating a prediction model;
a first extraction module, configured to acquire, for each video, at least one highlight segment extracted from the current video; the highlight segments are the more exciting segments of the current video marked by annotators;
a second extraction module, configured to predict the current video by using the prediction model to obtain a prediction curve corresponding to the current video, and extract at least one high-score segment from the prediction curve;
a determining module, configured to determine prediction parameters of the prediction model for the current video according to the highlight segments and the high-score segments; the prediction model is a model that takes the whole video as input and outputs the prediction curve corresponding to the video;
a calculation module, configured to calculate an evaluation index of the prediction model according to the prediction parameters of the prediction model for each video;
wherein the determining module comprises:
a segment judgment submodule, configured to judge, for each highlight segment, whether the current highlight segment is located in a high-score segment;
a segment determining submodule, configured to determine the current highlight segment to be an accurately predicted highlight segment if the segment judgment submodule judges yes;
a parameter calculation submodule, configured to calculate the prediction parameters of the prediction model for the current video according to the number of accurately predicted highlight segments;
wherein the segment judgment submodule includes:
an intersection judging unit, configured to judge whether the current highlight segment has an intersection with a high-score segment according to the corresponding start-stop times;
a ratio calculation unit, configured to calculate the intersection-over-union ratio of the current highlight segment and the high-score segment having the intersection if the intersection judging unit judges that an intersection exists;
and a comparison determining unit, configured to determine that the current highlight segment is located in the high-score segment if the intersection-over-union ratio is greater than a preset threshold value.
4. The apparatus of claim 3, wherein the prediction parameters comprise accuracy and/or recall, and wherein the parameter calculation submodule comprises:
a first calculating unit, configured to calculate the ratio of the number of accurately predicted highlight segments to the total number of high-score segments as the accuracy;
and/or,
a second calculating unit, configured to calculate the ratio of the number of accurately predicted highlight segments to the total number of highlight segments as the recall rate.
CN201811224095.XA 2018-10-19 2018-10-19 Evaluation method and device for a prediction model Active CN109462751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811224095.XA CN109462751B (en) Evaluation method and device for a prediction model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811224095.XA CN109462751B (en) Evaluation method and device for a prediction model

Publications (2)

Publication Number Publication Date
CN109462751A CN109462751A (en) 2019-03-12
CN109462751B (en) 2020-07-21

Family

ID=65607953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811224095.XA Active CN109462751B (en) 2018-10-19 2018-10-19 Estimation method and device of prediction model

Country Status (1)

Country Link
CN (1) CN109462751B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685144B (en) * 2018-12-26 2021-02-12 上海众源网络有限公司 Method and device for evaluating video model and electronic equipment
CN110505519B (en) * 2019-08-14 2021-12-03 咪咕文化科技有限公司 Video editing method, electronic equipment and storage medium
CN113497977A (en) * 2020-03-18 2021-10-12 阿里巴巴集团控股有限公司 Video processing method, model training method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9268996B1 (en) * 2011-01-20 2016-02-23 Verint Systems Inc. Evaluation of models generated from objects in video
CN104346372B (en) * 2013-07-31 2018-03-27 国际商业机器公司 Method and apparatus for assessment prediction model
CN108537139B (en) * 2018-03-20 2021-02-19 校宝在线(杭州)科技股份有限公司 Online video highlight analysis method based on bullet screen information

Also Published As

Publication number Publication date
CN109462751A (en) 2019-03-12

Similar Documents

Publication Publication Date Title
CN110825957B (en) Deep learning-based information recommendation method, device, equipment and storage medium
CN106951925B (en) Data processing method, device, server and system
CN110149540B (en) Recommendation processing method and device for multimedia resources, terminal and readable medium
CN109462751B (en) Estimation method and device of prediction model
EP3239855A1 (en) Analysis and collection system for user interest data and method therefor
WO2017096877A1 (en) Recommendation method and device
US20140172415A1 (en) Apparatus, system, and method of providing sentiment analysis result based on text
KR101804170B1 (en) Item recommendation method and apparatus thereof utilizing uninteresting item and apparatus
WO2014057963A1 (en) Forensic system, forensic method, and forensic program
CN107153656B (en) Information searching method and device
CN111680165B (en) Information matching method and device, readable storage medium and electronic equipment
CN112765974B (en) Service assistance method, electronic equipment and readable storage medium
CN106156098B (en) Error correction pair mining method and system
CN102063456A (en) Method for positioning to optic center of webpage automatically and device
CN112699295A (en) Webpage content recommendation method and device and computer readable storage medium
CN105488599B (en) Method and device for predicting article popularity
CN110909005B (en) Model feature analysis method, device, equipment and medium
JP5986687B2 (en) Data separation system, data separation method, program for data separation, and recording medium for the program
US20150088876A1 (en) Forensic system, forensic method, and forensic program
CN111177500A (en) Data object classification method and device, computer equipment and storage medium
CN112464036B (en) Method and device for auditing violation data
JP2012008899A (en) Retrieval query recommendation method, retrieval query recommendation device and retrieval query recommendation program
CN105809488B (en) Information processing method and electronic equipment
US20160155207A1 (en) Document identification and inspection system, document identification and inspection method, and document identification and inspection program
KR101614843B1 (en) The method and judgement apparatus for detecting concealment of social issue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant