CN108495107A - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
CN108495107A
CN108495107A (application CN201810085226.4A)
Authority
CN
China
Prior art keywords
specified video
video
color grading
configuration file
image scene
Prior art date
Legal status
Pending
Application number
CN201810085226.4A
Other languages
Chinese (zh)
Inventor
陈杰
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201810085226.4A
Publication of CN108495107A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/64: Circuits for processing colour signals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content

Abstract

The invention discloses a video processing method, a video processing device, an electronic device and a computer-readable storage medium. The method includes: identifying the image content of a specified video; generating, according to the recognition result, a configuration file describing the image content of the specified video; and, when a client requests to play the video, providing the configuration file of the video to the client, so that the specified video is color graded according to the configuration file when it is played. In this way, what the user sees when playing the video is the color-graded video. Compared with a video that has not been color graded, the graded video has more vivid colors, can achieve an effect that satisfies the user, better meets the user's playback needs, and enhances the user's experience.

Description

Video processing method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a video processing method, a video processing device, an electronic device and a computer-readable storage medium.
Background art
As the functions of electronic devices keep increasing, the video playback functions of electronic devices are also becoming more and more complete, for example broadcasting a specified video or live streaming over the network. When a user plays a video, whatever effect the video resource has is the effect the video shows during playback. Inevitably, however, the presentation of some videos is unsatisfactory: they cannot achieve an effect that satisfies the user and cannot meet the user's needs.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a video processing method, a video processing device, an electronic device and a computer-readable storage medium that overcome the above problems or at least partly solve them.
According to one aspect of the present invention, a video processing method is provided, wherein the method includes:
identifying the image content of a specified video;
generating, according to the recognition result, a configuration file describing the image content of the specified video;
providing the configuration file, so that the video is color graded according to the configuration file.
Optionally, identifying the image content of the specified video includes:
presetting video image scenes of different categories;
identifying the video image scene category to which the image content of the specified video belongs, and recording the start and end times of the video image scene of the corresponding category in the specified video.
Optionally, identifying the video image scene category to which the image content of the specified video belongs includes:
inputting the specified video in turn into each machine learning model used to recognize a video image scene of a different category in a video;
obtaining the recognition result output by each machine learning model.
Optionally, the method further includes:
for a video image scene of one category, obtaining videos that belong to the video image scene of that category, inputting the obtained videos as training data into a machine learning model for training, and obtaining a machine learning model for recognizing the video image scene of that category in a video;
and so on, obtaining the machine learning model corresponding to the video image scene of each category.
Optionally, the video image scenes of different categories include:
a video image scene containing a human face;
a landscape-type video image scene.
Optionally, generating, according to the recognition result, the configuration file describing the image content of the specified video further includes:
when the specified video includes a video image scene containing a human face, further judging, according to the size ratio of the face in it, whether to color grade the video image scene containing the face, and writing the judgment result into the configuration file.
Optionally, the method further includes:
writing into the configuration file a color grading scheme corresponding to the image content of the specified video described in the configuration file.
Optionally, the method further includes:
when the image content of the specified video does not belong to any preset video image scene category, marking in the configuration file of the specified video that the general color grading rule is applicable to the specified video.
Optionally, the method further includes:
when the image content of the specified video does not belong to any preset video image scene category, judging whether the general color grading rule is applicable to the specified video;
if applicable, marking in the configuration file of the specified video that the general color grading rule is applicable to the specified video;
if not applicable, marking in the configuration file of the specified video that the specified video is not suitable for any color grading.
Optionally, judging whether the general color grading rule is applicable to the specified video includes:
inputting the histogram of the specified video into a machine learning model for recognizing videos that are unsuitable for color grading;
if the machine learning model outputs a result confirming that the specified video is unsuitable for color grading, determining that the general color grading rule is not applicable to the specified video; otherwise, determining that the general color grading rule is applicable to the specified video.
Optionally, the method further includes:
obtaining a certain number of videos that are unsuitable for color grading;
inputting the histograms of these videos that are unsuitable for color grading as training data into a machine learning model for training, to obtain a machine learning model for recognizing videos that are unsuitable for color grading.
According to another aspect of the present invention, a video processing device is provided, wherein the device includes:
a recognition unit, adapted to identify the image content of a specified video;
a configuration file generation unit, adapted to generate, according to the recognition result, a configuration file describing the image content of the specified video;
a providing unit, adapted to provide the configuration file, so that the video is color graded according to the configuration file.
Optionally,
the recognition unit is adapted to preset video image scenes of different categories, to identify the video image scene category to which the image content of the specified video belongs, and to record the start and end times of the video image scene of the corresponding category in the specified video.
Optionally,
the recognition unit is adapted to input the specified video in turn into each machine learning model used to recognize a video image scene of a different category in a video, and to obtain the recognition result output by each machine learning model.
Optionally, the device further includes:
a first machine learning model acquisition unit, adapted, for a video image scene of one category, to obtain videos that belong to the video image scene of that category, to input the obtained videos as training data into a machine learning model for training, and to obtain the machine learning model for recognizing the video image scene of that category in a video; and so on, to obtain the machine learning model corresponding to the video image scene of each category.
Optionally, the video image scenes of different categories include:
a video image scene containing a human face;
a landscape-type video image scene.
Optionally,
the configuration file generation unit is adapted, when the specified video includes a video image scene containing a human face, to further judge, according to the size ratio of the face in it, whether to color grade the video image scene containing the face, and to write the judgment result into the configuration file.
Optionally, the device further includes:
a color grading scheme writing unit, adapted to write into the configuration file a color grading scheme corresponding to the image content of the specified video described in the configuration file.
Optionally, the device further includes:
a marking unit, adapted, when the image content of the specified video does not belong to any preset video image scene category, to mark in the configuration file of the specified video that the general color grading rule is applicable to the specified video.
Optionally, the device further includes:
a judging unit, adapted, when the image content of the specified video does not belong to any preset video image scene category, to judge whether the general color grading rule is applicable to the specified video;
the marking unit is adapted, if applicable, to mark in the configuration file of the specified video that the general color grading rule is applicable to the specified video, and, if not applicable, to mark in the configuration file of the specified video that the specified video is not suitable for any color grading.
Optionally,
the judging unit is adapted to input the histogram of the specified video into a machine learning model for recognizing videos that are unsuitable for color grading; if the machine learning model outputs a result confirming that the specified video is unsuitable for color grading, to determine that the general color grading rule is not applicable to the specified video; otherwise, to determine that the general color grading rule is applicable to the specified video.
Optionally, the device further includes:
a second machine learning model acquisition unit, adapted to obtain a certain number of videos that are unsuitable for color grading, to input the histograms of these videos as training data into a machine learning model for training, and to obtain a machine learning model for recognizing videos that are unsuitable for color grading.
According to another aspect of the present invention, an electronic device is provided, wherein the electronic device includes:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the method described above.
According to a further aspect of the present invention, a computer-readable storage medium is provided, wherein the computer-readable storage medium stores one or more programs which, when executed by a processor, implement the method described above.
According to the technical solution of the present invention, the image content of a specified video is identified; a configuration file describing the image content of the specified video is generated according to the recognition result; and when a client requests to play the video, the configuration file of the video is provided to the client, so that the specified video is color graded according to the configuration file when it is played. In this way, what the user sees when playing the video is the color-graded video. Compared with a video that has not been color graded, the graded video has more vivid colors and a better presentation, can achieve an effect that satisfies the user, better meets the user's playback needs, and enhances the user's experience.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more comprehensible, specific embodiments of the present invention are set forth below.
Description of the drawings
By reading the detailed description of the preferred embodiments below, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a schematic flowchart of a video processing method according to an embodiment of the present invention;
Fig. 2 shows a schematic structural diagram of a video processing device according to an embodiment of the present invention;
Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
Fig. 1 shows a schematic flowchart of a video processing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step S110: identifying the image content of a specified video.
Step S120: generating, according to the recognition result, a configuration file describing the image content of the specified video.
Step S130: providing the configuration file, so that the video is color graded according to the configuration file.
In this embodiment, the technical solution is described from the server side. The server can identify the image content of a specified video and then generate a configuration file describing the image content of that video. When a client sends a request to the server to play the specified video, the specified video file and the corresponding configuration file are provided to the client. The client side can first color grade the specified video according to the configuration file and then play the graded specified video. For example, if a person is recognized in the image content, the generated configuration file of the specified video includes a description stating that the image content of the specified video contains a person; before the client plays the specified video, it color grades the person in the specified video according to that description and then plays the video.
In this embodiment, color grading the specified video may mean applying beautification to the faces in the specified video, or adjusting the colors of the landscape in the specified video, so that the color saturation of the specified video or the appearance of the faces is more appealing.
In a specific example, flowers and a lawn are recognized in the image content of a specified video, and a configuration file is generated according to this recognition result. When the client sends a request to play the specified video, the configuration file is provided; according to the flowers and lawn described in the configuration file, the client color grades the colors of the flowers and the lawn so that the flowers appear more vivid and the green of the lawn is brighter, which improves the presentation of the specified video.
It can be seen that, with this embodiment, the specified video played after grading, compared with a video that has not been graded, has more vivid colors and a better presentation, can achieve an effect that satisfies the user, better meets the user's playback needs, and enhances the user's experience.
In one embodiment of the present invention, identifying the image content of the specified video in step S110 includes: presetting video image scenes of different categories; identifying the video image scene category to which the image content of the specified video belongs; and recording the start and end times of the video image scene of the corresponding category in the specified video.
In order to apply different, category-specific color grading to the specified video and thereby achieve a better presentation, in this embodiment video image scenes of different categories, such as persons and landscapes, are preset first, and it is then identified whether the scenes in the image content of the specified video belong to one or more of the preset video image scenes. After the category is identified, the start and end times of the video image scene of the corresponding category in the specified video are also recorded, so that when the client grades according to the configuration file it knows the exact time span to grade; this prevents grading from also being applied to image content that does not contain the corresponding video image scene, which would instead harm the presentation of the video. The video image scenes described in the recorded image content, together with their start and end times, are then used as the recognition result. For example, suppose the preset video image scenes include faces and sky. In a specified video, a face scene is recognized in the 0-5 s period of the video, so it is recorded that the image content of 0-5 s of the specified video contains a face scene, and this recognition result is written into the configuration file; a sky scene appears in the image content of the 3-10 s period, so it is recorded that the image content of 3-10 s of the specified video contains a sky scene, and this recognition result is likewise written into the configuration file. When the client color grades the specified video, it grades the face in the image content of 0-5 s of the video and grades the sky in the image content of 3-10 s of the video. In other words, in this embodiment the specified video can be graded in a targeted way, which further improves the efficiency of the grading.
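For concreteness only, a minimal sketch of such a configuration file, assuming a JSON layout (the field names are illustrative and not prescribed by this embodiment):

```python
import json

# Hypothetical configuration file for the example above: all field names
# ("scenes", "category", "start", "end", ...) are illustrative assumptions.
config = {
    "video_id": "example-video",
    "scenes": [
        {"category": "face", "start": 0.0, "end": 5.0},   # face scene, 0-5 s
        {"category": "sky",  "start": 3.0, "end": 10.0},  # sky scene, 3-10 s
    ],
}

with open("example-video.config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False, indent=2)
```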
Specifically, identifying the video image scene category to which the image content of the specified video belongs includes: inputting the specified video in turn into each machine learning model used to recognize a video image scene of a different category in a video, and obtaining the recognition result output by each machine learning model.
In this embodiment, the recognition of each video image scene category is performed by machine learning models, where the video image scene of each category corresponds to one machine learning model. For example, suppose there are a machine learning model for the face-type video image scene, a machine learning model for the sky-type video image scene, and a machine learning model for the flower-type video image scene. A specified video is input separately into the three machine learning models, and the result is that a face and flowers are recognized in the image content of the specified video; that is, the machine learning models of the face-type and flower-type video image scenes output a recognition result of 'present', while the machine learning model of the sky-type video image scene outputs a recognition result of 'absent'. In this way, video image scenes of different categories can be recognized accurately.
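A brief sketch of how this per-category recognition could be organized; the model interface and the frame representation below are illustrative assumptions:

```python
# Sketch: run a specified video through one recognition model per scene
# category and collect the start/end times of each recognized scene.
# The model interface (a callable returning True when its scene is present
# in a frame) and the (timestamp, frame) input format are assumptions.
from typing import Any, Callable, Dict, List, Tuple

def recognize_scenes(
    frames: List[Tuple[float, Any]],            # (timestamp in seconds, frame)
    models: Dict[str, Callable[[Any], bool]],   # category -> "scene present?"
) -> Dict[str, List[Tuple[float, float]]]:
    """Return, per category, the time spans in which that scene appears."""
    spans: Dict[str, List[Tuple[float, float]]] = {c: [] for c in models}
    open_since: Dict[str, float] = {}
    last_ts = 0.0
    for ts, frame in frames:
        last_ts = ts
        for category, model in models.items():
            present = model(frame)
            if present and category not in open_since:
                open_since[category] = ts                # span starts here
            elif not present and category in open_since:
                spans[category].append((open_since.pop(category), ts))
    for category, start in open_since.items():           # close spans still open
        spans[category].append((start, last_ts))
    return spans
```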
Further, on the basis of the above embodiment, the method shown in Fig. 1 further includes: for a video image scene of one category, obtaining videos that belong to the video image scene of that category, inputting the obtained videos as training data into a machine learning model for training, and obtaining a machine learning model for recognizing the video image scene of that category in a video; and so on, obtaining the machine learning model corresponding to the video image scene of each category.
In order to recognize video image scenes of different categories, the machine learning model for the video image scene of each category needs to be obtained. In this embodiment, for each category, videos belonging to the video image scene of that category are first obtained as video samples of that category and used for machine learning training, after which the machine learning model for the video image scene of that category is obtained. For example, for the face category, videos containing face scenes are first obtained and input into a machine learning model for training, yielding the machine learning model for the face-type video image scene; for the sky category, videos containing sky scenes are first obtained and input into a machine learning model for training, yielding the machine learning model for the sky-type video image scene.
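A minimal sketch of such per-category training, assuming simple color-statistics features and a logistic-regression classifier (both are illustrative choices; the embodiment does not prescribe a specific model type):

```python
# Sketch of training one binary scene-recognition model per category from
# labelled example frames.  The color-statistics features and the logistic
# regression classifier are assumptions; this embodiment only states that
# videos of the category are used as training data for a machine learning model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def color_feature(frame: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation of an H x W x 3 frame."""
    return np.concatenate([frame.mean(axis=(0, 1)), frame.std(axis=(0, 1))])

def train_category_model(positive_frames, negative_frames):
    """Train a model that answers: does this frame contain the category's scene?"""
    X = np.array([color_feature(f) for f in positive_frames + negative_frames])
    y = np.array([1] * len(positive_frames) + [0] * len(negative_frames))
    return LogisticRegression(max_iter=1000).fit(X, y)
```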
In one embodiment of the present invention, the above video image scenes of different categories include: a video image scene containing a human face; and a landscape-type video image scene.
In this embodiment, the preferred video image scenes are the video image scene containing a human face and the landscape-type video image scene. That is, preferably, it is identified whether the image content of the specified video contains a face and/or contains a landscape scene. When the image content of the specified video contains a face, the grading can focus on the face region, for example applying beautification to the face (skin smoothing, whitening and the like), so that the faces of the persons in the video look better when it is played. When the image content of the specified video contains a landscape, the grading can focus on the landscape region; landscape-type video image scenes here may include several kinds, such as sky, lawn and flowers. For example, when grading, the sky can be made bluer, the lawn greener and the flowers redder, so that the specified video presents a better effect when it is played.
Further, on the basis of the above embodiment, generating, according to the recognition result, the configuration file describing the image content of the specified video in step S120 further includes: when the specified video includes a video image scene containing a human face, further judging, according to the size ratio of the face in it, whether to color grade the video image scene containing the face, and writing the judgment result into the configuration file.
In this embodiment, various situations may arise when the specified video contains a face scene. If the faces in the video are numerous or small, so that the size ratio is small, face grading would contribute little to the presentation of the specified video even if applied; instead it would increase the time needed to load and play the video and lower the user's experience, so no grading is needed in that case. If the face contained in the specified video takes up a large proportion of the picture, grading the face can significantly improve the presentation of the played video, and the face should then be graded. In other words, grading is not applied to every video that contains a face; it is adapted to the situation, which further improves the efficiency of the grading. For example, in a live-streaming video the host is usually the only person shown and the face occupies a large proportion of the picture, so it is judged that face grading is needed and this judgment is written into the configuration file, and the client applies face grading according to the configuration file. In contrast, a video of a ball game contains many faces and each face is very small, so no grading is needed; the judgment that no grading is required is written into the configuration file, and when the client grades the video it does not grade the faces in the specified video.
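A sketch of the face-size judgment, assuming OpenCV face detection and a hypothetical 5% area threshold (this embodiment only says the decision depends on the size ratio of the face):

```python
# Sketch: decide whether a frame's face scene should be color graded based on
# the proportion of the picture occupied by detected faces.  The OpenCV Haar
# cascade detector and the 5 % area threshold are illustrative assumptions.
import cv2

FACE_AREA_THRESHOLD = 0.05  # hypothetical minimum face-to-frame area ratio

def should_grade_faces(frame_bgr) -> bool:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    frame_area = frame_bgr.shape[0] * frame_bgr.shape[1]
    face_area = sum(int(w) * int(h) for (_, _, w, h) in faces)
    return face_area / frame_area >= FACE_AREA_THRESHOLD
```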
In one embodiment of the present invention, the method shown in Fig. 1 further includes:
writing into the configuration file a color grading scheme corresponding to the image content of the specified video described in the configuration file.
In actual use, after the server provides the configuration file, the client can perform color grading according to it, and the client can choose the concrete grading to apply according to its own grading scheme. In this embodiment, the server writes the corresponding color grading scheme into the configuration file, so that the client can carry out the specific grading according to the scheme in the configuration file without having to choose a grading scheme itself, which reduces the processing steps of the grading procedure on the client side.
For example, the server's recognition result is that 0-3 s of the specified video contains a face scene; the client can then apply face grading to the video images of 0-3 s of the specified video, and when processing the face it may choose whitening, skin smoothing and similar operations. If the server presets, as the grading scheme, that the face scene in 0-3 s is to be whitened and writes this into the configuration file, the client applies only face whitening to the video images of 0-3 s of the specified video according to the configuration file. As another example, if the configuration file of a specified video states that, when grading a flower-type video image scene, the saturation of the flower colors is to be increased by 20%, then when the client grades the specified video it increases the saturation of the colors of the flower regions by 20%.
In one embodiment of the present invention, the method shown in Fig. 1 further includes:
when the image content of the specified video does not belong to any preset video image scene category, marking in the configuration file of the specified video that the general color grading rule is applicable to the specified video.
In the above embodiments, the recognition of the video image scene category of the image content of the specified video is based on the machine learning models of the different scene categories. Inevitably, the machine learning models have to be accumulated over time and cannot cover video image scenes of every category. Therefore, in this embodiment a general color grading rule is also set; when the image content of the specified video does not belong to any preset video image scene category, the general color grading rule can be used for grading. This ensures that the grading of specified videos is more complete, rather than grading only videos containing scenes of the preset categories, and further improves the user's experience.
In this embodiment, when the specified video satisfies the general color grading rule, the client side can grade the specified video with a general video color grading model. The general video color grading model here may reside on the client side or be carried in the configuration file. Specifically, a machine learning method is used: existing video images with a certain color scheme are used as training data to generate the corresponding general video color grading model, and this model is then used to grade the videos to which the general color grading rule applies. For example, several films whose color grading best meets the requirements are selected and their color schemes are learned to generate the corresponding general video color grading model; after a video is input into the general video color grading model, its tones will likewise take on the tones of the chosen films.
Further, on the basis of the above embodiment, the method shown in Fig. 1 further includes: when the image content of the specified video does not belong to any preset video image scene category, judging whether the general color grading rule is applicable to the specified video; if applicable, marking in the configuration file of the specified video that the general color grading rule is applicable to the specified video; if not applicable, marking in the configuration file of the specified video that the specified video is not suitable for any color grading.
In practical applications, the general color grading rule can be used to grade a video so that its colors are more vivid. However, some videos are not suitable for any grading. For example, in a stage recording of an Errenzhuan (song-and-dance duet) performance, the costumes of the performers are already quite colorful and the stage curtain is also rather bright; applying grading on top of that can easily cause overexposure and actually make the presentation worse, for instance the buttons on the clothing can no longer be displayed normally. Therefore, in this embodiment, when the image content of the specified video does not belong to any preset video image scene category, it is first judged whether the general color grading rule is applicable to the specified video, and only when it is applicable is it marked in the configuration file of the specified video that the general color grading rule applies. This prevents the graded video from presenting a worse effect and avoids grading operations that do not help improve the presentation of the specified video.
Specifically, judging whether the general color grading rule is applicable to the specified video includes: inputting the histogram of the specified video into a machine learning model for recognizing videos that are unsuitable for color grading; if the machine learning model outputs a result confirming that the specified video is unsuitable for grading, determining that the general color grading rule is not applicable to the specified video; otherwise, determining that the general color grading rule is applicable to the specified video.
In this embodiment, the histogram of a video is a graph describing the exposure characteristics or color characteristics of the images in the video. For example, a color histogram describes the proportion of the whole image occupied by each different color.
In this embodiment, a machine learning model is used to judge whether the general color grading rule applies to the specified video. Specifically, the histogram of the specified video is input into the machine learning model for recognizing videos that are unsuitable for grading, and whether the general color grading rule applies to the specified video is determined according to the result output by the model.
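A sketch of this histogram-based judgment, with an assumed histogram layout and classifier interface:

```python
# Sketch: represent a specified video by an averaged color histogram and ask a
# trained classifier whether the video is unsuitable for color grading.  The
# 32-bin-per-channel histogram and the label convention (1 = unsuitable) are
# assumptions made for illustration.
import numpy as np

def video_histogram(frames, bins=32):
    """Average normalized per-channel histogram over the sampled frames."""
    hist = np.zeros(3 * bins)
    for frame in frames:                       # frame: H x W x 3, uint8
        parts = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        hist += np.concatenate(parts) / frame.size
    return hist / max(len(frames), 1)

def general_rule_applies(frames, unsuitable_model) -> bool:
    """True if the general color grading rule may be applied to the video."""
    feature = video_histogram(frames).reshape(1, -1)
    return int(unsuitable_model.predict(feature)[0]) != 1
```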
Specifically, the above method further includes: obtaining a certain number of videos that are unsuitable for grading, and inputting the histograms of these videos that are unsuitable for grading as training data into a machine learning model for training, to obtain the machine learning model for recognizing videos that are unsuitable for grading.
In order to use a machine learning model to judge whether the general color grading rule applies to the specified video, this machine learning model has to be obtained. Because the model is used to recognize videos that are unsuitable for grading, the histograms of videos that are unsuitable for grading are acquired when obtaining the video samples for training. Then, when the model for recognizing videos that are unsuitable for grading outputs 'yes', the specified video is unsuitable for grading, that is, the general color grading rule does not apply; when the output is 'no', the specified video is suitable for grading, that is, the general color grading rule applies.
Further, the above method further includes: obtaining a certain number of videos that are confirmed to be suitable for grading, and inputting these videos into the machine learning model for recognizing videos that are unsuitable for grading, in order to verify the machine learning model.
In this embodiment, in order to verify the accuracy of the machine learning model for recognizing videos that are unsuitable for grading obtained in the above embodiment, a certain number of videos known to be suitable for grading can be obtained and input into that model. Because the correct recognition result for the input videos is known, the machine learning model is accurate if its output is 'no', whereas if its output is 'yes' its accuracy still needs to be improved. Verifying the model with videos confirmed to be suitable for grading ensures the accuracy of its recognition results.
In a specific example, a specified video is input into the machine learning models of the video image scenes of each category, and the video image scene category to which the image content of the specified video belongs is identified; after the category is identified, the configuration file is generated according to the recognition result. When the image content of the specified video does not belong to any preset video image scene category, the specified video is input into the machine learning model for recognizing videos that are unsuitable for grading to judge whether the general color grading rule applies; if the output is 'applicable', it is marked in the configuration file of the specified video that the general color grading rule applies to it; if the output is 'not applicable', it is marked in the configuration file of the specified video that the specified video is not suitable for any grading.
In one embodiment of the invention, providing the configuration file in step S130 includes: when a request message sent by an intelligent terminal requesting the configuration file of the specified video is received, sending the configuration file to the intelligent terminal, or sending the download address of the configuration file to the intelligent terminal.
In this technical solution, the server side can provide the configuration file of the specified video, while the step of actually grading the video is carried out on the client side; therefore, the server needs to provide the configuration file to the client. In particular, when the server receives the request message sent by the intelligent terminal requesting the configuration file of the specified video, it sends the configuration file to the intelligent terminal, or sends the download address of the configuration file to the intelligent terminal, so that the intelligent terminal can download the configuration file of the specified video from the download address.
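A minimal sketch of this request handling, assuming an HTTP interface built with Flask (the URL layout and parameter names are hypothetical):

```python
# Minimal sketch of the server side handing out a video's configuration file
# (or its download address) when an intelligent terminal requests it.  Flask
# and the URL layout are illustrative assumptions and not part of this solution.
import json
from flask import Flask, jsonify, request

app = Flask(__name__)
CONFIG_DIR = "configs"  # hypothetical directory holding generated config files

@app.route("/video/<video_id>/config")
def get_config(video_id):
    if request.args.get("as_link") == "1":
        # return a download address instead of the file body
        return jsonify({"download_url": f"/static/{video_id}.config.json"})
    with open(f"{CONFIG_DIR}/{video_id}.config.json", encoding="utf-8") as f:
        return jsonify(json.load(f))

if __name__ == "__main__":
    app.run()
```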
The present invention also provides a video processing method, described from the client side. The method includes:
Step S21: obtaining the configuration file of the specified video to be played, the configuration file describing the image content of the specified video.
In this embodiment, when a specified video is to be played, a playback request can be sent to the server. When the specified video is obtained, the configuration file of the specified video also needs to be obtained in order to color grade it. The configuration file here is provided by the server side and describes the image content of the specified video, for example the video image scenes in the image content of the specified video (such as a video image scene containing a face or a landscape-type video image scene) and the start and end times of those video image scenes in the video.
Step S22: when playing the specified video, color grading the specified video according to the configuration file, so as to play the graded video.
In this embodiment, when the specified video is played, it is first graded according to the configuration file and the graded video is then played; that is, the video perceived by the user has been color graded. The grading is carried out according to the configuration file of the specified video. For example, following the example above, the configuration file records that the image content of the specified video contains a face scene, together with the start and end times of the image content containing the face; when grading, beautification is applied to the corresponding faces in the specified video according to that description in the configuration file.
It can be seen that, with this embodiment, the video is color graded; compared with a video that has not been graded, the graded video has more vivid colors, and playing the graded video can achieve an effect that satisfies the user, better meets the user's playback needs, and enhances the user's experience.
As described above, a specified video may contain different video image scenes, and when grading, the image content containing the different video image scenes can be graded according to preset grading rules. This is explained below by means of preferred embodiments.
In one embodiment of the present invention, color grading the specified video according to the configuration file in step S22 includes: when the configuration file describes that the specified video contains scenery-type image content and its start and end times, increasing the saturation of the three colors red, green and blue in the scenery-type image content of the specified video by a preset value.
In this embodiment, the scenery-type image content in the specified video is graded; specifically, when grading the scenery-type image content, the saturation of the three colors red, green and blue in the image content is adjusted.
For example, the scenery-type image content in a specified video contains flowers, a lawn and a blue sky, and according to the description in the configuration file the start and end times of the scenery-type image content are 10 s-25 s; when grading, the saturation of red, green and blue in the image content of 10 s-25 s of the video is increased by 20%, so that the colors of the flowers, the lawn and the blue sky in the video are more beautiful, which improves the presentation of the video.
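A sketch of such a saturation boost, interpreting the adjustment as a scaling of the HSV saturation channel (an assumption made for illustration):

```python
# Sketch: raise the saturation of a scenery segment by a preset percentage.
# Interpreting "increase the saturation of red, green and blue" as scaling the
# HSV saturation channel is an assumption made for illustration.
import cv2
import numpy as np

def boost_saturation(frame_bgr: np.ndarray, percent: float = 20.0) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * (1.0 + percent / 100.0), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

In the example above, such a function would be applied only to frames whose timestamps fall within the 10 s-25 s span recorded in the configuration file.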
In one embodiment of the present invention, color grading the specified video according to the configuration file in step S22 includes: when the configuration file describes that the specified video contains image content containing a face and its start and end times, applying face beautification to the image content containing the face in the specified video.
In this embodiment, the image content containing a face in the specified video is graded; specifically, face beautification, such as skin smoothing or whitening, is applied to the face in the image content containing the face, or the face is processed with a preset one-touch face beautification operation.
For example, the image content of a specified video contains a face, and according to the description in the configuration file the start and end times of the image content containing the face are 0 s-20 s; when grading, the face in the image content of 0 s-20 s of the video is whitened, so that the faces in the video present a better effect.
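A sketch of a simple beautification pass over a face region, using bilateral filtering for skin smoothing and a small brightness gain for whitening (illustrative choices only, not a prescribed scheme):

```python
# Sketch of a simple beautification pass (skin smoothing plus mild whitening)
# applied to a face rectangle inside the time span that the configuration file
# marks as containing a face.  The bilateral-filter smoothing and the fixed
# brightness gain are illustrative choices.
import cv2
import numpy as np

def beautify_face_region(frame_bgr, x, y, w, h, whiten=1.08):
    face = frame_bgr[y:y + h, x:x + w]
    smooth = cv2.bilateralFilter(face, d=9, sigmaColor=75, sigmaSpace=75)
    whitened = np.clip(smooth.astype(np.float32) * whiten, 0, 255).astype(np.uint8)
    out = frame_bgr.copy()
    out[y:y + h, x:x + w] = whitened
    return out
```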
In the above embodiments, the configuration file records the start and end times of the image content of the corresponding category in the specified video, so that when grading according to the configuration file the client obtains the exact time span to grade; this prevents grading from also being applied to the image content of periods without the corresponding video image scene, which would instead harm the presentation of the video.
In one embodiment of the present invention, color grading the specified video according to the configuration file in step S22 includes: when the configuration file describes that the general video processing rule is applicable to the specified video, inputting the specified video into a general video color grading model for grading.
The general video color grading model in this embodiment can be generated by a machine learning method, using existing video images with a certain color scheme as training data; this general video color grading model is then used to grade the videos to which, according to the description in the configuration file, the general color grading rule applies.
The video color grading model here may be issued to the client side by the server, or it may be carried in the configuration file. Specifically, a machine learning method is used: existing video images with a certain color scheme are used as training data to generate the corresponding general video color grading model, which is used to grade the videos to which the general color grading rule applies. For example, several films whose color grading meets the requirements are selected and their color schemes are learned to generate the corresponding general video color grading model; after a video is input into the general video color grading model, its tones will likewise take on the tones of the chosen films.
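As a stand-in illustration of such a model, the sketch below learns a per-channel reference distribution from well graded reference frames and applies it by histogram matching; this concrete technique is an assumption, since the embodiment only states that the model is obtained by machine learning from films whose color schemes meet the requirements:

```python
# Stand-in sketch for the "general video color grading model": a per-channel
# reference distribution is learned from well graded reference frames and then
# applied by histogram matching.
import numpy as np

def learn_reference_cdf(reference_frames):
    """Per-channel cumulative distribution of pixel values in reference frames."""
    cdfs = []
    for c in range(3):
        hist = np.zeros(256)
        for f in reference_frames:
            hist += np.bincount(f[..., c].ravel(), minlength=256)
        cdfs.append(np.cumsum(hist) / hist.sum())
    return cdfs

def apply_reference_look(frame, cdfs):
    """Map each channel of `frame` so its distribution follows the reference."""
    out = np.empty_like(frame)
    for c in range(3):
        hist = np.bincount(frame[..., c].ravel(), minlength=256)
        src_cdf = np.cumsum(hist) / hist.sum()
        mapping = np.searchsorted(cdfs[c], src_cdf).clip(0, 255).astype(np.uint8)
        out[..., c] = mapping[frame[..., c]]
    return out
```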
The above embodiments of grading the specified video according to the configuration file can be combined. That is, when processing the specified video, after the corresponding grading has been applied to the image content according to the configuration file, the graded video can also be input into the general video color grading model for further grading.
In one embodiment of the present invention, when the configuration file describes that the specified video contains scenery-type image content and its start and end times, the saturation of the three colors red, green and blue in the scenery-type image content of the specified video is increased by a preset value; and when the configuration file describes that the specified video contains image content containing a face and its start and end times, face beautification is applied to the image content containing the face in the specified video; then the specified video is input into the general video color grading model for grading.
In this embodiment, the configuration file describes the image content of the entire specified video, which may contain scenery-type image content and may also contain image content with faces; in that case both kinds of grading are required.
It should be noted that the time spans of the image content containing scenery and of the image content containing faces in the video may overlap. For example, the start and end times of the scenery-type image content are 10 s-25 s and those of the face image content are 0 s-20 s; that is, the image content of 10 s-20 s of the specified video contains both scenery and a face, so the image content of that period receives both the scenery-type grading and the face beautification.
In this embodiment, after the different kinds of grading have been applied to the image content of the specified video according to the configuration file, the graded specified video is further input into the general video color grading model for another round of grading; that is, the specified video is graded twice, which can further improve its presentation.
In one embodiment of the present invention, the above method further includes: obtaining a certain number of videos whose grading results are confirmed to meet a preset standard, and inputting these videos as training data into a color grading machine learning model for training, to obtain the general video color grading model.
As pointed out above, the general video color grading model may be issued by the server side, carried in the configuration file, or generated by the client. In this embodiment, the client obtains the general video color grading model. Specifically, using a machine learning method, a certain number of existing videos whose grading results are confirmed to meet a preset standard are used as training data to generate the corresponding general video color grading model, which is then used to grade the videos to which the general color grading rule applies. For example, a certain number of films whose grading results are confirmed to meet the preset standard are selected and their color schemes are learned to generate the corresponding general video color grading model; after a video is input into the general video color grading model, its tones will likewise take on the tones of the chosen films.
In one embodiment of the present invention, before color grading the specified video according to the configuration file in step S22, the above method further includes: detecting parameter information of the intelligent terminal; judging, according to the detected parameter information of the intelligent terminal, whether to grade the specified video; if the judgment is yes, performing the step of grading the specified video according to the configuration file; if the judgment is no, not grading the specified video.
Although grading the specified video according to the configuration file can make its colors more vivid, improve its presentation and meet the user's playback needs, a video with vivid colors is not suitable for every playback scenario; in other words, some scenarios are suitable for grading and some are not. For example, when the contrast of the intelligent terminal itself is already high, or the user's current state is not suited to watching a video with very high color saturation, or the environment is rather dim and not suited to watching a high-contrast video, grading the specified video would be counterproductive, cause the user some annoyance and lower the user's experience. As another example, if the user is moving at high speed, or the contrast of the intelligent terminal itself is low, an adjustment is needed.
In this embodiment, before the specified video is graded, the parameter information of the intelligent terminal is first obtained to judge whether the video is suitable for grading; this can exclude the situations that are unsuitable for grading and also identify the situations that are suitable for grading.
Specifically, the parameter information of the intelligent terminal includes one or more of the following: time information; display mode; posture information.
In this embodiment, the time information can reflect the period of the day in which the intelligent terminal currently is, for example daytime or night. During the day, grading the specified video can improve its presentation; but if grading is applied at night, the higher saturation of the specified video may make the user's eyes uncomfortable while browsing the video and lower the user's experience. Therefore, in this embodiment the judgment is made on the basis of the time information of the intelligent terminal. For example, if the obtained current system time of the intelligent terminal is 9:30, it is judged that the specified video can be graded and the graded picture is played; if the obtained current system time of the intelligent terminal is 22:00, it is judged that the specified video is not to be graded and the original video is played.
In this example, the display mode can describe the current display effect of the intelligent terminal, for example the contrast in the display mode. If the contrast is already high, grading the specified video would multiply its contrast, and with excessive contrast the presentation of the specified video can be distorted, which instead harms its effect. Therefore, in this embodiment the display mode of the intelligent terminal is obtained for the judgment. For example, when the obtained contrast of the intelligent terminal exceeds a first preset value, it is judged that the specified video is not to be graded; conversely, when the obtained contrast of the intelligent terminal is below a second preset value, it is judged that the specified video can be graded.
The display mode here can also be brightness, a cold-light or warm-light mode, and so on.
In this embodiment, the posture information of the intelligent terminal can directly reflect the user's posture. If the user is lying down, the user may be resting; grading the specified video at this time may put some strain on the user's eyes or raise the user's excitement, which is unfavorable to the user's current resting state. Therefore, in this embodiment, whether to grade the specified video is judged on the basis of the posture information of the intelligent terminal. The posture information of the intelligent terminal here can be obtained by attitude sensors in the intelligent terminal, such as a gyroscope or an accelerometer.
For example, if it is detected through the sensors that the user is currently lying down, it can be judged that the specified video is not to be graded; alternatively, if it is detected through the sensors that the user is moving at high speed, the user may be on a train or in a car and the video is prone to blurring, so it is judged that the specified video can be graded at this time.
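A sketch of such a pre-grading check, with assumed thresholds and parameter names drawn from the examples above:

```python
# Sketch: client-side check of the intelligent terminal's parameters before
# grading.  The thresholds and the shape of the parameter dictionary (time,
# contrast, posture) are assumptions made for illustration.
from datetime import datetime

def should_apply_grading(params: dict) -> bool:
    hour = params.get("time", datetime.now()).hour
    if hour >= 22 or hour < 6:                  # late at night: skip grading
        return False
    if params.get("contrast", 0.5) > 0.8:       # display contrast already high
        return False
    if params.get("posture") == "lying":        # user is likely resting
        return False
    return True                                 # otherwise grade, e.g. during high-speed motion
```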
Fig. 2 shows the structural schematic diagrams of the device of video according to an embodiment of the invention processing.As shown in Fig. 2, The video process apparatus 200 includes:
Recognition unit 210 is suitable for identifying the picture material of designated.
Configuration file generation unit 220 is suitable for generating the configuration of the picture material of description designated according to recognition result File.
Unit 230 is provided, configuration file is adapted to provide for, toning processing is carried out to video according to configuration file to realize.
In the present embodiment, it is to be carried out from server side (server side includes video process apparatus) to the technical program Illustrate, the picture material of designated can be identified in server, then generate the picture material for describing the designated Configuration file, when user end to server sends the request for playing the designated, by the designated file and correspondence Configuration file be supplied to client, client-side that can carry out toning processing to designated according to configuration file first, then Will toning treated that designated plays out.For example, personage is identified from picture material, then the designated generated Configuration file in include just the description for having personage in the picture material for change designated, when client carries out the designated Before broadcasting, according to the description for having personage in configuration file, the toning that personage is carried out to the designated is handled, is then played.
In the present embodiment, toning processing is carried out to designated, can is that U.S. face is carried out to the face in designated Processing or toning processing is carried out to the color of the landscape in designated so that the saturation degree of the color in designated or The effect of facial image is more perfect.
In a specific example, identify in the picture material of designated there is fresh flower and meadow, according to the identification knot Fruit generates configuration file, when client sends the request for playing the designated, provides the configuration file, client is according to this There are fresh flower and meadow in the designated described in configuration file, will mix colours the color of fresh flower and the color on meadow Processing so that the color of fresh flower is more bright-coloured, and the green on meadow is brighter, improves the bandwagon effect of the designated.
It can be seen that, through this embodiment, the designated video is played after toning. Compared with a video that has not undergone toning processing, the color matching of the toned video is more vivid, the display effect of the video is improved, an effect satisfactory to the user can be reached, the user's playback demand can be better met, and the user's usage experience is enhanced.
In one embodiment of the invention, the recognition unit 210 is suitable for presetting video image scenes of different categories, identifying the video image scene category to which the picture material of the designated video belongs, and recording the start and end times of the video image scenes of the respective categories in the designated video.
In order to carry out different toning processing for different categories in the designated video and thereby reach a better display effect, in the present embodiment different categories of video image scenes, such as person and landscape, are preset first, and it is then identified whether the scenes in the picture material of the designated video belong to one or more of the preset video image scenes. After the category is identified, the start and end times of the video image scene of that category in the designated video are also recorded, so that when the client tones according to the configuration file it knows the specific time range to tone, preventing picture material without the corresponding video image scene from also being toned, which would instead impair the display effect of the video. The recorded video image scenes in the picture material and their start times are then used as the recognition result. For example, the preset video image scenes include faces and sky. In a designated video, a face video image scene is recognized in the 0-5s period, so it is recorded that the picture material of 0-5s of the designated video contains a face scene; this is used as a recognition result and written into the configuration file. The picture material in the 3-10s period contains a sky video image scene, so it is recorded that the picture material of 3-10s of the designated video contains a sky scene, which is likewise used as a recognition result to generate the configuration file. When the client carries out toning processing on the designated video, it tones the faces in the picture material of 0-5s and the sky in the picture material of 3-10s. That is, in the present embodiment, targeted toning can be carried out on the designated video, which can further improve the efficiency of the toning processing.
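A hypothetical configuration-file entry matching the example above (a face scene in 0-5s and a sky scene in 3-10s) might look like this; the JSON field names are illustrative assumptions, not a format defined by the patent:

```python
import json

config = {
    "video_id": "demo.mp4",
    "scenes": [
        {"category": "face", "start_s": 0, "end_s": 5},
        {"category": "sky",  "start_s": 3, "end_s": 10},
    ],
}
print(json.dumps(config, indent=2))
```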
Specifically, the recognition unit 210 is suitable for sequentially inputting the designated video into each machine learning model respectively used for identifying video image scenes of different categories in a video, and obtaining the recognition result output by each machine learning model.
In the present embodiment, the identification of each video image scene category is carried out by machine learning models. Here, the video image scene of each category corresponds to one machine learning model. For example, there are a machine learning model for the face-class video image scene, a machine learning model for the sky-class video image scene, and a machine learning model for the flower-class video image scene. A designated video is separately input into the three machine learning models above, and the result is that faces and flowers are identified in the picture material of the designated video; that is to say, the machine learning models for the face-class and flower-class video image scenes output a "present" recognition result, while the machine learning model for the sky-class video image scene outputs an "absent" recognition result. In this way, video image scenes of different categories can be identified accurately.
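The per-category recognition step could be pictured with the following minimal sketch; the stand-in lambdas simply return True/False and are not real trained models:

```python
def detect_scenes(video_path, models):
    # Run every per-category model on the same designated video and collect
    # a present/absent result for each scene category.
    return {category: model(video_path) for category, model in models.items()}

models = {
    "face":   lambda path: True,    # placeholder per-category classifiers
    "sky":    lambda path: False,
    "flower": lambda path: True,
}
print(detect_scenes("demo.mp4", models))  # {'face': True, 'sky': False, 'flower': True}
```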
Further, on the basis of the above embodiments, the device shown in Fig. 2 further comprises:
A first machine learning model acquiring unit, suitable for, for a video image scene of one category, obtaining videos whose video image scenes belong to that category, inputting the obtained videos as training data into a machine learning model for training, and obtaining the machine learning model for identifying the video image scene of that category in a video; and so on, obtaining the machine learning model corresponding to the video image scene of each category.
In order to identify video image scenes of different categories, the machine learning model for each category of video image scene needs to be obtained. In the present embodiment, for the video image scene of each category, videos containing the video image scene of that category are first obtained as video samples of that category and used for machine learning training, yielding the machine learning model of the video image scene of that category. For example, for the face-category video image scene, videos with face video image scenes are first obtained and input into a machine learning model for training, obtaining the machine learning model of the face-category video image scene; for the sky-category video image scene, videos with sky video image scenes are first obtained and input into a machine learning model for training, obtaining the machine learning model of the sky-category video image scene.
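One way to picture this per-category training loop is the sketch below, which uses scikit-learn logistic regression on made-up feature vectors purely as a stand-in for whatever model and features an implementation would actually use:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_category_model(positive_feats, negative_feats):
    # Fit one binary classifier: "this category's scene is present / absent".
    X = np.vstack([positive_feats, negative_feats])
    y = np.array([1] * len(positive_feats) + [0] * len(negative_feats))
    return LogisticRegression(max_iter=200).fit(X, y)

rng = np.random.default_rng(0)
models = {}
for category in ["face", "sky"]:                 # repeat per preset category
    pos = rng.normal(1.0, 1.0, size=(50, 16))    # fake features of videos with the scene
    neg = rng.normal(0.0, 1.0, size=(50, 16))    # fake features of videos without it
    models[category] = train_category_model(pos, neg)
```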
In one embodiment of the invention, the above different categories of video image scenes include: a video image scene containing faces; a landscape-class video image scene.
In this embodiment, preferably, the video image scenes include a video image scene containing faces and a landscape-class video image scene. That is, preferably, it is identified whether the picture material of the designated video contains faces and/or contains landscape. When the picture material of the designated video contains faces, the toning can focus on the face regions, for example carrying out beautification processing (skin smoothing, whitening, etc.) on the faces, so that the faces of the persons in the played video look better. When the picture material of the designated video contains landscape, the toning can focus on the landscape regions; the landscape-class video image scenes here may include many kinds, such as sky, meadow and fresh flowers. For example, when toning, the sky can be made bluer, the meadow greener and the flowers redder, so that the display effect is better when the designated video is played.
Further, on the basis of the above embodiments, the configuration file generation unit 220 is suitable for, when the designated video includes a video image scene containing faces, further judging, according to the size ratio of the faces therein, whether toning processing is to be carried out on the video image scene containing faces, and writing the judgment result into the configuration file.
In the present embodiment, when the designated video includes a video image scene containing faces, various situations may occur. If the faces in the video are numerous or small and their size ratio is small, even carrying out face toning processing will contribute little to improving the display effect of the designated video; instead, toning processing will increase the loading and playback time of the video and reduce the user's usage experience, so in this case toning processing is not needed. If the proportion of the faces included in the designated video is large, toning the faces can significantly improve the display effect of the video playback, so the faces need to be toned. Therefore, when carrying out toning processing, not all videos containing faces are toned; the toning can be adapted to different situations, which further improves the efficiency of the toning processing. For example, in the video of a live broadcast by an anchor, only the anchor's face is included and its proportion of the video image is large, so face toning processing is needed; that is, it is judged that toning processing is to be carried out, the judgment result is written into the configuration file, and the client can carry out face toning processing according to the configuration file. For another example, in a video of a ball game, many faces are included and each face is very small, so toning processing is not needed; the judgment result that toning processing is not to be carried out is written into the configuration file, and the client will not tone the faces in the designated video when it performs toning.
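A rough sketch of that size-ratio judgment, assuming face boxes produced by some detector and an illustrative 5% area threshold (the patent does not specify a value):

```python
def should_tone_faces(face_boxes, frame_w, frame_h, min_ratio=0.05):
    # Tone faces only if the largest face covers a big enough share of the frame.
    frame_area = float(frame_w * frame_h)
    largest = max((w * h for (x, y, w, h) in face_boxes), default=0)
    return (largest / frame_area) >= min_ratio

# A single anchor filling much of a 1280x720 frame -> tone the faces.
print(should_tone_faces([(300, 100, 400, 500)], 1280, 720))   # True
# Many tiny faces in a stadium shot -> skip face toning.
print(should_tone_faces([(10, 10, 20, 20)] * 30, 1280, 720))  # False
```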
In one embodiment of the invention, the device shown in Fig. 2 further comprises:
A toning processing scheme writing unit, suitable for writing into the configuration file a toning processing scheme corresponding to the picture material of the designated video described in the configuration file.
In actual use, after the server provides the configuration file, the client can carry out toning processing according to the configuration file, and the client can choose how to tone specifically according to its own toning processing scheme. In the present embodiment, the server writes the corresponding toning processing scheme into the configuration file, so the client can carry out the specific toning processing according to the toning processing scheme in the configuration file without selecting a toning processing scheme itself, which can reduce the processing steps of the toning procedure on the client side.
For example, the recognition result of the server is that the 0-3s of the designated video includes a video image scene containing faces; the client can then carry out face toning processing on the video image of the 0-3s of the designated video, choosing whitening, skin smoothing and other processing for the faces. If the server presets the toning processing scheme for the 0-3s face video image scene as whitening the faces and writes it into the configuration file, then the client, according to the configuration file, only carries out face whitening on the video image of the 0-3s of the designated video. For another example, if it is written in the configuration file of the designated video that, when toning the flower-class video image scene, the saturation of the flower colors is increased by 20%, then the client, when toning the designated video, increases the saturation of the colors of the flower regions by 20%.
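Put together, the configuration file could carry the scheme alongside each scene entry, as in this hypothetical layout echoing the examples above (whitening for the 0-3s faces, +20% saturation for the flowers); the field names and the flower time range are assumptions:

```python
import json

config = {
    "video_id": "demo.mp4",
    "scenes": [
        {"category": "face",   "start_s": 0, "end_s": 3,
         "scheme": {"op": "whiten"}},
        {"category": "flower", "start_s": 5, "end_s": 12,
         "scheme": {"op": "saturation", "delta": 0.20}},
    ],
}
print(json.dumps(config, indent=2))
```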
In one embodiment of the invention, the device shown in Fig. 2 further comprises:
A marking unit, suitable for, when the picture material of the designated video does not belong to any preset video image scene category, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule.
In the above embodiments, the identification of the video image scene category of the picture material of the designated video is carried out according to the machine learning models of the different categories of video image scenes. Inevitably, the machine learning models need to be accumulated continuously and cannot cover the video image scenes of all categories. Therefore, in the present embodiment, a toning processing general rule is also set; when the picture material of the designated video does not belong to any preset video image scene category, the toning processing general rule can be used to carry out toning processing. This ensures that the toning processing of designated videos is more complete, rather than only toning videos containing video image scenes of the preset categories, further improving the user's usage experience.
In the present embodiment, when the designated video meets the toning processing general rule, the client side can tone the designated video with a generic video toning processing model. The video toning processing model here may reside on the client side or be carried in the configuration file. Specifically, with a machine learning method, existing video images with a certain color-matching scheme are used as training data to generate the corresponding generic video toning processing model, and this generic video toning processing model is used to carry out toning processing on videos to which the toning processing general rule applies. For example, several films whose color matching meets the requirements are selected, their color-matching method is learned, and the corresponding generic video toning processing model is generated; after a video is input into the generic video toning processing model, its tones will also present tones consistent with the selected films.
Further, on the basis of the above embodiments, the device shown in Fig. 2 further comprises:
A judging unit, suitable for, when the picture material of the designated video does not belong to any preset video image scene category, judging whether the designated video is applicable to the toning processing general rule.
The above marking unit is then suitable for, if applicable, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule; and if not applicable, marking in the configuration file of the designated video that the designated video is not suitable for any toning processing.
In practical applications, toning processing of a video can be carried out using the toning processing general rule, so that the color matching in the video is more vivid. But there are also videos that are not suitable for any toning processing. For example, in a formal stage drama of the errenzhuan (song-and-dance duet) art form, the performers' costumes are already fancy and the curtain colors are bright; further toning processing would easily cause overexposure and instead make the display effect of the video worse, for example making the buttons on the costumes impossible to display normally. Therefore, in the present embodiment, when the picture material of the designated video does not belong to any preset video image scene category, it is also necessary to first judge whether the designated video is applicable to the toning processing general rule; only when it is applicable is the designated video marked as applicable to the toning processing general rule in its configuration file. This prevents the display effect of the toned video from becoming worse and avoids toning operations that do not improve the display effect of the designated video.
Specifically, the above judging unit is suitable for inputting the histogram of the designated video into a machine learning model for identifying videos not suitable for toning; if the machine learning model outputs a result confirming that the designated video is not suitable for toning, it is determined that the toning processing general rule is not applicable to the designated video; conversely, it is determined that the designated video is applicable to the toning processing general rule.
In the present embodiment, the histogram of a video is a graph describing the exposure characteristics or color characteristics of the images in the video. For example, a color histogram describes the proportions of the different colors in the whole image.
In the present embodiment, a machine learning model is used to judge whether the designated video is applicable to the toning processing general rule. Specifically, the histogram of the designated video is input into the machine learning model for identifying videos not suitable for toning, and whether the designated video is suitable for the toning processing general rule is determined according to the result output by the model.
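The histogram-based check could be sketched as follows; the colour_histogram helper and the toy decision rule are stand-ins for the trained model the embodiment refers to:

```python
import numpy as np

def colour_histogram(frames, bins=16):
    # frames: list of HxWx3 uint8 arrays; returns one normalised histogram
    # concatenating the per-channel histograms.
    hist = np.zeros(bins * 3)
    for f in frames:
        for c in range(3):
            counts, _ = np.histogram(f[..., c], bins=bins, range=(0, 255))
            hist[c * bins:(c + 1) * bins] += counts
    return hist / hist.sum()

def unsuitable_for_toning(frames, classifier):
    # classifier: a stand-in for the trained model; True means "do not tone".
    return bool(classifier(colour_histogram(frames)))

frames = [np.full((4, 4, 3), 240, dtype=np.uint8)]      # a very bright clip
over_concentrated = lambda hist: hist.max() > 0.3        # toy decision rule
print(unsuitable_for_toning(frames, over_concentrated))  # True
```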
Specifically, on the basis of the above embodiments, the device shown in Fig. 2 further comprises:
A second machine learning model acquiring unit, suitable for obtaining a certain number of videos not suitable for toning, inputting the histograms of these videos not suitable for toning as training data into a machine learning model for training, and obtaining the machine learning model for identifying videos not suitable for toning.
In order to use a machine learning model to judge whether the designated video is applicable to the toning processing general rule, this machine learning model needs to be obtained. Because this machine learning model is used for identifying videos not suitable for toning, the histograms of videos not suitable for toning are obtained as its training samples. Then, when the output of the machine learning model for identifying videos not suitable for toning is "yes", the designated video is not suitable for toning, that is, the toning processing general rule is not applicable; when the output is "no", the designated video is suitable for toning, that is, the toning processing general rule is applicable.
Further, the second machine learning model acquiring unit is suitable for obtaining a certain number of videos confirmed to be suitable for toning, and inputting these videos suitable for toning into the machine learning model for identifying videos not suitable for toning, so as to verify the machine learning model.
In the present embodiment, in order to verify the accuracy of the machine learning model for identifying videos not suitable for toning obtained in the above embodiment, a certain number of videos known to be suitable for toning can be obtained and input into the obtained machine learning model. Because the correct recognition results of the input videos are known, if the result output by the machine learning model is "no", the machine learning model is accurate; if the output result is "yes", the accuracy of the machine learning model needs to be further improved. Verifying with videos confirmed to be suitable for toning can ensure the accuracy of the recognition results of the machine learning model.
In one embodiment of the invention, the providing unit 230 is suitable for, when receiving a request message sent by an intelligent terminal requesting the configuration file of the designated video, sending the configuration file to the intelligent terminal, or sending the download address of the configuration file to the intelligent terminal.
In the technical scheme, the providing unit can provide the configuration file of the designated video, while the step of specifically carrying out toning processing on the video is performed on the client side. Therefore, the server needs to provide the configuration file to the client. Specifically, when the server receives a request message sent by the intelligent terminal requesting the configuration file of the designated video, it sends the configuration file to the intelligent terminal, or sends the download address of the configuration file to the intelligent terminal, so that the intelligent terminal downloads the configuration file of the designated video according to the download address.
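A minimal sketch of the providing unit's behaviour, with a hypothetical in-memory store and download host standing in for real storage and URLs:

```python
import json

BASE_URL = "https://example.com/configs"   # hypothetical download host
CONFIG_STORE = {
    "demo.mp4": {"scenes": [{"category": "face", "start_s": 0, "end_s": 5}]},
}

def provide_config(video_id, inline=True):
    # On a request for the designated video's configuration file, either
    # return the file body itself or just its download address.
    if inline:
        return {"config": json.dumps(CONFIG_STORE[video_id])}
    return {"config_url": f"{BASE_URL}/{video_id}.json"}

print(provide_config("demo.mp4"))
print(provide_config("demo.mp4", inline=False))
```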
The present invention also provides a video processing device, which includes:
An acquiring unit, suitable for obtaining the configuration file of a designated video to be played; the configuration file describes the picture material of the designated video.
In the present embodiment, when playing a designated video, a request to play the designated video can be sent to the server. When the designated video is obtained, in order to carry out toning processing on it, the configuration file of the designated video needs to be obtained. The configuration file here is provided by the server side and describes the picture material of the designated video, for example, the video image scenes in the picture material of the designated video (such as a video image scene containing faces or a landscape-class video image scene) and the start times of those video image scenes in the video.
A toning processing unit, suitable for carrying out toning processing on the designated video according to the configuration file when playing the designated video, so as to play the toned video.
In the present embodiment, when playing a designated video, toning processing is first carried out according to the configuration file, and then the toned video is played; that is to say, the video perceived by the user is the one after toning processing. The toning is performed according to the configuration file of the designated video. For example, following the example above, the configuration file records the face video image scene in the picture material of the designated video and the start time of the picture material containing faces; when toning, according to that description in the configuration file, beautification toning processing is carried out on the corresponding faces in the designated video.
It can be seen that, through this embodiment, toning processing is carried out on the video. Compared with a video that has not undergone toning processing, the color matching of the toned video is more vivid; playing the toned video can reach an effect satisfactory to the user, better meet the user's playback demand, and enhance the user's usage experience.
In the above description, the designated video may include different video image scenes. When carrying out toning processing, picture material containing different video image scenes can be toned according to preset toning processing rules. This is illustrated below by preferred embodiments.
In one embodiment of the invention, the toning processing unit is suitable for, when the configuration file describes that the designated video includes scenery-class picture material and its start and end times, increasing the saturation of the red, green and blue colors in the scenery-class picture material of the designated video by a preset value.
In the present embodiment, the scenery-class picture material in the designated video is toned. Specifically, when toning the scenery-class picture material, the saturation of the red, green and blue colors in the picture material is adjusted.
For example, the scenery-class picture material in the designated video contains fresh flowers, a meadow and a blue sky. According to the description in the configuration file, the start and end times of this scenery-class picture material are 10s-25s. When toning, the saturation of red, green and blue in the picture material of 10s-25s of the video is increased by 20%; the colors of the flowers, the meadow and the blue sky in the video then look more beautiful, improving the display effect of the video.
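One possible way to raise scenery saturation by the preset amount inside the configured window is an HSV adjustment, as in this OpenCV-based sketch (the 20% factor and 10s-25s window follow the example above; the HSV approach itself is an implementation assumption):

```python
import cv2
import numpy as np

def boost_saturation(frame_bgr, factor=1.2):
    # Convert to HSV, scale the saturation channel, convert back.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * factor, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def tone_frame(frame_bgr, t_seconds, start_s=10.0, end_s=25.0):
    # Only frames inside the configured scenery window are adjusted.
    if start_s <= t_seconds <= end_s:
        return boost_saturation(frame_bgr)
    return frame_bgr
```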
In one embodiment of the invention, the toning processing unit is suitable for, when the configuration file describes that the designated video includes picture material containing faces and its start and end times, carrying out face beautification processing on the picture material containing faces in the designated video.
In the present embodiment, the picture material containing faces in the designated video is toned. Specifically, face beautification processing is carried out on the faces in the picture material containing faces, such as beautification, skin smoothing and whitening processing, or the faces are processed with a preset one-click beautification operation.
For example, the picture material in the designated video contains faces, and according to the description in the configuration file, the start and end times of the picture material containing faces are 0s-20s. When toning, whitening processing is carried out on the faces in the picture material of 0s-20s of the video, so that the display effect of the faces in the video is better.
In the above embodiments, the configuration file records the start and end times of the picture material of the respective categories in the designated video, so that when toning according to the configuration file the client obtains the specific time range to tone, preventing picture material without the corresponding video image scene from also being toned, which would instead impair the display effect of the video.
In one embodiment of the invention, the toning processing unit is suitable for, when the configuration file describes that the designated video is applicable to the video toning processing general rule, inputting the designated video into a generic video toning processing model for toning processing.
The generic video toning processing model in the present embodiment can be generated with a machine learning method, using existing video images with a certain color-matching scheme as training data; this generic video toning processing model is used to carry out toning processing on videos that the configuration file describes as applicable to the toning processing general rule.
The video toning processing model here may be issued by the server to the client side, or may be carried in the configuration file. Specifically, with a machine learning method, existing video images with a certain color-matching scheme are used as training data to generate the corresponding generic video toning processing model, and this model is used to carry out toning processing on videos to which the toning processing general rule applies. For example, several films whose color matching meets the requirements are selected, their color-matching method is learned, and the corresponding generic video toning processing model is generated; after a video is input into the generic video toning processing model, its tones will also present tones consistent with the selected films.
The above embodiments of carrying out toning processing on the designated video according to the configuration file can be implemented in combination. That is to say, when processing the designated video, after the corresponding toning processing is carried out on the picture material according to the configuration file, the toned video may also be input into the generic video toning processing model for further toning processing.
In one embodiment of the invention, the toning processing unit is suitable for, when the configuration file describes that the designated video includes scenery-class picture material and its start and end times, increasing the saturation of the red, green and blue colors in the scenery-class picture material of the designated video by a preset value; and, when the configuration file describes that the designated video includes picture material containing faces and its start and end times, carrying out face beautification processing on the picture material containing faces in the designated video; and then inputting the designated video into the generic video toning processing model for toning processing.
In the present embodiment, the configuration file describes the picture material of the entire designated video; the designated video may include scenery-class picture material and may also include picture material containing faces, in which case both kinds of toning processing are required when toning.
It should be noted that the time ranges of the picture material containing scenery and the picture material containing faces in the video may overlap. For example, the start and end times of the picture material containing scenery are 10s-25s and those of the picture material containing faces are 0s-20s; that is to say, the picture material of 10s-20s of the designated video contains both scenery and faces, so both the scenery-class toning processing and the face beautification processing are carried out on the picture material of that period.
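The overlap handling can be pictured as collecting every operation whose configured window contains the current timestamp, as in this sketch (the operation names are placeholders):

```python
def operations_for(t_seconds, windows):
    # windows: list of (start_s, end_s, operation_name) taken from the config.
    return [op for (start, end, op) in windows if start <= t_seconds <= end]

windows = [(10, 25, "boost_scenery_saturation"), (0, 20, "beautify_faces")]
print(operations_for(15, windows))  # ['boost_scenery_saturation', 'beautify_faces']
print(operations_for(22, windows))  # ['boost_scenery_saturation']
```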
In the present embodiment, after the different kinds of toning processing are carried out on the picture material of the designated video according to the configuration file, the toned designated video is then input into the generic video toning processing model for another round of toning; that is to say, a second toning processing is carried out on the designated video, which can further improve its display effect.
In one embodiment of the invention, the above device further comprises: a training unit, suitable for obtaining a certain number of videos whose toning processing results have been confirmed to meet a preset standard, inputting these videos as training data into a toning processing machine learning model for training, and obtaining the generic video toning processing model.
As pointed out in the above description, the generic video toning processing model may be issued by the server side, carried in the configuration file, or generated by the client itself. In the present embodiment, the client obtains the generic video toning processing model. Specifically, with a machine learning method, a certain number of existing videos whose toning processing results are confirmed to meet the preset standard are used as training data to generate the corresponding generic video toning processing model, which is used to carry out toning processing on videos to which the toning processing general rule applies. For example, a certain number of films whose toning results are confirmed to meet the preset standard are selected, their color-matching method is learned, and the corresponding generic video toning processing model is generated; after a video is input into the generic video toning processing model, its tones will also present tones consistent with the selected films.
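As a very rough stand-in for such a generic toning model, one could learn per-channel colour statistics from the reference videos and match new frames to them; this simple colour-transfer sketch only illustrates the idea and is not the machine-learning model the embodiment describes:

```python
import numpy as np

def fit_reference_stats(reference_frames):
    # Per-channel mean/std over all pixels of the reference (well-graded) frames.
    stacked = np.concatenate([f.reshape(-1, 3).astype(np.float32)
                              for f in reference_frames])
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6

def apply_generic_toning(frame, ref_mean, ref_std):
    # Shift a new frame's colour distribution towards the reference statistics.
    flat = frame.reshape(-1, 3).astype(np.float32)
    out = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + 1e-6)
    out = out * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8).reshape(frame.shape)
```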
In one embodiment of the invention, before the toning processing unit carries out toning processing on the designated video according to the configuration file, the above device further comprises: a parameter detecting unit, suitable for detecting the parameter information of the intelligent terminal and judging, according to the detected parameter information of the intelligent terminal, whether to carry out toning processing on the designated video; if the judgment is yes, the step of carrying out toning processing on the designated video according to the configuration file is executed; if the judgment is no, toning processing is not carried out on the designated video.
Although toning processing of the designated video can be carried out according to the configuration file so that its color matching is more vivid, thereby improving its display effect and meeting the user's playback demand, videos with vivid colors are not appropriate for every scenario: some scenarios are suitable for toning and some are not. For example, when the contrast of the intelligent terminal itself is already high, or the user's current state is unsuitable for watching a video with excessive color saturation, or the environment is dim and a high-contrast video is unsuitable, carrying out toning processing on the designated video would instead be counterproductive, bring the user certain trouble and reduce the user's usage experience. For another example, when the user is moving at high speed or the contrast of the intelligent terminal itself is small, adjustment is needed.
In the present embodiment, before carrying out toning processing on the designated video, the parameter information of the intelligent terminal is first obtained and it is judged whether the situation is suitable for carrying out toning processing on the designated video; on the one hand, situations unsuitable for toning can be excluded, and on the other hand, situations suitable for toning can also be obtained.
Specifically, the parameter information of the above intelligent terminal includes one or more of the following: time information; display mode; posture information.
In the present embodiment, the time information can reflect the period the intelligent terminal is currently in, for example daytime or night. In the daytime, carrying out toning processing on the designated video can improve its display effect; but if toning processing is carried out at night, the higher saturation of the designated video may cause discomfort to the user's eyes when browsing the designated video and reduce the user's usage experience. Therefore, in the present embodiment, the judgment is made based on the time information of the intelligent terminal. For example, if the obtained current system time of the intelligent terminal is 9:30, it is judged that toning processing can be carried out on the designated video, and the toned picture is played; if the obtained current system time of the intelligent terminal is 22:00, it is judged that toning processing is not carried out on the designated video, and the original video is played.
In this example, the display mode can describe the current display effect of the intelligent terminal, e.g. the contrast in the display mode. If the contrast is already large, carrying out toning processing on the designated video would increase its contrast further; under strong contrast, the display effect of the designated video may be distorted, which instead impairs the effect of the designated video. Therefore, in the present embodiment, the display mode of the intelligent terminal is obtained and the judgment is made. For example, when the obtained contrast of the intelligent terminal is greater than a first preset value, it is judged that toning processing is not carried out on the designated video; conversely, when the obtained contrast of the intelligent terminal is smaller than a second preset value, it is judged that toning processing can be carried out on the designated video.
The display mode here can also be a brightness mode, a cold-light or warm-light mode, and the like.
In the present embodiment, the posture information of the intelligent terminal can directly reflect the posture of the user. If the user is lying down, the user may be resting; carrying out toning processing on the designated video at this moment may strain the user's eyes or raise the user's level of excitement, which is unfavorable to the user's current resting state. Therefore, in the present embodiment, whether to carry out toning processing on the designated video is judged from the posture information of the intelligent terminal. The posture information of the intelligent terminal here can be obtained through the attitude sensors in the intelligent terminal, for example, a gyroscope, an accelerometer, and the like.
For example, if it is obtained through the sensors that the current user is lying down, it can be judged that toning processing is not carried out on the designated video; alternatively, if it is obtained through the sensors that the current user is moving at high speed, the user may be on a train or in a car and the video is prone to blurring, so it is judged that toning processing is carried out on the designated video.
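Combining the three kinds of parameter information into one judgment could look like the sketch below; the 22:00 cut-off, contrast bound and speed threshold are illustrative assumptions, not values given in the patent:

```python
def should_tone(hour, contrast, lying_down, speed_mps):
    # Decide whether to tone the designated video from terminal parameters.
    if lying_down:                 # user appears to be resting: skip toning
        return False
    if hour >= 22 or hour < 6:     # late at night: play the original video
        return False
    if speed_mps > 15:             # fast-moving (train/car): tone to counter blur
        return True
    return contrast <= 0.8         # skip if the display is already high-contrast

print(should_tone(hour=9,  contrast=0.5, lying_down=False, speed_mps=0))   # True
print(should_tone(hour=22, contrast=0.5, lying_down=False, speed_mps=0))   # False
```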
The present invention also provides an electronic equipment, wherein the electronic equipment includes:
A processor; and a memory arranged to store computer executable instructions which, when executed, cause the processor to execute the video processing method according to Fig. 1 and its embodiments.
Fig. 3 shows a structural schematic diagram of an electronic equipment according to an embodiment of the invention. As shown in Fig. 3, the electronic equipment 300 includes:
A processor 310; and a memory 320 arranged to store computer executable instructions (program code). The memory 320 has a memory space 330 for storing program code for executing the steps of the method according to the invention; the program code is stored in the memory space 330 and, when executed, causes the processor 310 to execute the video processing method in Fig. 1 and its embodiments.
Fig. 4 shows a structural schematic diagram of a computer readable storage medium according to an embodiment of the invention. As shown in Fig. 4, the computer readable storage medium 400 stores one or more programs (program code) 410 which, when executed by a processor, execute the steps of the method according to the invention, i.e., the video processing method shown in Fig. 1 and its embodiments.
It should be noted that the embodiments of the electronic equipment shown in Fig. 3 and the computer readable storage medium shown in Fig. 4 correspond to the embodiments of the method shown in Fig. 1; as they have been described in detail above, the details are not repeated here.
In conclusion according to the technique and scheme of the present invention, identifying the picture material of designated;It is generated according to recognition result The configuration file of the picture material of designated is described;When client request plays the video, the video is provided to client Configuration file, with realize in video playing, toning processing is carried out to designated according to the configuration file, in this way, It is exactly toning treated video when user plays the video, compares and do not carry out the video of toning processing, treated for toning The color matching of video is distincter, improves the bandwagon effect of video, can reach customer satisfaction system effect, can more meet broadcasting for user Demand is put, the usage experience of user is enhanced.
It should be noted that:
The algorithms and displays provided herein are not inherently related to any particular computer, virtual device or other equipment. Various general-purpose devices can also be used together with the teaching herein. From the description above, the structure required to construct such devices is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein can be realized with various programming languages, and the description above of a specific language is intended to disclose the best mode of the invention.
Numerous specific details are set forth in the description provided here. It should be appreciated, however, that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to simplify the disclosure and help the understanding of one or more of the various inventive aspects, in the description of the exemplary embodiments above, the features of the invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than those expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of the embodiments can be adaptively changed and arranged in one or more devices different from the embodiments. The modules, units or components in the embodiments can be combined into one module, unit or component, and can additionally be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
The component embodiments of the present invention can be realized in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) can be used in practice to realize some or all functions of some or all components in the video processing device, the electronic equipment and the computer readable storage medium according to the embodiments of the present invention. The present invention can also be implemented as device or apparatus programs (for example, computer programs and computer program products) for executing part or all of the methods described herein. Such programs realizing the present invention may be stored on computer readable media, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, Fig. 3 shows a structural schematic diagram of an electronic equipment according to an embodiment of the invention. The electronic equipment 300 conventionally comprises a processor 310 and a memory 320 arranged to store computer executable instructions (program code). The memory 320 may be an electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk or a ROM. The memory 320 has a memory space 330 for storing program code 340 for executing any of the method steps in Fig. 1 and the embodiments. For example, the memory space 330 for program code may include individual program codes 340 respectively used for realizing the various steps in the above method. These program codes can be read from or written into one or more computer program products, which include program code carriers such as hard disks, compact discs (CD), memory cards or floppy disks. Such a computer program product is usually the computer readable storage medium 400 described in Fig. 4. The computer readable storage medium 400 may have memory segments, memory spaces and the like arranged similarly to the memory 320 in the electronic equipment of Fig. 3. The program code may, for example, be compressed in a suitable form. In general, the storage unit stores program code 410 for executing the steps of the method according to the invention, i.e., program code that can be read by a processor such as the processor 310; when these program codes are run by the electronic equipment, the electronic equipment is caused to execute the steps in the method described above.
It should be noted that the above embodiments describe rather than limit the invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
The invention discloses A1, a video processing method, wherein the method includes:
identifying the picture material of a designated video;
generating, according to the recognition result, a configuration file describing the picture material of the designated video;
providing the configuration file, so that toning processing is carried out on the video according to the configuration file.
A2, the method as described in A1, wherein identifying the picture material of the designated video includes:
presetting video image scenes of different categories;
identifying the video image scene category to which the picture material of the designated video belongs, and recording the start and end times of the video image scenes of the respective categories in the designated video.
A3, the method as described in A2, wherein identifying the video image scene category to which the picture material of the designated video belongs includes:
sequentially inputting the designated video into each machine learning model respectively used for identifying video image scenes of different categories in a video;
obtaining the recognition result output by each machine learning model.
A4, the method as described in A3, wherein the method further comprises:
for a video image scene of one category, obtaining videos whose video image scenes belong to that category, inputting the obtained videos as training data into a machine learning model for training, and obtaining the machine learning model for identifying the video image scene of that category in a video;
and so on, obtaining the machine learning model corresponding to the video image scene of each category.
A5, the method as described in A2, wherein the different categories of video image scenes include:
a video image scene containing faces;
a landscape-class video image scene.
A6, the method as described in A5, wherein generating, according to the recognition result, the configuration file describing the picture material of the designated video further comprises:
when the designated video includes a video image scene containing faces, further judging, according to the size ratio of the faces therein, whether toning processing is to be carried out on the video image scene containing faces, and writing the judgment result into the configuration file.
A7, the method as described in any one of A1-A6, wherein the method further comprises:
writing into the configuration file a toning processing scheme corresponding to the picture material of the designated video described in the configuration file.
A8, the method as described in A2, wherein the method further comprises:
when the picture material of the designated video does not belong to any preset video image scene category, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule.
A9, the method as described in A8, wherein the method further comprises:
when the picture material of the designated video does not belong to any preset video image scene category, judging whether the designated video is applicable to the toning processing general rule;
if applicable, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule;
if not applicable, marking in the configuration file of the designated video that the designated video is not suitable for any toning processing.
A10, the method as described in A9, wherein judging whether the designated video is applicable to the toning processing general rule includes:
inputting the histogram of the designated video into a machine learning model for identifying videos not suitable for toning;
if the machine learning model outputs a result confirming that the designated video is not suitable for toning, determining that the toning processing general rule is not applicable to the designated video; conversely, determining that the designated video is applicable to the toning processing general rule.
A11, the method as described in A10, wherein the method further comprises:
obtaining a certain number of videos not suitable for toning;
inputting the histograms of these videos not suitable for toning as training data into a machine learning model for training, and obtaining the machine learning model for identifying videos not suitable for toning.
The invention also discloses B12, a video processing device, wherein the device includes:
a recognition unit, suitable for identifying the picture material of a designated video;
a configuration file generation unit, suitable for generating, according to the recognition result, a configuration file describing the picture material of the designated video;
a providing unit, suitable for providing the configuration file, so that toning processing is carried out on the video according to the configuration file.
B13, the device as described in B12, wherein
the recognition unit is suitable for presetting video image scenes of different categories, identifying the video image scene category to which the picture material of the designated video belongs, and recording the start and end times of the video image scenes of the respective categories in the designated video.
B14, the device as described in B13, wherein
the recognition unit is suitable for sequentially inputting the designated video into each machine learning model respectively used for identifying video image scenes of different categories in a video, and obtaining the recognition result output by each machine learning model.
B15, the device as described in B14, wherein the device further comprises:
a first machine learning model acquiring unit, suitable for, for a video image scene of one category, obtaining videos whose video image scenes belong to that category, inputting the obtained videos as training data into a machine learning model for training, and obtaining the machine learning model for identifying the video image scene of that category in a video; and so on, obtaining the machine learning model corresponding to the video image scene of each category.
B16, the device as described in B13, wherein the different categories of video image scenes include:
a video image scene containing faces;
a landscape-class video image scene.
B17, the device as described in B16, wherein
the configuration file generation unit is suitable for, when the designated video includes a video image scene containing faces, further judging, according to the size ratio of the faces therein, whether toning processing is to be carried out on the video image scene containing faces, and writing the judgment result into the configuration file.
B18, the device as described in any one of B12-B17, wherein the device further comprises:
a toning processing scheme writing unit, suitable for writing into the configuration file a toning processing scheme corresponding to the picture material of the designated video described in the configuration file.
B19, the device as described in B13, wherein the device further comprises:
a marking unit, suitable for, when the picture material of the designated video does not belong to any preset video image scene category, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule.
B20, the device as described in B19, wherein the device further comprises:
a judging unit, suitable for, when the picture material of the designated video does not belong to any preset video image scene category, judging whether the designated video is applicable to the toning processing general rule;
the marking unit is suitable for, if applicable, marking in the configuration file of the designated video that the designated video is applicable to the toning processing general rule; and if not applicable, marking in the configuration file of the designated video that the designated video is not suitable for any toning processing.
B21, the device as described in B20, wherein
the judging unit is suitable for inputting the histogram of the designated video into a machine learning model for identifying videos not suitable for toning; if the machine learning model outputs a result confirming that the designated video is not suitable for toning, determining that the toning processing general rule is not applicable to the designated video; conversely, determining that the designated video is applicable to the toning processing general rule.
B22, the device as described in B21, wherein the device further comprises:
a second machine learning model acquiring unit, suitable for obtaining a certain number of videos not suitable for toning, inputting the histograms of these videos not suitable for toning as training data into a machine learning model for training, and obtaining the machine learning model for identifying videos not suitable for toning.
The invention also discloses C23, an electronic equipment, wherein the electronic equipment includes:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to execute the method according to any one of A1-A11.
The invention also discloses D24, a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method according to any one of A1-A11.

Claims (10)

1. A video processing method, wherein the method comprises:
identifying the picture content of a designated video;
generating, according to a recognition result, a configuration file describing the picture content of the designated video;
providing the configuration file, so that toning processing is performed on the video according to the configuration file.
2. The method according to claim 1, wherein the identifying the picture content of the designated video comprises:
presetting video image scenes of different categories;
identifying the video image scene category to which the picture content of the designated video belongs, and recording the start and end times, within the designated video, of the video image scene of the corresponding category.
3. The method according to claim 2, wherein the identifying the video image scene category to which the picture content of the designated video belongs comprises:
sequentially inputting the designated video into each of the machine learning models respectively used for identifying the video image scenes of different categories in a video;
obtaining the recognition result output by each machine learning model.
4. The method according to claim 3, wherein the method further comprises:
for a video image scene of one category, obtaining videos that belong to the video image scene of that category, and inputting the obtained videos as training data into a machine learning model for training and learning, so as to obtain the machine learning model for identifying the video image scene of that category in a video;
and so on, so as to obtain the machine learning models corresponding to the video image scenes of all categories.
5. The method according to claim 2, wherein the video image scenes of different categories comprise:
a video image scene containing a human face;
a landscape-type video image scene.
6. The method according to claim 5, wherein the generating, according to the recognition result, the configuration file describing the picture content of the designated video further comprises:
when the designated video includes a video image scene containing a human face, further judging, according to the size ratio of the face therein, whether to perform toning processing on that video image scene, and writing the judgment result into the configuration file.
7. The method according to any one of claims 1 to 6, wherein the method further comprises:
writing into the configuration file a toning processing scheme corresponding to the picture content of the designated video described in the configuration file.
8. A video processing apparatus, wherein the apparatus comprises:
a recognition unit, adapted to identify the picture content of a designated video;
a configuration file generation unit, adapted to generate, according to a recognition result, a configuration file describing the picture content of the designated video;
a providing unit, adapted to provide the configuration file, so that toning processing is performed on the video according to the configuration file.
9. An electronic device, wherein the electronic device comprises:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs which, when executed by a processor, implement the method according to any one of claims 1 to 7.
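
To make claims 1 to 7 concrete, the sketch below builds the kind of configuration file they describe: one entry per recognized scene with its category, its start and end times, the face-size-based toning decision of claim 6, and a toning scheme per claim 7. The JSON layout, the field names, the 0.05 face-size threshold, and the scheme names are assumptions for illustration only, not part of the disclosure.

# Illustrative sketch only; field names, threshold, and scheme names are assumed.
import json

def build_config(scene_results, face_size_threshold: float = 0.05) -> str:
    """scene_results: list of dicts like
       {"category": "face" | "landscape", "start": sec, "end": sec, "face_ratio": float}"""
    entries = []
    for s in scene_results:
        apply_toning = True
        if s["category"] == "face":
            # Claim 6: decide by the size ratio of the face within the picture.
            apply_toning = s.get("face_ratio", 0.0) >= face_size_threshold
        entries.append({
            "category": s["category"],
            "start": s["start"],
            "end": s["end"],
            "apply_toning": apply_toning,
            # Claim 7: a toning scheme matching the scene's picture content.
            "toning_scheme": "warm_skin_tone" if s["category"] == "face" else "vivid_landscape",
        })
    return json.dumps({"scenes": entries}, indent=2)

# Example: a face close-up followed by a landscape shot.
print(build_config([
    {"category": "face", "start": 0.0, "end": 12.5, "face_ratio": 0.18},
    {"category": "landscape", "start": 12.5, "end": 40.0},
]))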
CN201810085226.4A 2018-01-29 2018-01-29 A kind of method for processing video frequency and device Pending CN108495107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810085226.4A CN108495107A (en) 2018-01-29 2018-01-29 A kind of method for processing video frequency and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085226.4A CN108495107A (en) 2018-01-29 2018-01-29 A kind of method for processing video frequency and device

Publications (1)

Publication Number Publication Date
CN108495107A true CN108495107A (en) 2018-09-04

Family

ID=63343855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085226.4A Pending CN108495107A (en) 2018-01-29 2018-01-29 A kind of method for processing video frequency and device

Country Status (1)

Country Link
CN (1) CN108495107A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000259372A (en) * 1999-03-09 2000-09-22 Fuji Photo Film Co Ltd Image output method, device therefor and recording medium
CN102111546A (en) * 2009-12-25 2011-06-29 佳能株式会社 Method for processing image, image processing apparatus, and imaging apparatus
CN103617432A (en) * 2013-11-12 2014-03-05 华为技术有限公司 Method and device for recognizing scenes
CN105874808A (en) * 2014-01-03 2016-08-17 汤姆逊许可公司 Method and apparatus for video optimization using metadata

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110913263A (en) * 2019-11-29 2020-03-24 联想(北京)有限公司 Video processing method and device and electronic equipment
CN110913263B (en) * 2019-11-29 2021-05-18 联想(北京)有限公司 Video processing method and device and electronic equipment
CN113497954A (en) * 2020-03-20 2021-10-12 阿里巴巴集团控股有限公司 Video toning method, media data processing method, equipment and storage medium
CN113497954B (en) * 2020-03-20 2023-02-03 阿里巴巴集团控股有限公司 Video toning method, device and storage medium

Similar Documents

Publication Publication Date Title
CN108235117A (en) A kind of video shading process and device
US8553941B2 (en) Methods, circuits, devices, apparatuses and systems for providing image composition rules, analysis and improvement
CN107592474A (en) A kind of image processing method and device
US20170270679A1 (en) Determining a hair color treatment option
CN108322788A (en) Advertisement demonstration method and device in a kind of net cast
KR20170101957A (en) Hair dyeing system using smart device
CN107918764A (en) information output method and device
CN107820005A (en) Image processing method, device and electronic installation
CN105126342B (en) A kind of game score method and apparatus
CN108495058A (en) Image processing method, device and computer readable storage medium
CN107948640A (en) Video playing test method, device, electronic equipment and storage medium
CN107509287A (en) Adjust method and device, Intelligent illumination device and the storage medium of light
CN107493440A (en) A kind of method and apparatus of display image in the application
CN108446705A (en) The method and apparatus of image procossing
CN108495107A (en) A kind of method for processing video frequency and device
CN108236784A (en) The training method and device of model, storage medium, electronic device
CN108696699A (en) A kind of method and apparatus of video processing
CN108848416A (en) The evaluation method and device of audio-video frequency content
CN108648139A (en) A kind of image processing method and device
CN108133718A (en) A kind of method and apparatus handled video
KR102082766B1 (en) Method and apparatus for distinguishing objects
CN106713968A (en) Live broadcast data display method and device
CN108449626A (en) Video processing, the recognition methods of video, device, equipment and medium
CN108470362A (en) A kind of method and apparatus for realizing video toning
CN105959593A (en) Exposure method for camera device and camera device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180904