CN110913263A - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN110913263A
CN110913263A (application number CN201911199988.8A)
Authority
CN
China
Prior art keywords
target
image
picture display
frame
display parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911199988.8A
Other languages
Chinese (zh)
Other versions
CN110913263B (en)
Inventor
徐霄
刘景贤
柯海滨
靳玉茹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911199988.8A priority Critical patent/CN110913263B/en
Publication of CN110913263A publication Critical patent/CN110913263A/en
Application granted granted Critical
Publication of CN110913263B publication Critical patent/CN110913263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Abstract

The embodiments of this application disclose a video processing method, a video processing apparatus, and an electronic device. Under this scheme, picture display parameters can be obtained from an image having a specific picture display effect, and the video frames to be output are processed according to those parameters, so that the display effect of the output video is the same as or similar to the specific effect, improving the viewing experience.

Description

Video processing method and device and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a video processing method and apparatus, and an electronic device.
Background
With technological progress and the continual rise in aesthetic standards, the display quality of film and television works has become an increasingly serious problem: excessive skin smoothing, retouching, beautification, filter effects, and the like severely degrade the viewing experience. For example, under a level-ten beauty filter a character's skin is rendered flawless; in close-ups the skin's texture and pores can hardly be seen, so the face reads as nothing but eyes, lips, and a nose on a bright white blank. Contrast between light and shadow can better convey a character's inner emotion, whereas a uniformly white face makes the character appear hollow. Meanwhile, to keep the lead in focus, the background is blurred beyond recognition. All of this severely harms the user's viewing experience.
Therefore, overcoming these display-effect problems in film and television works has become an urgent technical problem.
Disclosure of Invention
The purpose of this application is to provide a video processing method, a video processing apparatus, and an electronic device, comprising the following technical solutions:
a video processing method, comprising:
obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image;
processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters;
and outputting the target video frame.
Preferably, the method for processing at least one frame of image to obtain the target screen display parameter includes:
processing a frame of target image to obtain image display parameters of the target image, wherein the image display parameters of the target image are the target image display parameters;
the one-frame target image is a frame image designated by a user or one frame image in a plurality of frame images output last time.
Preferably, the method for processing at least one frame of image to obtain the target screen display parameter includes:
respectively processing each frame of target image in the multiple frames of target images to obtain the picture display parameters of each frame of target image; the multi-frame target image is an image in different image sources output within a preset historical time;
target picture display parameters are determined from picture display parameters of the target images of the frames.
In the above method, preferably, the determining target screen display parameters from the screen display parameters of the target images in each frame includes:
clustering the picture display parameters of the multi-frame target images according to the images;
and determining target picture display parameters according to the clustering result, wherein the number of picture display parameters contained in the clustering category to which the target picture display parameters belong is more than that contained in other clustering categories.
Preferably, the method for processing the target image to obtain the screen display parameters of the target image for each frame of the target image includes:
generating a composite image according to the target image;
and acquiring the picture display parameters of the synthesized image, wherein the picture display parameters of the synthesized image are the picture display parameters of the target image.
In the above method, preferably, the generating a composite image according to the target image includes:
and inputting the target image into a pre-trained image synthesis model to obtain a synthesis image output by the image synthesis model.
Preferably, the method for processing a video frame that is not output according to the target screen display parameter to obtain a target video frame includes:
and inputting the target picture display parameters and the video frame into a pre-trained picture display parameter conversion model to obtain a target video frame.
Preferably, the obtaining of the target picture display parameter and the processing of the video frame not output according to the target picture display parameter to obtain the target video frame includes:
processing the frame of target image through a pre-trained image display parameter conversion model to obtain target image display parameters;
and processing the target picture display parameters and the video frames which are not output through the picture display parameter conversion model to obtain the target video frames.
A video processing apparatus comprising:
an obtaining module, configured to obtain a target picture display parameter, where the target picture display parameter is obtained by processing at least one frame of image;
the conversion module is used for processing the video frames which are not output according to the target picture display parameters to obtain target video frames, and the picture display effect of the target video frames is adapted to the target picture display parameters;
and the output module is used for outputting the target video frame.
An electronic device, comprising:
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image;
processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters;
and outputting the target video frame.
In summary, in the video processing method, apparatus, and electronic device provided by this application, target picture display parameters are obtained by processing at least one frame of image; when a video is played, the picture display parameters of each video frame are adjusted according to the target parameters so that the frame's display effect matches them, and the resulting target video frame is output. Under this scheme, picture display parameters can be obtained from an image having a specific picture display effect, and the frames to be output are processed based on those parameters, so that the display effect of the output video is the same as or similar to the specific effect, making video processing more intelligent and improving the viewing experience.
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an implementation of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an implementation of processing at least one frame of image to obtain target frame display parameters according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of an implementation of determining target frame display parameters from frame display parameters of target images according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating an implementation of processing a target image to obtain a frame display parameter of the target image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than described or illustrated herein.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of the present disclosure.
The video processing method and apparatus provided in the embodiment of the present application may be applied to a video player, where the video player may be an application program dedicated for playing audio/video, or may be a browser, or may be another application program having an audio/video playing function, and are not limited herein.
An implementation flowchart of the video processing method provided in the embodiment of the present application is shown in fig. 1, and may include:
step S11: and obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image.
Optionally, the target picture display parameters may be read from the memory, and at this time, the target picture display parameters are obtained by processing at least one frame of image in advance and stored in the memory. For example, the video processing device acquires at least one frame of image according to a preset period, processes the at least one frame of image, obtains a target picture display parameter, and stores the target picture display parameter in the memory. The at least one frame of image may be an image that has been viewed by a user.
Or, the target picture display parameters may be obtained by processing at least one frame of image in real time, and at this time, the target picture display parameters are obtained in real time. For example, when the user is watching a video, if the user does not like the screen display effect of the opened video, at least one frame of image having the screen display effect that the user likes can be selected to be input to the video processing apparatus, and the target screen display parameter is obtained.
Optionally, the target picture display parameters may be obtained when it is monitored that the video starts to be played, or the target picture display parameters may be obtained only when a video processing instruction is received.
The target picture display parameter is a parameter characterizing the picture display effect, and may be a set of parameters, which may include, but is not limited to, the following: color, depth of field, vignetting, etc. Each can be further subdivided, for example, colors include: hue, saturation, brightness, etc.; the depth of field includes: front depth of field, back depth of field, etc.; the vignetting includes a lens diameter, a lens ratio, and the like.
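As a concrete illustration (not part of the patent text), such a parameter set might be modeled as a simple structure. All field names here are hypothetical; the patent only lists color, depth of field, and vignetting as example categories:

```python
from dataclasses import dataclass

@dataclass
class DisplayParams:
    # Color subdivision: hue, saturation, brightness
    hue: float
    saturation: float
    brightness: float
    # Depth-of-field subdivision: front and back depth of field
    front_dof: float
    back_dof: float
    # Vignetting subdivision: lens diameter and lens ratio
    lens_diameter: float
    lens_ratio: float

# A hypothetical target parameter set characterizing a desired display effect
target = DisplayParams(hue=0.1, saturation=0.8, brightness=0.6,
                       front_dof=1.5, back_dof=3.0,
                       lens_diameter=50.0, lens_ratio=1.8)
print(target.saturation)  # → 0.8
```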
Step S12: and processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters.
A video is composed of a number of still images, and each still image in the video is called a video frame.
In the embodiment of the application, for each video frame which is not output, the video frame is processed according to the target picture display parameters, and a target video frame with a picture display effect matched with the target picture display parameters is obtained.
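A minimal runnable sketch of this per-frame adjustment, using a crude per-pixel brightness/saturation transform of my own as a stand-in for the learned conversion model described later (the frame format, Rec. 601 luma weights, and parameter semantics here are illustrative assumptions, not the patent's method):

```python
def apply_display_params(frame, target_brightness, target_saturation):
    """Adjust a frame (list of RGB tuples, values 0-255) toward a target
    brightness scale and saturation factor. A crude illustrative stand-in
    for the picture display parameter conversion model."""
    out = []
    for r, g, b in frame:
        # Luma via Rec. 601 weights (weights sum to 1.0)
        luma = 0.299 * r + 0.587 * g + 0.114 * b
        # Move each channel toward/away from luma to change saturation,
        # then scale toward the target brightness (1.0 = unchanged).
        r2 = luma + (r - luma) * target_saturation
        g2 = luma + (g - luma) * target_saturation
        b2 = luma + (b - luma) * target_saturation
        out.append(tuple(max(0, min(255, int(c * target_brightness)))
                         for c in (r2, g2, b2)))
    return out

frame = [(200, 40, 40), (10, 180, 30)]
# Fully desaturating collapses each pixel to its luma value
print(apply_display_params(frame, target_brightness=1.0, target_saturation=0.0))
# → [(87, 87, 87), (112, 112, 112)]
```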
Step S13: and outputting the target video frame.
In the video processing method provided by this embodiment, target picture display parameters are obtained by processing at least one frame of image; when a video is played, the picture display parameters of each video frame are adjusted according to the target parameters so that the frame's display effect matches them, and the resulting target video frame is output. Picture display parameters can thus be obtained from an image having a specific display effect (for example, one the user likes), and the frames to be output are processed based on those parameters, so that the output video's display effect is the same as or similar to that specific effect, making video processing more intelligent and improving the viewing experience.
In an alternative embodiment, a frame of image preferred by the user may be processed to obtain the target screen display parameters. The one frame image preferred by the user may be designated by the user, or one frame may be selected as the one frame image preferred by the user from among a plurality of frame images recently viewed by the user. Specifically, one implementation manner of processing at least one frame of image to obtain the target picture display parameter may be:
processing a frame of target image to obtain image display parameters of the target image, wherein the image display parameters of the target image are target image display parameters;
the target image is a frame of image designated by a user or a frame of image in a plurality of frames of images output last time.
The multiple frames output most recently are those the user most recently viewed. For example, they may be images from the video the user watched last. Alternatively, they may be photographs the user recently browsed, whether in local storage or on the network.
When the target image is one of the most recently output frames, it may be any one of those frames, or the one with the highest definition among them.
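One hedged way to pick the "highest definition" frame is a sharpness score such as the mean absolute Laplacian response; the patent does not specify how definition is measured, so this metric is an assumption of mine:

```python
def sharpness(gray, w, h):
    """Crude definition/sharpness score for a flat row-major list of
    grayscale values: mean absolute 4-neighbor Laplacian over interior
    pixels. Higher means more fine detail."""
    total = 0
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            i = y * w + x
            lap = 4 * gray[i] - gray[i - 1] - gray[i + 1] - gray[i - w] - gray[i + w]
            total += abs(lap)
            count += 1
    return total / count

blurry = [100] * 25                                              # flat 5x5 image: no detail
sharp = [(x + y) % 2 * 255 for y in range(5) for x in range(5)]  # 5x5 checkerboard
print(sharpness(sharp, 5, 5) > sharpness(blurry, 5, 5))  # → True
```

The most recently output frame with the highest score would then be chosen as the target image.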
In addition to processing a frame of image to obtain the target picture display parameters, in an alternative embodiment, a plurality of frames of images may be processed to obtain the picture display parameters of the plurality of frames of images, and then the target picture display parameters may be selected from the picture display parameters. Specifically, as shown in fig. 2, an implementation flowchart of processing at least one frame of image to obtain the target screen display parameter may include:
step S21: respectively processing each frame of target image in the multiple frames of target images to obtain the picture display parameters of each frame of target image; the multi-frame target image is an image in different image sources output within a preset historical time.
The images in the different image sources output within the preset historical time length may refer to images in different videos output within the latest preset historical time length. For example, if the user has watched 10 videos in the last 3 months, one frame of image can be extracted from each of the 10 videos as the target image, and a total of 10 frames of target images are obtained. And processing each frame of target image to obtain the picture display parameters of each frame of target image.
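The frame-per-source selection in this example can be sketched as follows; choosing the middle frame is a hypothetical policy of mine, since the text only requires one frame from each image source:

```python
def sample_target_images(recent_videos):
    """Pick one frame from each recently watched video (here: the middle
    frame) to serve as the target images, one per image source."""
    return [frames[len(frames) // 2] for frames in recent_videos.values()]

# 10 videos watched in the preset historical period, 5 placeholder frames each
videos = {f"video_{i}": [f"v{i}_frame_{j}" for j in range(5)] for i in range(10)}
targets = sample_target_images(videos)
print(len(targets))  # → 10, i.e. one target image per video
```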
Step S22: target picture display parameters are determined from picture display parameters of the target images of the frames.
In the foregoing example, after the screen display parameters of each frame of target image are obtained, the display parameters of one target image are selected as the target screen display parameters from among the display parameters of the 10 target images.
In an alternative embodiment, an implementation flowchart of determining the target picture display parameter from the picture display parameters of the target images in each frame is shown in fig. 3, and may include:
step S31: and clustering the picture display parameters of the multi-frame target images according to the images.
The image-based clustering means that the picture display parameters of each frame of target image are clustered as a whole. For example, suppose that 10 frame target images are P0, P1, P2, P3, P4, P5, P6, P7, P8, P9, respectively, and the corresponding screen display parameters are:
P0:a0,b0,c0,d0,e0;P1:a1,b1,c1,d1,e1;
P2:a2,b2,c2,d2,e2;P3:a3,b3,c3,d3,e3;
P4:a4,b4,c4,d4,e4;P5:a5,b5,c5,d5,e5;
P6:a6,b6,c6,d6,e6;P7:a7,b7,c7,d7,e7;
P8:a8,b8,c8,d8,e8;P9:a9,b9,c9,d9,e9;
For convenience of description, the screen display parameters of the target image Pi (i = 0, 1, 2, …, 9) are denoted Si, that is, Si = (ai, bi, ci, di, ei). In the embodiment of the present application, the 10 screen display parameters S0, S1, S2, S3, S4, S5, S6, S7, S8, S9 are clustered.
Step S32: and determining target picture display parameters according to the clustering result, wherein the number of picture display parameters contained in the clustering category to which the target picture display parameters belong is more than that contained in other clustering categories.
Optionally, the target screen display parameter may be any one of the screen display parameters included in the cluster category to which the target screen display parameter belongs.
Assume the 10 screen display parameters cluster into 3 categories, referred to as cluster 1, cluster 2, and cluster 3 for convenience: cluster 1 contains 2 parameters (say S7 and S8), cluster 2 contains 6 (say S0, S1, S2, S4, S6, S9), and cluster 3 contains 2 (say S3 and S5). Since cluster 2 contains more screen display parameters than either cluster 1 or cluster 3, a target screen display parameter is determined from the 6 parameters S0, S1, S2, S4, S6, S9 in cluster 2. Optionally, one of those 6 parameters may be chosen at random as the target screen display parameter.
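The steps above can be sketched with a toy clustering of 10 five-value parameter vectors. The patent does not fix a clustering algorithm, so the greedy distance-threshold grouping below is a stand-in (for e.g. k-means), and the synthetic vectors are chosen so that S0, S1, S2, S4, S6, S9 form the largest cluster as in the example:

```python
import random

def euclid(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def cluster(params, eps=1.0):
    """Greedy threshold clustering: each parameter vector joins the first
    cluster whose representative is within eps, else starts a new cluster."""
    clusters = []
    for p in params:
        for c in clusters:
            if euclid(p, c[0]) < eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# 10 synthetic parameter vectors S0..S9: six near (1,1,1,1,1),
# two near (5,5,5,5,5), two near (9,9,9,9,9)
S = [(1.0, 1, 1, 1, 1), (1.1, 1, 1, 1, 1), (1.2, 1, 1, 1, 1),
     (5.0, 5, 5, 5, 5), (0.9, 1, 1, 1, 1), (5.1, 5, 5, 5, 5),
     (1.05, 1, 1, 1, 1), (9.0, 9, 9, 9, 9), (9.1, 9, 9, 9, 9),
     (0.95, 1, 1, 1, 1)]
clusters = cluster(S)
largest = max(clusters, key=len)    # the category with the most parameters
target = random.choice(largest)     # pick any member as the target parameters
print(len(largest))  # → 6
```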
In an alternative embodiment, for each frame of the target image (denoted as a for convenience of description), an implementation flowchart of processing the target image a to obtain the frame display parameters of the target image a is shown in fig. 4, and may include:
step S41: a composite image is generated from the target image a.
The composite image is an image synthesized by the video processing apparatus from the target image a rather than an actually captured image; its content may be the same as or different from that of the target image a. The similarity between the screen display effect of the composite image and that of the target image a is greater than a threshold; that is, the two display effects are the same or similar.
In an alternative embodiment, the target image a may be input into a pre-trained image synthesis model to obtain a synthesized image output by the image synthesis model.
the image synthesis model can be obtained by training an image sample set marked with picture display parameters. Some parameters in the image display parameters labeled by the image sample can be directly read from the parameters carried by the image sample, and the other part of parameters are parameters not carried by the image sample, and the parameters are labeled manually. Specifically, during training, for each image sample (denoted as Y for convenience of description) in the image sample set, the image sample Y is input to the image synthesis model to obtain a synthesized image (denoted as H for convenience of description) output by the image synthesis model, and then parameters of the image synthesis model are updated based on a difference between screen display parameters of the synthesized image H and screen display parameters labeled for the image sample Y. During the training process, there is no need to pay attention to what the content of the composite image is. Of course, in the embodiment of the present application, whether or not to pay attention to the content of the composite image is not limited, and the content of the composite image may be paid attention to, or the content of the composite image may not be paid attention to. If attention is paid to the contents of the synthesized image, when updating the parameters of the image synthesis model, the parameters of the image synthesis model may be updated according to the difference between the screen display parameters of the synthesized image H and the labeled screen display parameters of the image sample Y, and the difference between the contents of the synthesized image H and the contents of the image sample Y.
Step S42: and acquiring the picture display parameters of the synthetic image, wherein the picture display parameters of the synthetic image are the picture display parameters of the target image.
In an optional embodiment, an implementation manner of processing the video frame that is not output according to the target picture display parameter to obtain the target video frame may be:
and inputting the target picture display parameters and the video frame into a pre-trained picture display parameter conversion model to obtain the target video frame.
Optionally, the picture display parameter conversion model may be obtained by training on a first sample set, each of whose samples consists of a picture display parameter and an image. During training, each sample is input into the conversion model to obtain a predicted image, and the model's parameters are updated according to both the difference between the predicted image's content and the sample image's content, and the difference between the predicted image's display parameters and the sample's display parameters.
In an optional embodiment, the obtaining of the target picture display parameter, and processing the video frame that is not output according to the target picture display parameter to obtain the target video frame may be implemented in a manner that:
processing a frame of target image through a pre-trained image display parameter conversion model to obtain target image display parameters;
and processing the target picture display parameters and the video frames which are not output through the picture display parameter conversion model to obtain the target video frames.
In the embodiment of the application, the target picture display parameters and the target video frames are obtained from the same model.
Optionally, the picture display parameter conversion model may be obtained by training on a second sample set, each of whose samples consists of a first image labeled with picture display parameters and a second image that is not. During training, each sample is input into the conversion model, which processes the first image to obtain its display parameters and then generates a predicted image from those parameters and the second image; the model's parameters are updated according to both the difference between the predicted image's content and the second image's content, and the difference between the predicted image's display parameters and those labeled on the first image.
Corresponding to the method embodiment, an embodiment of the present application further provides a video processing apparatus, and a schematic structural diagram of the video processing apparatus provided in the embodiment of the present application is shown in fig. 5, and the video processing apparatus may include:
an obtaining module 51, a conversion module 52, and an output module 53, wherein:
the obtaining module 51 is configured to obtain a target picture display parameter, where the target picture display parameter is obtained by processing at least one frame of image;
the conversion module 52 is configured to process a video frame that is not output according to the target picture display parameter to obtain a target video frame, where a picture display effect of the target video frame is adapted to the target picture display parameter;
the output module 53 is configured to output the target video frame.
In the video processing apparatus provided by this embodiment, target picture display parameters are obtained by processing at least one frame of image; when a video is played, the picture display parameters of each video frame are adjusted according to the target parameters so that the frame's display effect matches them, and the resulting target video frame is output. Picture display parameters can thus be obtained from an image having a specific display effect (for example, one the user likes), and the frames to be output are processed based on those parameters, so that the output video's display effect is the same as or similar to that specific effect, making video processing more intelligent and improving the viewing experience.
In an optional embodiment, the obtaining module 51 may be specifically configured to: and reading target picture display parameters obtained by processing the at least one frame of image in advance from the memory.
In an alternative embodiment, the obtaining module 51 may specifically be configured to: and acquiring the at least one frame of image, and processing the at least one frame of image to obtain the target picture display parameters.
Optionally, the obtaining module 51 may specifically include:
the first processing unit is used for processing a frame of target image to obtain the picture display parameters of the target image, wherein the picture display parameters of the target image are the target picture display parameters;
the one frame of target image is an image designated by a user, or one frame among a plurality of frames of images most recently output.
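As a hedged illustration of what picture display parameters extracted from a single target image might look like, the sketch below computes two simple global statistics, brightness (mean pixel value) and contrast (standard deviation of pixel values). The patent does not specify which concrete parameters are used, so this choice, and the grayscale 2-D-list image representation, are assumptions:

```python
import statistics

def extract_display_params(frame):
    """Illustrative picture-display-parameter extraction from one frame.

    frame: 2-D list of grayscale pixel values (an assumed representation;
    the patent does not fix the image format).
    """
    pixels = [px for row in frame for px in row]
    return {
        "brightness": statistics.mean(pixels),   # overall luminance level
        "contrast": statistics.pstdev(pixels),   # spread of pixel values
    }
```

For a single target image, the dictionary returned here would play the role of the target picture display parameters.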
Optionally, the obtaining module 51 may specifically include:
the second processing unit is used for respectively processing each frame of target image in the multiple frames of target images to obtain the picture display parameters of each frame of target image; the multi-frame target image is an image in different image sources output within a preset historical time;
and the determining unit is used for determining target picture display parameters from the picture display parameters of the target images of the frames.
Optionally, the determining unit may include:
the clustering subunit is used for clustering the picture display parameters of the multi-frame target images according to the images;
and the determining subunit is used for determining target picture display parameters according to the clustering result, wherein the number of picture display parameters contained in the clustering category to which the target picture display parameters belong is more than the number of picture display parameters contained in other clustering categories.
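A minimal sketch of the clustering-then-selection step described above, assuming picture display parameters are equal-length numeric vectors. The patent names no clustering algorithm, so the centroid-based grouping and the tolerance `tol` below are assumptions for illustration:

```python
def pick_target_params(param_vectors, tol=0.1):
    """Cluster picture-display-parameter vectors and return the centroid
    of the largest cluster as the target picture display parameters."""
    clusters = []  # each cluster is a list of parameter vectors
    for p in param_vectors:
        for cluster in clusters:
            centroid = [sum(vals) / len(vals) for vals in zip(*cluster)]
            if all(abs(a - b) <= tol for a, b in zip(p, centroid)):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    # the clustering category containing the most picture display parameters wins
    largest = max(clusters, key=len)
    return [sum(vals) / len(vals) for vals in zip(*largest)]
```

The returned centroid belongs to the cluster with the most members, matching the rule that the target parameters' category contains more picture display parameters than any other category.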
Optionally, the processing unit may specifically include:
a generation subunit, configured to generate, for each frame of target image, a composite image from the frame of target image;
and the acquisition subunit is used for acquiring the picture display parameters of the synthesized image of the target image, wherein the picture display parameters are the picture display parameters of the target image.
Optionally, the generating subunit may specifically be configured to: input the frame of target image into a pre-trained image synthesis model to obtain a composite image output by the image synthesis model.
Optionally, the first processing unit may specifically be configured to: processing the frame of target image through a pre-trained image display parameter conversion model to obtain target image display parameters;
the conversion module 52 may be specifically configured to: and processing the target picture display parameters and the video frames which are not output through the picture display parameter conversion model to obtain the target video frames.
In an optional embodiment, the conversion module 52 may specifically be configured to: and inputting the target picture display parameters and the video frame into a pre-trained picture display parameter conversion model to obtain a target video frame.
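The patent performs this conversion with a trained picture display parameter conversion model. As a hedged, model-free stand-in, the sketch below applies two assumed parameters (a contrast gain and a brightness offset) directly to a frame's pixels, to show the kind of transformation the conversion module produces:

```python
def apply_display_params(frame, gain=1.0, offset=0.0):
    """Illustrative frame conversion: scale then shift each pixel,
    clamping the result to the 8-bit range [0, 255].

    gain and offset stand in for target picture display parameters;
    the real conversion is learned, not hand-coded like this.
    """
    return [
        [max(0, min(255, round(px * gain + offset))) for px in row]
        for row in frame
    ]
```

Applying the same parameters to every video frame that is not yet output yields target video frames whose picture display effect is adapted to those parameters.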
Corresponding to the method embodiment, the present application further provides an electronic device, a schematic structural diagram of which is shown in fig. 6, and the electronic device may include:
a memory 61 for storing at least one set of instructions;
a processor 62 for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image;
processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters;
and outputting the target video frame.
According to the electronic device provided by the embodiment of the present application, the target picture display parameters are obtained by processing at least one frame of image. When a video is played, the picture display parameters of each video frame are adjusted according to the target picture display parameters, so that the picture display effect of the video frame is adapted to the target picture display parameters, and the target video frame is output. Based on the scheme of the present application, picture display parameters can be obtained from an image with a specific picture display effect (for example, an image whose picture display effect the user likes), and the video frame to be output can be processed based on these parameters, so that the picture display effect of the output video is the same as or similar to the specific picture display effect. This improves the intelligence of video processing and the viewing experience.
Optionally, when processing at least one frame of image to obtain the target picture display parameters, the processor 62 may specifically be configured to:
processing a frame of target image to obtain image display parameters of the target image, wherein the image display parameters of the target image are the target image display parameters;
the one frame of target image is an image designated by a user, or one frame among a plurality of frames of images most recently output.
Optionally, when processing at least one frame of image to obtain the target picture display parameters, the processor 62 may specifically be configured to:
respectively processing each frame of target image in the multiple frames of target images to obtain the picture display parameters of each frame of target image; the multi-frame target image is an image in different image sources output within a preset historical time;
and determining target picture display parameters from the picture display parameters of each frame of target image.
Optionally, when determining the target picture display parameters from the picture display parameters of each frame of target image, the processor 62 may specifically be configured to:
clustering the picture display parameters of the multi-frame target images according to the images;
and determining target picture display parameters according to the clustering result, wherein the number of picture display parameters contained in the clustering category to which the target picture display parameters belong is more than that contained in other clustering categories.
Optionally, for each frame of target image, when processing the target image to obtain the picture display parameters of the target image, the processor 62 may specifically be configured to:
generating a composite image according to the target image;
and acquiring the picture display parameters of the synthesized image, wherein the picture display parameters of the synthesized image are the picture display parameters of the target image.
Optionally, when the processor 62 generates the composite image according to the target image, it may specifically be configured to:
and inputting the target image into a pre-trained image synthesis model to obtain a composite image output by the image synthesis model.
Optionally, when the processor 62 processes the video frame that is not output according to the target picture display parameter to obtain the target video frame, the processing may be specifically configured to:
and inputting the target picture display parameters and the video frame into a pre-trained picture display parameter conversion model to obtain a target video frame.
Optionally, when obtaining the target picture display parameters and processing the video frame that is not output according to the target picture display parameters to obtain the target video frame, the processor 62 may specifically be configured to:
processing the frame of target image through a pre-trained image display parameter conversion model to obtain target image display parameters;
and processing the target picture display parameters and the video frames which are not output through the picture display parameter conversion model to obtain the target video frames.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that the technical problems can be solved by combining features of the embodiments recited in the claims.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A video processing method, comprising:
obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image;
processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters;
and outputting the target video frame.
2. The method of claim 1, wherein processing at least one frame of image to obtain the target frame display parameter comprises:
processing a frame of target image to obtain image display parameters of the target image, wherein the image display parameters of the target image are the target image display parameters;
the one frame of target image is an image designated by a user, or one frame among a plurality of frames of images most recently output.
3. The method of claim 1, wherein processing at least one frame of image to obtain the target frame display parameter comprises:
respectively processing each frame of target image in the multiple frames of target images to obtain the picture display parameters of each frame of target image; the multi-frame target image is an image in different image sources output within a preset historical time;
target picture display parameters are determined from picture display parameters of the target images of the frames.
4. The method of claim 3, wherein determining the target picture display parameters from the picture display parameters of the target images of the frames comprises:
clustering the picture display parameters of the multi-frame target images according to the images;
and determining target picture display parameters according to the clustering result, wherein the number of picture display parameters contained in the clustering category to which the target picture display parameters belong is more than that contained in other clustering categories.
5. The method according to any one of claims 2 to 4, wherein for each frame of the target image, processing the target image to obtain the frame display parameters of the target image comprises:
generating a composite image according to the target image;
and acquiring the picture display parameters of the synthesized image, wherein the picture display parameters of the synthesized image are the picture display parameters of the target image.
6. The method of claim 5, the generating a composite image from the target image, comprising:
and inputting the target image into a pre-trained image synthesis model to obtain a synthesis image output by the image synthesis model.
7. The method according to claim 1, wherein the processing the video frame that is not output according to the target picture display parameter to obtain the target video frame comprises:
and inputting the target picture display parameters and the video frame into a pre-trained picture display parameter conversion model to obtain a target video frame.
8. The method according to claim 2, wherein the obtaining target picture display parameters and processing the video frame that is not output according to the target picture display parameters to obtain the target video frame comprises:
processing the frame of target image through a pre-trained image display parameter conversion model to obtain target image display parameters;
and processing the target picture display parameters and the video frames which are not output through the picture display parameter conversion model to obtain the target video frames.
9. A video processing apparatus comprising:
an obtaining module, configured to obtain a target picture display parameter, where the target picture display parameter is obtained by processing at least one frame of image;
the conversion module is used for processing the video frames which are not output according to the target picture display parameters to obtain target video frames, and the picture display effect of the target video frames is adapted to the target picture display parameters;
and the output module is used for outputting the target video frame.
10. An electronic device, comprising:
a memory for storing at least one set of instructions;
a processor for invoking and executing the set of instructions in the memory, by executing the set of instructions:
obtaining target picture display parameters, wherein the target picture display parameters are obtained by processing at least one frame of image;
processing the video frame which is not output according to the target picture display parameters to obtain a target video frame, wherein the picture display effect of the target video frame is adapted to the target picture display parameters;
and outputting the target video frame.
CN201911199988.8A 2019-11-29 2019-11-29 Video processing method and device and electronic equipment Active CN110913263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911199988.8A CN110913263B (en) 2019-11-29 2019-11-29 Video processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110913263A true CN110913263A (en) 2020-03-24
CN110913263B CN110913263B (en) 2021-05-18

Family

ID=69820634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911199988.8A Active CN110913263B (en) 2019-11-29 2019-11-29 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110913263B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440635A (en) * 2013-09-17 2013-12-11 厦门美图网科技有限公司 Learning-based contrast limited adaptive histogram equalization method
CN103761536A (en) * 2014-01-28 2014-04-30 五邑大学 Human face beautifying method based on non-supervision optimal beauty features and depth evaluation model
CN104111941A (en) * 2013-04-18 2014-10-22 阿里巴巴集团控股有限公司 Method and equipment for information display
US20140375758A1 (en) * 2013-06-25 2014-12-25 Vonage Network Llc Method and apparatus for dynamically adjusting aspect ratio of images during a video call
CN104853230A (en) * 2015-05-14 2015-08-19 无锡天脉聚源传媒科技有限公司 Hot-spot video push method and apparatus
CN105404629A (en) * 2014-09-12 2016-03-16 华为技术有限公司 Method and device for determining map interface
CN105763922A (en) * 2016-04-28 2016-07-13 徐文波 Video processing method and apparatus
CN106412671A (en) * 2016-09-29 2017-02-15 维沃移动通信有限公司 Video playing method and mobile terminal
CN106547798A (en) * 2015-09-23 2017-03-29 阿里巴巴集团控股有限公司 Information-pushing method and device
CN106937173A (en) * 2015-12-31 2017-07-07 北京国双科技有限公司 Video broadcasting method and device
CN106952239A (en) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 image generating method and device
CN107741898A (en) * 2017-10-13 2018-02-27 杭州浮云网络科技有限公司 A kind of game player based on big data operates preference analysis method and system
CN108184169A (en) * 2017-12-28 2018-06-19 广东欧珀移动通信有限公司 Video broadcasting method, device, storage medium and electronic equipment
CN108495107A (en) * 2018-01-29 2018-09-04 北京奇虎科技有限公司 A kind of method for processing video frequency and device
CN109492182A (en) * 2018-11-05 2019-03-19 珠海格力电器股份有限公司 Page generation method, device, page display system and storage medium
CN110007816A (en) * 2019-02-26 2019-07-12 努比亚技术有限公司 A kind of display area determines method, terminal and computer readable storage medium
CN110083430A (en) * 2019-04-30 2019-08-02 成都市映潮科技股份有限公司 A kind of system theme color replacing options, device and medium
CN110914834A (en) * 2017-08-01 2020-03-24 3M创新有限公司 Neural style migration for image modification and recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant