CN111935418A - Video processing method and device, electronic equipment and storage medium - Google Patents

Video processing method and device, electronic equipment and storage medium

Info

Publication number
CN111935418A
CN111935418A
Authority
CN
China
Prior art keywords
target
color
video frame
video
image area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010832512.XA
Other languages
Chinese (zh)
Other versions
CN111935418B (en)
Inventor
曹恩丹
李治中
吴磊
王元吉
马立萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202010832512.XA
Publication of CN111935418A
Application granted
Publication of CN111935418B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Color Image Communication Systems (AREA)

Abstract

The present disclosure relates to a video processing method and apparatus, an electronic device, and a storage medium. The method includes: receiving a first user instruction, where the first user instruction instructs color grading of a target video; and, in response to the first user instruction, performing color grading on a target image area of each of at least one video frame included in the target video by using a reference image area, to obtain a color grading result of the target video. In the embodiments of the present disclosure, one-click color grading can be performed according to the reference image area, which reduces user operations and improves grading efficiency.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Video color grading adjusts the color of each image frame in a video. During grading, a user often needs to manually adjust the color of each video clip in video editing software. When there are many material clips, this manual approach is cumbersome to operate and grading efficiency is low. Moreover, even after repeated adjustment, the overall effect of the final graded video may still look unnatural where different clips join.
Some video editing software provides a number of preset filter styles for the user to choose from, but the number of filters is usually limited and cannot cover all the grading styles a user may want; the operation remains cumbersome, and grading efficiency still needs to be improved.
Disclosure of Invention
The present disclosure provides a video processing technical solution.
According to an aspect of the present disclosure, there is provided a video processing method, including: receiving a first user instruction, where the first user instruction instructs color grading of a target video; and, in response to the first user instruction, performing color grading on a target image area of each of at least one video frame included in the target video by using a reference image area, to obtain a color grading result of the target video.
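The claimed flow, receiving the instruction and then adjusting every frame's target area using a reference image area, can be sketched in pure Python. This is a minimal illustration under assumed details: pixels are (r, g, b) tuples, the "reference color parameter" is a per-channel mean, and the adjustment is a mean shift. The patent does not mandate any of these choices.

```python
from statistics import mean

def analyze(region):
    """Per-channel mean colour of a region, given as a list of (r, g, b) pixels."""
    return tuple(mean(p[c] for p in region) for c in range(3))

def grade_frame(frame, ref_params):
    """Shift every pixel of a frame's target area (here, the whole frame)
    toward the reference mean, clamping results to the valid 0-255 range."""
    cur = analyze(frame)
    delta = [r - c for r, c in zip(ref_params, cur)]
    return [tuple(min(255, max(0, round(p[c] + delta[c]))) for c in range(3))
            for p in frame]

def grade_video(frames, reference_region):
    """One-click grading: analyse the reference region once, then apply the
    same adjustment to every frame of the target video."""
    ref_params = analyze(reference_region)
    return [grade_frame(f, ref_params) for f in frames]
```

The single `analyze` call on the reference region is what makes the operation "one-click": the user supplies a reference once, and every frame is adjusted without further interaction.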
In one possible implementation, the method further includes: receiving a second user instruction, where the second user instruction indicates the reference image region.
In one possible implementation, the reference image region includes at least a portion of a first image input by a user, and performing color grading on the target image area of each of the at least one video frame by using the reference image area includes: performing color grading on the entire area of each of at least one first video frame included in the target video by using at least a portion of the first image, to obtain a color grading result of the at least one first video frame.
In one possible implementation, the reference image region includes a background region of each of at least one second video frame included in the target video, and performing color grading on the target image area of each of the at least one video frame by using the reference image area includes: performing color grading on the foreground region of each second video frame by using the background region of that second video frame, to obtain a color grading result of the at least one second video frame.
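One way to realize this implementation, sketched under illustrative assumptions (a hard foreground mask, a mean-based adjustment, and an assumed `strength` blending weight not specified by the patent), is to pull each foreground pixel partway toward the mean color of the frame's own background:

```python
def grade_foreground_by_background(frame, foreground_mask, strength=0.5):
    """Grade the foreground of a frame using that frame's own background
    region as the reference, so the two regions blend naturally.
    `frame` is a list of (r, g, b) pixels; `foreground_mask` holds True for
    foreground pixels. `strength` is an assumed blending weight."""
    background = [p for p, fg in zip(frame, foreground_mask) if not fg]
    bg_mean = [sum(p[c] for p in background) / len(background) for c in range(3)]
    return [
        tuple(round(p[c] + strength * (bg_mean[c] - p[c])) for c in range(3))
        if fg else p
        for p, fg in zip(frame, foreground_mask)
    ]
```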
In one possible implementation, the reference image region includes at least a portion of a target background image, and performing color grading on the target image area of each of the at least one video frame by using the reference image area includes: performing color grading on the foreground region of each of at least one third video frame in the target video by using at least a portion of the target background image, to obtain a color grading result of the foreground region of the at least one third video frame.
In one possible implementation, receiving the first user instruction includes: receiving a video background replacement instruction, where the video background replacement instruction instructs background replacement of the target video. The method further includes: compositing the color grading result of the foreground region of the at least one third video frame with the target background image, to obtain a background replacement result of the target video.
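A hypothetical sketch of the compositing step: with a hard foreground matte, background replacement reduces to selecting, per pixel, either the graded foreground or the target background. A production system would use a soft alpha matte instead of this boolean mask.

```python
def replace_background(graded_foreground, target_background, mask):
    """Composite the graded foreground over the target background image.
    All three arguments are equal-length per-pixel lists; `mask` holds True
    for foreground pixels and False for background pixels (a hard matte,
    chosen here for simplicity)."""
    return [f if m else b
            for f, b, m in zip(graded_foreground, target_background, mask)]
```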
In one possible implementation, the method further includes: detecting whether a target object exists in the target image area of the at least one video frame. Performing color grading on the target image area of each of the at least one video frame by using the reference image area includes: in a case where the target object is detected in the target image area of a fourth video frame, performing color grading on a second region of the target image area of the fourth video frame, excluding a first region where the target object is located, to obtain a color grading result of the fourth video frame.
In one possible implementation, performing color grading on the target image area of each of the at least one video frame by using the reference image area further includes: performing color grading on the first region where the target object is located in the fourth video frame, where the grading amplitude applied to the first region is smaller than the grading amplitude applied to the second region.
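The reduced amplitude for the first region can be illustrated as a per-pixel attenuation of the color shift. `person_strength` is an assumed factor: the text only requires the first region's amplitude to be smaller than the second's, not any particular value.

```python
def grade_with_protection(frame, person_mask, delta, person_strength=0.3):
    """Apply the full per-channel colour shift `delta` to the second region,
    but only an attenuated fraction of it to the first region, where the
    target object (e.g. a person) was detected, so skin tones drift less."""
    graded = []
    for pixel, is_person in zip(frame, person_mask):
        k = person_strength if is_person else 1.0
        graded.append(tuple(min(255, max(0, round(p + k * d)))
                            for p, d in zip(pixel, delta)))
    return graded
```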
In one possible implementation, the target object includes a person.
In one possible implementation, performing color grading on the target image area of each of the at least one video frame by using the reference image area includes: performing color analysis on the reference image area to obtain a reference color parameter; and performing color grading on the target image area of each of the at least one video frame by using the reference color parameter, to obtain a color grading result of the target video.
In one possible implementation, the method further includes: performing color analysis on the target image area of each of the at least one video frame to obtain a second color parameter of the target image area of each video frame. Performing color grading on the target image area of each video frame by using the reference color parameter includes: adjusting the second color parameter of the target image area of each video frame according to the reference color parameter, to obtain a third color parameter of the target image area of each video frame.
In one possible implementation, the reference color parameter includes a color parameter of each of a plurality of reference pixels included in the reference image region, and the second color parameter includes a color parameter of each of a plurality of target pixels included in the target image area. Adjusting the second color parameter according to the reference color parameter includes: determining, according to the color parameters of the plurality of reference pixels and the color parameters of the plurality of target pixels, an adjustment amplitude for the color parameter of each target pixel; and adjusting the color parameter of each target pixel according to its adjustment amplitude, to obtain the third color parameter.
In one possible implementation, adjusting the second color parameter according to the reference color parameter includes: adjusting the color parameter of each target pixel according to the difference between a first reference value of the color parameters of the plurality of reference pixels and a second reference value of the color parameters of the plurality of target pixels, to obtain the third color parameter; or, according to a correspondence between the plurality of reference pixels in the reference image region and the plurality of target pixels in the target image area, adjusting the color parameter of each corresponding target pixel by using the color parameter of the corresponding reference pixel, to obtain the third color parameter.
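The second strategy, adjusting each target pixel using its corresponding reference pixel, might look like the sketch below, assuming a simple positional one-to-one correspondence and an illustrative `strength` blending weight (neither is specified by the patent):

```python
def adjust_by_correspondence(target_pixels, reference_pixels, strength=0.5):
    """Pull each target pixel's colour toward its corresponding reference
    pixel. Pixels are (r, g, b) tuples; the correspondence is positional
    here, though it could equally come from a spatial mapping."""
    assert len(target_pixels) == len(reference_pixels)
    return [tuple(round(t[c] + strength * (r[c] - t[c])) for c in range(3))
            for t, r in zip(target_pixels, reference_pixels)]
```

With `strength=1.0` this copies the reference colors outright; smaller values preserve more of the original target region.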
In a possible implementation manner, the reference color parameter includes a color parameter of each of a plurality of pixels included in the reference image region.
In one possible implementation, the reference color parameter includes at least one of: hue, saturation, brightness, color temperature, contrast, white balance, and RGB values.
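As one concrete example of deriving such parameters, Python's standard-library `colorsys` module converts RGB to HSV, from which the mean hue, saturation, and brightness of a reference region follow. This picks three of the listed parameters for illustration; the others (color temperature, contrast, and so on) could be extracted analogously.

```python
import colorsys
from statistics import mean

def reference_color_parameters(region):
    """Derive a reference colour parameter from a region given as a list of
    (r, g, b) pixels in 0-255: the mean hue, saturation, and brightness.
    colorsys works on 0-1 floats, hence the /255 normalisation."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in region]
    return {
        "hue": mean(h for h, _, _ in hsv),
        "saturation": mean(s for _, s, _ in hsv),
        "brightness": mean(v for _, _, v in hsv),
    }
```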
According to an aspect of the present disclosure, there is provided a video processing apparatus, including: a first instruction receiving unit, configured to receive a first user instruction, where the first user instruction instructs color grading of a target video; and a color grading unit, configured to, in response to the first user instruction, perform color grading on a target image area of each of at least one video frame included in the target video by using a reference image area, to obtain a color grading result of the target video.
In one possible implementation, the apparatus further includes: a second instruction receiving unit, configured to receive a second user instruction, where the second user instruction indicates the reference image area.
In one possible implementation, the reference image region includes at least a portion of a first image input by a user, and the color grading unit is configured to perform color grading on the entire area of each of at least one first video frame included in the target video by using at least a portion of the first image, to obtain a color grading result of the at least one first video frame.
In one possible implementation, the reference image region includes a background region of each of at least one second video frame included in the target video, and the color grading unit is configured to perform color grading on the foreground region of each second video frame by using the background region of that second video frame, to obtain a color grading result of the at least one second video frame.
In one possible implementation, the reference image region includes at least a portion of a target background image, and the color grading unit is configured to perform color grading on the foreground region of each of at least one third video frame in the target video by using at least a portion of the target background image, to obtain a color grading result of the foreground region of the at least one third video frame.
In one possible implementation, the first instruction receiving unit is configured to receive a video background replacement instruction, where the video background replacement instruction instructs background replacement of the target video. The apparatus further includes: a compositing unit, configured to composite the color grading result of the foreground region of the at least one third video frame with the target background image, to obtain a background replacement result of the target video.
In one possible implementation, the apparatus further includes: a detection unit, configured to detect whether a target object exists in the target image area of the at least one video frame. The color grading unit is configured to, in a case where the target object is detected in the target image area of a fourth video frame, perform color grading on a second region of the target image area of the fourth video frame, excluding a first region where the target object is located, to obtain a color grading result of the fourth video frame.
In one possible implementation, the color grading unit is configured to perform color grading on the first region where the target object is located in the fourth video frame, where the grading amplitude applied to the first region is smaller than the grading amplitude applied to the second region.
In one possible implementation, the target object includes a person.
In one possible implementation, the color grading unit is configured to perform color analysis on the reference image region to obtain a reference color parameter, and to perform color grading on the target image area of each of the at least one video frame by using the reference color parameter, to obtain a color grading result of the target video.
In one possible implementation, the apparatus further includes: an analysis unit, configured to perform color analysis on the target image area of each of the at least one video frame to obtain a second color parameter of the target image area of each video frame. The color grading unit is configured to adjust the second color parameter of the target image area of each video frame according to the reference color parameter, to obtain a third color parameter of the target image area of each video frame.
In one possible implementation, the reference color parameter includes a color parameter of each of a plurality of reference pixels included in the reference image region, and the second color parameter includes a color parameter of each of a plurality of target pixels included in the target image area. The color grading unit is configured to determine, according to the color parameters of the plurality of reference pixels and the color parameters of the plurality of target pixels, an adjustment amplitude for the color parameter of each target pixel, and to adjust the color parameter of each target pixel according to its adjustment amplitude, to obtain the third color parameter.
In one possible implementation, the color grading unit is configured to adjust the color parameter of each target pixel according to the difference between a first reference value of the color parameters of the plurality of reference pixels and a second reference value of the color parameters of the plurality of target pixels, to obtain the third color parameter; or, according to a correspondence between the plurality of reference pixels in the reference image region and the plurality of target pixels in the target image area, to adjust the color parameter of each corresponding target pixel by using the color parameter of the corresponding reference pixel, to obtain the third color parameter.
In one possible implementation, the reference color parameter includes a color parameter of each of a plurality of pixels included in the reference image region.
In one possible implementation, the reference color parameter includes at least one of: hue, saturation, brightness, color temperature, contrast, white balance, and RGB values.
According to an aspect of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, in response to receiving the first user instruction, a reference image area is used to perform color grading on a target image area of each of at least one video frame included in a target video, to obtain a color grading result of the target video. One-click color grading of a video can thus be achieved, user operation is simple, and video grading efficiency is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flowchart of a video processing method according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of another video processing method according to an embodiment of the present disclosure;
FIG. 3 shows a flowchart of yet another video processing method according to an embodiment of the present disclosure;
FIG. 4 shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one of" herein means any one of a plurality of items or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow chart of a video processing method according to an embodiment of the present disclosure, as shown in fig. 1, the video processing method includes:
In step S11, a first user instruction is received, where the first user instruction instructs color grading of a target video.
The target video may be any video that the user wants to grade; it may be in any format and include one or more video frames, which is not limited by the present disclosure.
The user instructs the terminal device or the server to grade the target video by issuing the first user instruction. Optionally, the user may issue the first user instruction by operating a color grading control or a background replacement control in an application on the terminal device. For example, the user may perform a specified operation on a specific control of the application's operation interface to issue the first user instruction; the control may be displayed on the interface as a touch button, and the specified operation may be, for example, a click or another operation. After the specified operation is detected, the application is triggered to grade the target video. In some embodiments, the user may also issue the first user instruction through a network connection (e.g., a connection to the server via a specific URL link) or in other ways, which is not limited by the embodiments of the present disclosure.
In step S12, in response to the first user instruction, color grading is performed on a target image area of each of at least one video frame included in the target video by using a reference image area, to obtain a color grading result of the target video.
The reference image area is the image area used as the grading reference in the video grading process, i.e., the basis on which the target image area is graded. The reference image area may take several forms, which are described in the possible implementations below and not repeated here.
In the embodiments of the present disclosure, color grading is performed on the target image area of each of at least one video frame included in the target video. The at least one video frame may be all video frames of the target video, i.e., the entire target video is graded, or it may be a subset of the video frames of the target video.
The target image area may be the entire image area of a video frame or a part of it. This is described in detail below in conjunction with possible implementations of the present disclosure and is not repeated here.
After the target image area in one or more video frames of the target video is graded using the reference image area, the grading result of the target video may be that color parameters of the target video, such as hue, style, and brightness, are adapted to those of the reference image area.
In the embodiments of the present disclosure, after the first user instruction is received, the reference image area is used to grade the target image area of each of at least one video frame included in the target video, to obtain a color grading result of the target video. One-click grading of the video can thus be achieved, user operation is simple, and video grading efficiency is improved.
The video processing method provided by the present disclosure can be carried out in various ways: the reference image area it uses may be a reference image area indicated by the user, or an automatically selected reference image area.
In a possible implementation manner, the application may automatically select the reference image region, for example, a certain region in a video frame of the target video may be automatically selected as the reference image region, or any one of a plurality of preset reference images may be automatically selected as the reference image region, which is not limited by the present disclosure.
In one possible implementation, the method further includes: receiving a second user instruction, where the second user instruction indicates the reference image region.
The reference image region in the embodiments of the present disclosure may be specified by the user. For example, the user may specify all or part of a first image as the reference image region by inputting the first image, or may specify the background region of each of at least one second video frame included in the target video as the reference image region.
When the reference image area is the entire image area of the first image, the user can, after inputting the first image, directly perform a specified operation on a first control provided by the operation interface. This issues the second user instruction, indicating that the entire image area of the first image is the reference image area.
When the reference image area is a partial image area of the first image, the user inputs the first image and then selects an area within it; once the selection is complete, the selected image area is taken as the reference image area. The user can then perform the specified operation on the first control provided by the operation interface, issuing the second user instruction that designates the selected area of the first image as the reference image area.
When the reference image area is the background area of a video frame, the specified operation may be one performed by the user on a second control provided by the operation interface. After the specified operation is performed on the second control, the background area of each second video frame is used as the reference image area to grade the foreground area.
In the embodiment of the disclosure, the user can specify the reference image area used in the video toning process through the second user instruction, so as to meet the diversified toning requirements of the user.
In one possible implementation, the reference image region includes at least a portion of a first image input by a user; the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes: and performing color matching processing on the whole area of each first video frame in at least one first video frame included in the target video by using at least one part of the first image to obtain a color matching processing result of the at least one first video frame.
The user may input the first image by operating a control in the interface, for example, the first image may be input in the form of image import or image upload. After the user inputs the first image, the whole area of each first video frame in the at least one first video frame can be subjected to color matching processing by using a part or all of the first image as a reference image area.
In the embodiment of the disclosure, the reference image area is at least a part of the first image input by the user, so the user determines the reference image area simply by inputting the first image. This can meet the user's diversified requirements on color-matching style during color adjustment, and at the same time the user does not need to repeatedly browse and try out filters to adjust the color, so the user operation is simpler and the color-matching efficiency is higher.
In one possible implementation, the reference image region includes a background region of each of at least one second video frame included in the target video; the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes: and performing color matching processing on the foreground area in each second video frame by using the background area of each second video frame in the at least one second video frame to obtain a color matching processing result of the at least one second video frame.
In one possible implementation, the method further includes: and performing foreground and background segmentation on each second video frame in the at least one second video frame to obtain a background area and a foreground area of each second video frame.
The background region and the foreground region of the second video frame may be identified by an artificial intelligence technique, for example, may be determined by a foreground-background segmentation model, which may be a neural network model obtained by pre-training, and the specific training process is not described herein again.
In the process of segmenting the foreground and the background, the key frames in the second video frame can be input into a foreground and background segmentation model to be segmented, and foreground and background segmentation results of the key frames in the second video frame are obtained; for a non-key frame in the second video frame, foreground tracking processing may be performed on the non-key frame in the second video frame based on a foreground-background segmentation result of the key frame, so as to obtain a foreground-background segmentation result of the non-key frame in the second video frame.
The foreground tracking processing may be implemented by a target tracking technology in computer vision, for example, by a foreground tracking model, which may be a neural network model obtained by pre-training, and the specific training process is not described here.
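The keyframe/non-keyframe flow described above can be sketched as follows. This is an illustrative sketch only: the `segment_model` and `track_model` callables stand in for the pre-trained foreground-background segmentation and foreground-tracking models, and all names here are hypothetical, not part of the disclosed method.

```python
def segment_video_frames(frames, is_key_frame, segment_model, track_model):
    """Return one foreground/background segmentation result per frame."""
    results = []
    last_result = None
    for index, frame in enumerate(frames):
        if is_key_frame(index):
            # Key frames go through the full segmentation model.
            last_result = segment_model(frame)
        else:
            # Non-key frames reuse the previous result via the
            # cheaper foreground-tracking step.
            last_result = track_model(frame, last_result)
        results.append(last_result)
    return results

# Toy stand-ins: a "mask" is just a tag recording how it was produced.
segment_model = lambda frame: ("segmented", frame)
track_model = lambda frame, prev: ("tracked", frame)

masks = segment_video_frames(
    frames=["f0", "f1", "f2", "f3"],
    is_key_frame=lambda i: i % 3 == 0,  # every 3rd frame is a key frame
    segment_model=segment_model,
    track_model=track_model,
)
print(masks[0], masks[1])  # ('segmented', 'f0') ('tracked', 'f1')
```

In practice the tracking step would propagate the key frame's mask using motion cues, which is why it can be cheaper than running the full segmentation model on every frame.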
After the foreground and background segmentation is performed on the second video frame, the background area in the second video frame can be used for performing color matching processing on the foreground area.
In the embodiment of the disclosure, the background area of the second video frame is used for performing color mixing processing on the foreground area of the second video frame, so that the color of the picture of the second video frame is more uniform on the whole, the overall visual effect of the target video is improved, and the user experience is better.
In one possible implementation, the reference image region includes at least a portion of a target background image;
the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes: and performing color matching processing on the foreground area of each third video frame in at least one third video frame in the target video by using at least one part of the target background image to obtain a color matching processing result of the at least one third video frame.
The target background image includes at least one image, which may be a single image, or at least one video frame in the background video. The target background image may be input by the user or may be in a preset material library.
In the embodiment of the disclosure, color matching processing in a process of changing the background of a video can be realized, and in the process of changing the background of the video, the foreground of at least one third video frame in a target video and a target background image are synthesized. The foreground in the third video frame may be determined by a front-background segmentation process, and after a foreground region of each third video frame in at least one third video frame in the target video is determined, the determined foreground region may be subjected to a color matching process.
After the foreground region of at least one third video frame is determined, at least one part of the target background image can be used for conducting color matching processing on the foreground region of at least one third video frame, and a color matching processing result of the foreground region of at least one third video frame is obtained.
After the color matching processing is carried out on the foreground area of the at least one third video frame by using the target background image, the color matching processing result of the foreground area of the at least one third video frame and the target background image can be synthesized, and the background replacement result of the target video is obtained.
In the embodiment of the disclosure, the target background image is used for performing color matching processing on the foreground area of at least one third video frame in the target video, so that the overall visual effect of the video after background replacement is more coordinated, and the user experience is better.
In one possible implementation manner, the receiving a first user instruction, where the first user instruction is used to instruct to color-mix a target video, includes: receiving a video background replacement instruction, wherein the video background replacement instruction is used for indicating background replacement of the target video; the method further comprises the following steps: and synthesizing the color matching processing result of the foreground area of the at least one third video frame and the target background image to obtain a background replacement result of the target video.
In the process of performing the synthesizing process, a result of the color matching process on the foreground region of the at least one third video frame may be used as a foreground image, and the target background image may be used as a background image to perform the fusing process. Optionally, in the process of performing fusion processing, the foreground image may be extracted from the original video frame and image-synthesized with the background image to obtain a background replacement result of the target video.
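As a rough illustration of the synthesis step, the sketch below hard-masks toned foreground pixels over the target background, pixel by pixel. The one-dimensional pixel lists and boolean mask are simplifying assumptions; real compositing would typically blend soft mask edges rather than switch hard per pixel.

```python
def composite(foreground, background, mask):
    """Take the foreground pixel where the mask is True, else the background."""
    return [fg if m else bg for fg, bg, m in zip(foreground, background, mask)]

toned_foreground = [10, 20, 30, 40]
target_background = [200, 201, 202, 203]
mask = [False, True, True, False]  # True = foreground pixel
print(composite(toned_foreground, target_background, mask))  # [200, 20, 30, 203]
```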
In the embodiment of the disclosure, in response to a received video background replacement instruction, a color matching result of a foreground region of at least one third video frame and a target background image are synthesized to obtain a background replacement result of a target video, so that one-key color matching in a video background replacement process can be realized, the operation of a user is simple, and the color matching efficiency of the video is improved.
In one possible implementation, the method further includes: detecting whether a target object exists in a target image area of the at least one video frame; the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes: and under the condition that the target object is detected in the target image area of the fourth video frame, performing color matching processing on a second area, except for the first area where the target object is located, in the target image area of the fourth video frame to obtain a color matching processing result of the fourth video frame.
The target object may be a preset object, for example, a person, a cat, a dog, a plant, a prop, and the like, and the process of detecting the target object in the target image area may be implemented according to the related technology of target detection, and the process of detecting the target object is not limited in the present disclosure.
When the target object is detected in the target image area of the fourth video frame, in order to preserve the realism of the target object's color as much as possible, the color matching process may be performed on the second area outside the first area where the target object is located, while the first area is left untouched; alternatively, the first area and the second area may each be toned separately.
In this embodiment, the performing, by using a reference image area, color matching on a target image area of each of at least one video frame included in the target video to obtain a color matching result of the target video further includes: and performing color matching processing on a first area where the target object is located in the fourth video frame, wherein the color matching amplitude of the color matching processing on the first area is smaller than the color matching amplitude of the color matching processing on the second area.
In the embodiment of the present disclosure, the first region and the second region may each be toned, and in order to preserve the realism of the target object's color as much as possible, the toning amplitude applied to the first region is smaller than that applied to the second region. For example, the toning amplitude of the color matching processing performed on the first area may be 50% of that performed on the second area.
In the case where the target object is not detected in the target image area of the fourth video frame, the target image area may be adjusted as a whole.
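A minimal sketch of this region-aware toning follows: pixels outside the detected object (the "second area") get the full adjustment, while pixels inside the object's "first area" get a reduced amplitude (here 50%, as in the example above), keeping the object's colors closer to reality. Scalar values stand in for full per-pixel color parameters; the 0.5 ratio and helper names are illustrative assumptions.

```python
def tone_frame(pixels, in_object_mask, delta, object_ratio=0.5):
    """Add `delta` to each pixel value, scaled down inside the object region."""
    toned = []
    for value, in_object in zip(pixels, in_object_mask):
        amplitude = delta * object_ratio if in_object else delta
        toned.append(value + amplitude)
    return toned

frame = [100, 120, 140, 160]
mask = [False, True, True, False]  # True = pixel belongs to the target object
print(tone_frame(frame, mask, delta=20.0))  # [120.0, 130.0, 150.0, 180.0]
```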
In the embodiment of the disclosure, by detecting the target object in the target image area and then performing color matching processing on the second area, which is outside the first area where the target object is located, in the target image area, the degree of reality of the color of the target object can be kept as much as possible, and the user experience is improved.
In the embodiment of the disclosure, the target object includes a person, so that by detecting the person in the target image area and then performing color matching processing on a second area outside the first area where the person is located, the realism of the person's skin color can be preserved as much as possible, improving the user experience.
In a possible implementation manner, after the color matching processing result is obtained, the color matching processing result of the target video may be displayed through the terminal device, and specifically, the color-matched target video may be displayed in the user interface through a display module of the terminal device.
The user can issue an adjustment instruction by operating an adjustment control in the terminal device application program; the adjustment instruction may be an instruction for instructing secondary color matching processing on the color matching result. The adjustment instruction may include an adjustment parameter of the secondary color matching process, and the user may select a specific adjustment parameter in the user interface of the terminal, for example, selecting to increase the brightness of the 10th frame of the color matching result by 10%. The adjustment instruction may apply to some or all of the toning results, which is not limited by this disclosure.
In addition, in the embodiment of the present disclosure, various adjustments may be performed based on the result of the toning process, for example, the adjustment instruction may also be an instruction for instructing adjustment of the size of the processed target video. The present disclosure is not limited to the specific adjustment manner.
After receiving the adjustment instruction, part or all of the color matching processing result can be adjusted based on the adjustment instruction. The adjustment may be performed according to an adjustment parameter carried in the adjustment instruction, for example, raising the brightness of the 10th frame of the color matching processing result by 10% based on the adjustment parameter in the instruction.
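A hypothetical sketch of applying such an adjustment instruction to the toning result is shown below: the instruction carries a frame index, a parameter name, and a relative change (e.g. "increase the brightness of the 10th frame by 10%"). The dict-based instruction format and helper names are illustrative assumptions, not part of the disclosed method.

```python
def apply_adjustment(toned_frames, instruction):
    """Adjust one frame of the toning result in place and return the frames."""
    index = instruction["frame"]
    if instruction["parameter"] == "brightness":
        factor = 1.0 + instruction["relative_change"]
        toned_frames[index] = [value * factor for value in toned_frames[index]]
    return toned_frames

frames = [[100, 200] for _ in range(11)]
result = apply_adjustment(
    frames, {"frame": 10, "parameter": "brightness", "relative_change": 0.10}
)
# Only the 10th frame is changed; all other frames keep their original values.
```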
In a possible implementation manner, the performing, by using a reference image area, color matching on a target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes: performing color analysis on the reference image area to obtain a reference color parameter; and performing color matching processing on a target image area of each video frame in at least one video frame in the target video by using the reference color parameter to obtain a color matching processing result of the target video.
A color parameter is a parameter that determines the visual effect of a color. In a computer, colors are represented by defining color parameters within a color space, which is also called a color model or color system and is usually represented as a three-dimensional model: three coordinate axes represent three parameters, and the position of a specific color within these coordinates describes that color. Different colors can thus be defined by different coordinate values of the color space.
A commonly used color space is the red-green-blue (RGB) color space; based on standards specified by the Commission Internationale de l'Eclairage (CIE) standard colorimetric system, the related art also includes color spaces such as HSL, LMS, CMYK, CIE YUV, HSB (HSV), and YCbCr. Color spaces take many forms, different color spaces can have different characteristics, and color parameters can be converted between different color spaces.
According to the definitions of the color parameters in different color spaces, the corresponding color parameters can be analyzed from the target image area, and the specific analysis mode can be realized by the related technology, which is not limited in the present disclosure.
When an image is stored in a computer, the color components of each pixel point under a default color space are stored. For most images in a computer the default is the RGB color space, which has three color components, red (R), green (G) and blue (B), each with a value range of 0-255. When the computer reads the image from the storage medium through digital image processing, it obtains the three-dimensional component of each pixel point in the default color space, that is, the color parameters of the image in the default color space.
In order to obtain color parameters of other color spaces outside the default color space of the image, the color parameters may be converted based on the default color space to obtain color parameters of other color spaces, and the specific conversion process may be implemented based on related technologies, which is not described herein.
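As a minimal sketch of such a conversion, Python's standard `colorsys` module converts RGB components to HSV (components are first scaled to the 0-1 range, since the image's default components are 0-255). Production code would typically use an image-processing library instead; this only illustrates that parameters such as hue, saturation, and brightness can be derived from the stored RGB values.

```python
import colorsys

def rgb_to_hsv_8bit(r, g, b):
    """Convert 0-255 RGB components to (hue, saturation, value), each in 0-1."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

h, s, v = rgb_to_hsv_8bit(255, 0, 0)  # pure red
print(h, s, v)  # 0.0 1.0 1.0
```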
The resolved reference color parameters may include at least one of:
hue, saturation, brightness, hue, color temperature, contrast, white balance, RGB values.
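The color analysis that produces such reference parameters can be sketched as follows, here extracting only brightness statistics from a region's pixels. Brightness is computed as the Rec. 601 luma of each RGB pixel; the returned dict of max/min/mean values is an illustrative subset of the parameters listed above, and the function name is a hypothetical choice.

```python
def analyze_region(pixels):
    """Compute brightness statistics over a list of (r, g, b) pixels."""
    # Rec. 601 luma approximates perceived brightness from RGB components.
    lumas = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return {
        "brightness_max": max(lumas),
        "brightness_min": min(lumas),
        "brightness_mean": sum(lumas) / len(lumas),
    }

region = [(255, 255, 255), (0, 0, 0), (255, 0, 0)]  # white, black, red
params = analyze_region(region)
```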
After the reference color parameters are analyzed, the reference color parameters can be utilized to perform color matching processing on the target image area of each video frame in at least one video frame in the target video, and a color matching processing result of the target video is obtained.
In the embodiment of the disclosure, the reference color parameter is obtained by performing color analysis on the reference image region, so that the color information of the reference image region can be accurately obtained, and then the reference color parameter is used for performing color matching processing on the target video, so that the accuracy of the color matching processing result of the target video is improved.
In one possible implementation, the method further includes: performing color analysis on a target image area of each video frame in the at least one video frame to obtain a second color parameter of the target image area of each video frame; performing color matching processing on a target image area of each video frame in at least one video frame in the target video by using the reference color parameter to obtain a color matching processing result of the target video, including: and adjusting the second color parameter of the target image area of each video frame in the at least one video frame according to the reference color parameter to obtain a third color parameter of the target image area of each video frame.
The color analysis of the target image region may be performed in the same manner as the color analysis of the reference image region, and may be specifically implemented by a related technology of color analysis, which is not described in detail in this disclosure.
In the process of adjusting the second color parameter by using the reference color parameter, the second color parameter is brought closer to the reference color parameter, so that the visual effect of the target image area approaches that of the reference image area. The adjustment may be performed in various manners; one or more specific implementations are provided hereinafter in the present disclosure.
In the embodiment of the present disclosure, the second color parameter of the target image area is adjusted by using the reference color parameter to adjust the target image area, and since the second color parameter of the target image area is directly adjusted in the adjustment process, the accuracy of the obtained color matching processing result of the target video is higher.
In a possible implementation manner, the reference color parameter includes a color parameter of each of a plurality of pixels included in the reference image region.
The reference image region may include a plurality of pixel points, and the reference color parameter of the reference image region may include a color parameter of each pixel point of the plurality of pixel points.
In the embodiment of the disclosure, since the reference color parameter includes the color parameter of each of the plurality of pixel points included in the reference image region, the target video is toned according to the overall visual effect of the reference image region, so that the visual effect of the target image region approaches that of the reference image region as a whole, improving the accuracy of toning with the reference image region.
The following describes an exemplary process of adjusting the second color parameter by using the reference color parameter in conjunction with various possible implementations provided by the present disclosure.
In a possible implementation manner, the reference color parameter includes a color parameter of each reference pixel point of a plurality of reference pixel points included in the reference image region, and the second color parameter includes a color parameter of each target pixel point of a plurality of target pixel points included in the target image region;
the adjusting, according to the reference color parameter, the second color parameter of the target image area of each video frame in the at least one video frame to obtain a third color parameter of the target image area of each video frame includes: determining an adjustment range for adjusting the color parameter of each target pixel point in the plurality of target pixel points according to the color parameters of the plurality of reference pixel points included in the reference color parameter and the color parameters of the plurality of target pixel points included in the second color parameter; and adjusting the color parameter of each target pixel point according to the adjustment amplitude of each target pixel point in the plurality of target pixel points to obtain the third color parameter.
Because the color parameters of the plurality of reference pixel points and of the plurality of target pixel points obtained through color analysis are determinate, comparing the two sets yields their difference, which reflects the adjustment amplitude of the color parameter of each target pixel point: the larger the difference, the larger the adjustment amplitude; the smaller the difference, the smaller the adjustment amplitude. For specific ways of determining the adjustment amplitude, refer to the possible implementation manners provided later in the present disclosure.
The color parameters of the plurality of reference pixels and the color parameters of the plurality of target pixels are compared, which may be based on a reference value in the color parameters, or may also be based on a correspondence between the plurality of reference pixels and the plurality of target pixels.
In a possible implementation manner, in the process of performing color matching processing on the foreground region of at least one second video frame and/or at least one third video frame, the adjustment amplitude of the color parameter of a target pixel point in the foreground region is inversely related to a first distance between that pixel point and the foreground region boundary. That is, the toning amplitude is smaller for target pixel points far from the foreground boundary and larger for those close to it, realizing a gradual change of toning amplitude: the adjustment is large at the foreground boundary and small in the foreground interior, which improves the visual effect of the toning result and gives a better user experience.
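The gradual amplitude above can be sketched as a distance-dependent weight: the full adjustment is applied at the foreground boundary and a progressively smaller one toward the interior. The `1 / (1 + distance)` falloff is an illustrative choice, not specified by the disclosure.

```python
def boundary_weight(distance_to_boundary):
    """Weight that decreases as a pixel lies deeper inside the foreground."""
    return 1.0 / (1.0 + distance_to_boundary)

def tone_foreground(pixels, distances, delta):
    """Apply `delta`, scaled by each pixel's distance-to-boundary weight."""
    return [value + delta * boundary_weight(d)
            for value, d in zip(pixels, distances)]

pixels = [100, 100, 100]
distances = [0, 1, 4]  # pixels progressively deeper inside the foreground
print(tone_foreground(pixels, distances, delta=20))  # [120.0, 110.0, 104.0]
```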
After the adjustment range of each target pixel point in the plurality of target pixel points is obtained, the color parameter of each target pixel point can be adjusted according to the adjustment range, and a third color parameter is obtained.
In the embodiment of the disclosure, the adjustment range for adjusting the color parameter of each target pixel point in the plurality of target pixel points is determined according to the color parameters of the plurality of reference pixel points included in the reference color parameter and the color parameters of the plurality of target pixel points included in the second color parameter, so that the difference between the target image area and the reference image area can be accurately obtained, and then the color parameter of each target pixel point is adjusted according to the adjustment range, thereby realizing the color adjustment of the image at the pixel level of the target image area, and improving the accuracy of color adjustment of the target image area.
In a possible implementation manner, the adjusting, according to the reference color parameter, the second color parameter of the target image area of each of the at least one video frame to obtain the third color parameter of the target image area of each of the at least one video frame includes:
adjusting the color parameter of each target pixel point in the plurality of target pixel points according to the difference between a first reference value of the color parameters of the plurality of reference pixel points included in the reference color parameter and a second reference value of the color parameters of the plurality of target pixel points included in the second color parameter, so as to obtain a third color parameter;
in the adjusting of the second color parameter by using the reference color parameter, the adjusting may be performed based on a reference value, and in some embodiments, the reference value may include at least one of: a maximum value of the color parameters, a minimum value of the color parameters, an average value of the color parameters. For example, the reference value may be a luminance maximum value in the image parameter, and/or a luminance minimum value in the image parameter, and/or a luminance average value in the image parameter.
For example, a difference between a first reference value in the reference color parameter and a second reference value in the second color parameter may be obtained. It should be noted that the two reference values should be values of the same color parameter, for example, both are brightness maximum values in the respective images.
Here, the process of color adjustment is exemplarily described by taking the first reference value as the maximum value of the luminance in the reference color parameter and the second reference value as the maximum value of the luminance in the second color parameter as an example. The maximum brightness value in the second color parameter can be subtracted from the maximum brightness value in the reference color parameter to obtain a difference value, and the difference value is used as an adjustment range for adjusting the color parameter of each target pixel point in the plurality of target pixel points. And then adding the difference value to the color parameter of each target pixel point in the plurality of target pixel points to obtain a plurality of third color parameters.
In the process, the brightness of the second color parameter is integrally adjusted by using the difference value, so that the alignment of the reference color parameter and the maximum brightness of the second color parameter is realized.
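The maximum-brightness alignment just described can be sketched as follows: the difference between the reference region's maximum brightness and the target region's maximum brightness is added uniformly to every target pixel, aligning the two maxima. Scalar brightness values stand in for full color parameters.

```python
def align_max_brightness(reference_brightness, target_brightness):
    """Shift all target values so their maximum matches the reference maximum."""
    delta = max(reference_brightness) - max(target_brightness)
    return [value + delta for value in target_brightness]

reference = [50, 180, 220]
target = [40, 90, 160]
adjusted = align_max_brightness(reference, target)
print(adjusted)                          # [100, 150, 220]
print(max(adjusted) == max(reference))   # True
```

The same uniform-shift pattern applies when the minimum or average brightness is used as the reference value instead.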
In an example, the second color parameter may also be adjusted by using the minimum brightness value and the average brightness value as reference values, and a specific implementation process is similar to a process of adjusting the second color parameter by using the maximum brightness value, which is not described herein again.
In an example, the second color parameter may also be adjusted by simultaneously using a plurality of reference values, for example, the second color parameter may be adjusted, so that the maximum value, the minimum value, and the average value of the second color parameter are adapted to the reference color parameter, and the specific adjustment process is not described herein again.
In the embodiment of the disclosure, the second color parameter is adjusted according to the reference value, so that the difference between the target image region and the reference image region can be accurately obtained, and then the color parameter of each target pixel point in the target pixel points is adjusted by using the difference to obtain the third color parameter, thereby realizing the color mixing of the image at the pixel level of the target image region and improving the accuracy of color mixing of the target image region.
In a possible implementation manner, the adjusting, according to the reference color parameter, the second color parameter of the target image area of each of the at least one video frame to obtain the third color parameter of the target image area of each of the at least one video frame includes: and adjusting the color parameter of the corresponding target pixel point included in the second color parameter by using the color parameter of each reference pixel point in the plurality of reference pixel points included in the reference color parameter according to the corresponding relationship between the plurality of reference pixel points included in the reference image region and the plurality of target pixel points of the target image region, so as to obtain a third color parameter.
The correspondence may be between the position coordinates of pixel points in the reference image area and those in the target image area. The position coordinates may be either relative coordinates or absolute coordinates of a pixel point within the image: relative coordinates measure a pixel point's position as a percentage of its distance from the coordinate origin, while absolute coordinates measure it in fixed unit lengths.
Under the condition that the resolutions of the reference image area and the target image area are the same, the corresponding relation of the absolute coordinates of the pixel points can be utilized, namely, the pixel points with the same absolute coordinates of the reference image area and the target image area have the corresponding relation. Under the condition that the resolutions of the reference image area and the target image area are different, the corresponding relation of the relative coordinates of the pixel points can be utilized, namely, the pixel points with the same relative coordinates of the reference image area and the target image area have the corresponding relation.
Then, for the pixel points with the same position coordinates in the reference image region and the target image region, the color parameters may be adjusted to be consistent, that is, the second color parameter is adjusted by using the reference color parameter, so that the second color parameter is the same as the reference color parameter.
In addition, the corresponding relationship may also be other corresponding relationships, for example, the multiple reference pixels included in the reference image region may be sorted according to the size of the color parameter, the multiple target pixels of the target image region may be sorted according to the size of the color parameter, and the reference pixels and the target pixels in the reference image region and the target image region that have the same order are regarded as having the corresponding relationship.
The sequence can be an absolute sequence or a relative sequence, wherein the absolute sequence is the ranking ordered one by one according to the size of the color parameters; the relative sequence is the percentage ranking after being sorted one by one according to the size of the color parameters.
Then, for the pixels in the same order in the reference image region and the target image region, the color parameters may be adjusted to be consistent, that is, the reference color parameter is used to adjust the second color parameter, so that the second color parameter is the same as the reference color parameter.
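The rank-based adjustment just described amounts to a form of histogram matching: each target pixel takes the color parameter of the reference pixel sharing its order. A minimal sketch, assuming equal pixel counts (absolute order) and illustrative names:

```python
def match_by_rank(ref_values, tgt_values):
    """Assign to each target pixel the reference color parameter of the
    same rank. Sorting is stable, so ties keep their original order.
    For unequal counts, the relative (percentile) order would be used
    instead, as described above."""
    # indices of target pixels in ascending order of their color parameter
    order = sorted(range(len(tgt_values)), key=lambda i: tgt_values[i])
    ref_sorted = sorted(ref_values)
    out = [0] * len(tgt_values)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]  # same order => corresponding pixels
    return out
```

After this adjustment the target region's value distribution equals the reference region's, which is what makes the per-pixel correspondence by order a pixel-level color-matching rule.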
In the embodiment of the disclosure, the second color parameter is adjusted according to the reference value, so that the difference between the target image region and the reference image region can be obtained accurately; the color parameter of each target pixel point is then adjusted using this difference to obtain the third color parameter, thereby achieving pixel-level color matching of the target image region and improving its accuracy.
In the following, the color processing method provided by the present disclosure is illustrated with reference to specific application scenarios. For content not elaborated in this section, refer to the related description above; conversely, the content of this section may also serve to illustrate the foregoing.
Referring to fig. 2, in a possible application scenario provided by the present disclosure, in a case that a reference image region includes at least a portion of a first image input by a user, a color processing method provided by the present disclosure includes:
in step S201, a first image and a target video input by a user are received.
The first image may be any image input by the user, and the format type of the image input by the user is not particularly limited by the present disclosure.
Similarly, the target video may also be any video input by the user, and the format type of the video input by the user is not particularly limited in the present disclosure.
In step S202, a first user instruction is received, and a reference image region indicated by the user in the first user instruction is determined.
In this implementation, the user may freely designate the reference image region, which is a user-specified region in the first image; it may be the entire image region of the first image or a partial image region thereof. The specific determination process may refer to the related description above and is not repeated here.
After determining the reference image area, the user can trigger the first user instruction.
In step S203, in response to the first user instruction, color analysis is performed on the reference image region to obtain a reference color parameter.
In step S204, a color analysis is performed on a target image area of each video frame in at least one video frame of the target video, so as to obtain a second color parameter of the target image area of each video frame.
In step S205, the adjustment ranges of the color parameters of the target pixels are determined according to the difference between the first reference value in the reference color parameter and the second reference value in the second color parameter.
In step S206, a portrait in the target image area is detected. When a portrait is detected, the portrait area and the area outside the portrait area in the target image area are adjusted according to respective adjustment ranges, where the adjustment range for the portrait area is smaller than that for the area outside the portrait area in the target image area.
When adjusting the color parameters of the portrait area, the adjustment is kept as small as possible, or omitted entirely, in order to maintain the natural appearance of the portrait's skin tone.
For the specific adjustment process, reference may be made to the related description above, and details are not repeated here.
In step S207, the plurality of target videos after color matching are merged to obtain a merged video.
Based on the reference image area designated by the user, the plurality of video segments can be adjusted so that their color tones remain consistent. By merging the color-matched segments, transitions between different video segments, and between different frames, of the merged video become more natural, which reduces the influence of inconsistent ambient color light, color temperature, or inter-camera parameters on the tone of the segments at shooting time and improves the user experience.
In the embodiment of the disclosure, the color parameters used for color matching of the target video are obtained by analyzing the first image input by the user, meeting diverse color matching requirements. Moreover, the user can directly input an image as the reference for color matching the target video, without repeatedly searching and previewing among many filters; user operation is therefore simpler and color matching more efficient.
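The adjustment flow of steps S204 through S206 can be sketched on a single-channel frame as follows. This is a minimal illustration only: the use of mean values as the reference values and the 0.2 damping factor for the portrait area are assumptions, not fixed by the disclosure:

```python
def tone_frame(frame, ref_mean, portrait_mask=None, portrait_scale=0.2):
    """Compute the frame's mean value (the second reference value), take
    its difference from the reference mean (the first reference value,
    step S205), and shift every pixel by that difference. Pixels inside
    the detected portrait mask receive a damped shift so skin tones stay
    natural (step S206)."""
    frame_mean = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    delta = ref_mean - frame_mean  # difference of the two reference values
    out = []
    for y, row in enumerate(frame):
        out.append([v + delta * (portrait_scale
                                 if portrait_mask and portrait_mask[y][x] else 1.0)
                    for x, v in enumerate(row)])
    return out
```

Applied to every video frame of every segment with the same `ref_mean`, this is what keeps the tones of the merged segments consistent in step S207.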
Referring to fig. 3, in a possible application scenario provided by the present disclosure, in a case that a reference image region includes a target background image, and a target image region includes a foreground region of at least one second video frame of a target video, a color processing method provided by the present disclosure includes:
in step S301, a target video and a target background image input by a user are received.
In step S302, foreground and background segmentation is performed on each second video frame in the target video, and a background region and a foreground region in each second video frame are determined.
In step S303, color analysis is performed on the target background image to obtain a reference color parameter.
In step S304, color analysis is performed on the foreground region of each second video frame in the target video to obtain a second color parameter.
In step S305, color parameters of a plurality of target pixels in each second video frame are adjusted according to the reference color parameter.
In step S306, a foreground region of each second video frame in the color-mixed target video is used as a foreground, and a target background image is used as a background, and video synthesis processing is performed to obtain a synthesized video.
In the embodiment of the disclosure, in the process of changing the background of the video, the target background image is used for performing color matching processing on the foreground area of the target video, so that the overall visual effect of the synthesized video is more coordinated, and the user experience is better.
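The background-replacement flow of steps S302 through S306 can be sketched on toy one-dimensional frames of scalar pixels. All names, the caller-supplied `segment` function, and the mean-shift color-matching rule are illustrative assumptions; `ref_mean` stands in for the result of the color analysis of the target background image in step S303:

```python
def composite_video(frames, background, segment, ref_mean):
    """For each second video frame: obtain a boolean foreground mask
    (S302), shift the foreground's tone toward the background image's
    reference value (S304-S305), then composite the adjusted foreground
    over the target background image (S306)."""
    out_frames = []
    for frame in frames:
        mask = segment(frame)                 # True where foreground
        fg = [v for v, m in zip(frame, mask) if m]
        delta = ref_mean - sum(fg) / len(fg)  # reference minus second value
        out_frames.append([v + delta if m else b
                           for v, m, b in zip(frame, mask, background)])
    return out_frames
```

Matching the foreground to the new background's tone before compositing is what makes the synthesized video look coordinated rather than pasted together.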
In one possible implementation, the color processing method may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from principle and logic; for reasons of space, the details are not repeated in the present disclosure. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure also provides a color processing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any color processing method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the methods section, which is not repeated here for brevity.
Fig. 4 shows a block diagram of a color processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the apparatus 40 includes:
a first instruction receiving unit 401, configured to receive a first user instruction, where the first user instruction is used to instruct to perform color matching processing on a target video;
a color matching unit 402, configured to perform color matching on a target image area of each video frame in at least one video frame included in the target video by using a reference image area in response to the first user instruction, so as to obtain a color matching result of the target video.
In one possible implementation, the apparatus further includes:
a second instruction receiving unit, configured to receive a second user instruction, where the second user instruction is used to indicate the reference image area.
In one possible implementation, the reference image region includes at least a portion of a first image input by a user;
the color matching unit 402 is configured to perform color matching on the entire area of each first video frame in at least one first video frame included in the target video by using at least one portion of the first image, so as to obtain a color matching result of the at least one first video frame.
In one possible implementation, the reference image region includes a background region of each of at least one second video frame included in the target video;
the color matching unit 402 is configured to perform color matching processing on a foreground region in each second video frame by using a background region of each second video frame in the at least one second video frame, so as to obtain a color matching processing result of the at least one second video frame.
In one possible implementation, the reference image region includes at least a portion of a target background image; the color matching unit 402 is configured to perform color matching on a foreground region of each third video frame in at least one third video frame in the target video by using at least a part of the target background image, so as to obtain a color matching result of the foreground region of the at least one third video frame.
In a possible implementation manner, the first instruction receiving unit is configured to receive a video background replacement instruction, where the video background replacement instruction is used to instruct to perform background replacement on the target video; the device further comprises: and the synthesizing unit is used for synthesizing the color matching processing result of the foreground area of the at least one third video frame and the target background image to obtain a background replacing result of the target video.
In one possible implementation, the apparatus further includes:
a detection unit for detecting whether a target object exists in a target image area of the at least one video frame;
the color matching unit 402 is configured to, when the target object is detected in a target image area of a fourth video frame, perform color matching on a second area, except for a first area where the target object is located, in the target image area of the fourth video frame, to obtain a color matching result of the fourth video frame.
In a possible implementation manner, the color matching unit 402 is configured to perform color matching processing on a first area where the target object is located in the fourth video frame, where a color matching range of the color matching processing on the first area is smaller than a color matching range of the color matching processing on the second area.
In one possible implementation, the target object includes a person.
In a possible implementation manner, the color matching unit 402 is configured to perform color analysis on the reference image region to obtain a reference color parameter; and performing color matching processing on a target image area of each video frame in at least one video frame in the target video by using the reference color parameter to obtain a color matching processing result of the target video.
In one possible implementation, the apparatus further includes:
the analysis unit is used for carrying out color analysis on a target image area of each video frame in the at least one video frame to obtain a second color parameter of the target image area of each video frame;
the color matching unit 402 is configured to adjust a second color parameter of a target image area of each video frame in the at least one video frame according to the reference color parameter, so as to obtain a third color parameter of the target image area of each video frame.
In a possible implementation manner, the reference color parameter includes a color parameter of each reference pixel point of a plurality of reference pixel points included in the reference image region, and the second color parameter includes a color parameter of each target pixel point of a plurality of target pixel points included in the target image region;
the color matching unit 402 is configured to determine, according to the color parameters of the plurality of reference pixels included in the reference color parameter and the color parameters of the plurality of target pixels included in the second color parameter, an adjustment range for adjusting the color parameter of each target pixel of the plurality of target pixels; and adjusting the color parameter of each target pixel point according to the adjustment amplitude of each target pixel point in the plurality of target pixel points to obtain the third color parameter.
In a possible implementation manner, the color matching unit 402 is configured to adjust the color parameter of each target pixel point in the plurality of target pixel points according to a difference between a first reference value of the color parameter of the plurality of reference pixel points included in the reference color parameter and a second reference value of the color parameter of the plurality of target pixel points included in the second color parameter, so as to obtain a third color parameter; or the color parameter adjusting unit is configured to adjust the color parameter of the corresponding target pixel included in the second color parameter by using the color parameter of each reference pixel in the plurality of reference pixels included in the reference color parameter according to a corresponding relationship between the plurality of reference pixels included in the reference image region and the plurality of target pixels of the target image region, so as to obtain a third color parameter.
In a possible implementation manner, the reference color parameter includes a color parameter of each of a plurality of pixels included in the reference image region.
In one possible implementation, the reference color parameter includes at least one of:
hue, saturation, brightness, hue, color temperature, contrast, white balance, RGB values.
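As an illustration of the first three listed parameters, a hue/saturation/brightness triple can be derived from an RGB value with the standard HSV decomposition. The function below is a sketch and not part of the disclosure; the other listed parameters (color temperature, contrast, white balance) require frame-level statistics rather than a single pixel:

```python
import colorsys

def color_parameters(rgb):
    """Derive hue (degrees), saturation, and brightness from an 8-bit
    RGB triple using the standard library's HSV conversion."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return {"hue": h * 360.0, "saturation": s, "brightness": v}
```

For example, pure red (255, 0, 0) yields hue 0, saturation 1, and brightness 1.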
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code, which when run on a device, a processor in the device executes instructions for implementing the video processing method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the video processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a similar terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized by utilizing state information of the computer-readable program instructions and can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (18)

1. A video processing method, comprising:
receiving a first user instruction, wherein the first user instruction is used for instructing color matching processing on a target video;
and responding to the first user instruction, and performing color matching processing on a target image area of each video frame in at least one video frame included in the target video by using a reference image area to obtain a color matching processing result of the target video.
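By way of non-limiting illustration only, one possible reading of claim 1 — grading every frame's target image area against a single reference image area — could be sketched as follows; the mean-shift rule and all function names here are assumptions for illustration, not part of the claims:

```python
import numpy as np

def grade_frame(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift a frame's colors so its per-channel mean matches the reference mean.

    `frame` and `reference` are H x W x 3 float arrays in [0, 1]; this
    statistics-based rule is only one possible color matching scheme.
    """
    shift = reference.reshape(-1, 3).mean(axis=0) - frame.reshape(-1, 3).mean(axis=0)
    return np.clip(frame + shift, 0.0, 1.0)

def grade_video(frames, reference):
    # Apply the same reference-driven grade to each frame's target
    # image area (here taken to be the whole frame, as in claim 3).
    return [grade_frame(f, reference) for f in frames]
```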
2. The method of claim 1, further comprising:
receiving a second user instruction, the second user instruction being for indicating the reference image region.
3. The method of any of claims 1-2, wherein the reference image area comprises at least a portion of a first image input by a user;
the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes:
and performing color matching processing on the whole area of each first video frame in at least one first video frame included in the target video by using at least one part of the first image to obtain a color matching processing result of the at least one first video frame.
4. The method according to any one of claims 1-3, wherein the reference image area comprises a background area of each of at least one second video frame included in the target video;
the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes:
and performing color matching processing on the foreground area in each second video frame by using the background area of each second video frame in the at least one second video frame to obtain a color matching processing result of the at least one second video frame.
5. The method of any of claims 1-3, wherein the reference image area comprises at least a portion of a target background image;
the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes:
and performing color matching processing on the foreground area of each third video frame in at least one third video frame in the target video by using at least one part of the target background image to obtain a color matching processing result of the foreground area of the at least one third video frame.
6. The method of claim 5, wherein the receiving of the first user instruction for instructing color matching processing on the target video comprises:
receiving a video background replacement instruction, wherein the video background replacement instruction is used for indicating background replacement of the target video;
the method further comprises the following steps:
and synthesizing the color matching processing result of the foreground area of the at least one third video frame and the target background image to obtain a background replacement result of the target video.
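The synthesis step of claim 6 is, in effect, alpha compositing of the graded foreground over the target background image. A minimal sketch (the matte semantics and names are assumptions, not drawn from the claims):

```python
import numpy as np

def replace_background(foreground: np.ndarray,
                       alpha: np.ndarray,
                       background: np.ndarray) -> np.ndarray:
    """Composite a (color-graded) foreground over the new background.

    `alpha` is an H x W matte in [0, 1], where 1 marks foreground pixels.
    The color matching of the foreground toward the background's palette
    (claim 5) would be applied before this step.
    """
    a = alpha[..., None]  # broadcast the matte over the color channels
    return np.clip(a * foreground + (1.0 - a) * background, 0.0, 1.0)
```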
7. The method according to any one of claims 1-6, further comprising:
detecting whether a target object exists in a target image area of the at least one video frame;
the performing, by using the reference image area, color matching on the target image area of each video frame in at least one video frame included in the target video to obtain a color matching result of the target video includes:
and under the condition that the target object is detected in the target image area of the fourth video frame, performing color matching processing on a second area, except for the first area where the target object is located, in the target image area of the fourth video frame to obtain a color matching processing result of the fourth video frame.
8. The method according to claim 7, wherein the performing, by using the reference image area, a color matching process on the target image area of each of at least one video frame included in the target video to obtain a color matching process result of the target video further comprises:
and performing color matching processing on a first area where the target object is located in the fourth video frame, wherein the color matching amplitude of the color matching processing on the first area is smaller than the color matching amplitude of the color matching processing on the second area.
9. The method of any of claims 7-8, wherein the target object comprises a human figure.
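Claims 7–9 can be read as masked grading: the region outside the detected person (the "second area") is graded at full strength, while the person's region (the "first area") receives a smaller amplitude. A hedged sketch, where the mask source, the additive shift, and the scale value are all assumptions:

```python
import numpy as np

def grade_with_protected_region(frame: np.ndarray,
                                person_mask: np.ndarray,
                                shift: np.ndarray,
                                protected_scale: float = 0.3) -> np.ndarray:
    """Grade the second area at full strength and the first area at a
    reduced amplitude, as in claims 7-8.

    `person_mask` is an H x W boolean array from some person detector;
    `protected_scale` < 1 realizes the smaller grading amplitude of
    claim 8 (the value 0.3 is an arbitrary assumption).
    """
    strength = np.where(person_mask[..., None], protected_scale, 1.0)
    return np.clip(frame + strength * shift, 0.0, 1.0)
```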
10. The method according to any one of claims 1 to 9, wherein the performing, by using the reference image area, color matching on the target image area of each of at least one video frame included in the target video to obtain a color matching result of the target video includes:
performing color analysis on the reference image area to obtain a reference color parameter;
and performing color matching processing on a target image area of each video frame in at least one video frame in the target video by using the reference color parameter to obtain a color matching processing result of the target video.
11. The method of claim 10, further comprising:
performing color analysis on a target image area of each video frame in the at least one video frame to obtain a second color parameter of the target image area of each video frame;
performing color matching processing on a target image area of each video frame in at least one video frame in the target video by using the reference color parameter to obtain a color matching processing result of the target video, including:
and adjusting the second color parameter of the target image area of each video frame in the at least one video frame according to the reference color parameter to obtain a third color parameter of the target image area of each video frame.
12. The method according to claim 11, wherein the reference color parameter comprises a color parameter of each of a plurality of reference pixel points included in the reference image region, and the second color parameter comprises a color parameter of each of a plurality of target pixel points included in the target image region;
the adjusting, according to the reference color parameter, the second color parameter of the target image area of each video frame in the at least one video frame to obtain a third color parameter of the target image area of each video frame includes:
determining an adjustment range for adjusting the color parameter of each target pixel point in the plurality of target pixel points according to the color parameters of the plurality of reference pixel points included in the reference color parameter and the color parameters of the plurality of target pixel points included in the second color parameter;
and adjusting the color parameter of each target pixel point according to the adjustment amplitude of each target pixel point in the plurality of target pixel points to obtain the third color parameter.
13. The method according to claim 11 or 12, wherein said adjusting the second color parameter of the target image area of each of the at least one video frame according to the reference color parameter to obtain the third color parameter of the target image area of each of the at least one video frame comprises:
adjusting the color parameter of each target pixel point in the plurality of target pixel points according to the difference between a first reference value of the color parameters of the plurality of reference pixel points included in the reference color parameter and a second reference value of the color parameters of the plurality of target pixel points included in the second color parameter, so as to obtain a third color parameter; or
and adjusting the color parameter of the corresponding target pixel point included in the second color parameter by using the color parameter of each reference pixel point in the plurality of reference pixel points included in the reference color parameter according to the corresponding relationship between the plurality of reference pixel points included in the reference image region and the plurality of target pixel points of the target image region, so as to obtain a third color parameter.
14. The method according to any of claims 10-13, wherein the reference color parameter comprises a color parameter for each of a plurality of pixels comprised by the reference image region.
15. The method according to any of claims 10-14, wherein the reference color parameter comprises at least one of:
hue, saturation, brightness, hue, color temperature, contrast, white balance, RGB values.
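One concrete (and well-known) instance of the scheme in claims 10–13 — analyze the reference region for color parameters, then adjust the target region toward the reference values — is Reinhard-style statistics transfer. The sketch below uses per-channel mean and standard deviation as the "color parameters"; this particular choice, and all names, are assumptions rather than the claimed method:

```python
import numpy as np

def analyze(region: np.ndarray):
    """Color analysis (claims 10-11): per-channel mean and standard
    deviation of a region, one simple choice of color parameters."""
    flat = region.reshape(-1, 3)
    return flat.mean(axis=0), flat.std(axis=0) + 1e-6  # epsilon avoids /0

def transfer_color(target: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Adjust the target's color parameters toward the reference values
    (claim 13), here via Reinhard-style statistics matching."""
    t_mean, t_std = analyze(target)
    r_mean, r_std = analyze(reference)
    out = (target - t_mean) / t_std * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```

After the transfer (absent clipping), the target region's per-channel mean equals the reference region's mean, which is one way the claimed "adjustment according to the reference color parameter" can be realized.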
16. A video processing apparatus, comprising:
a first instruction receiving unit, configured to receive a first user instruction, wherein the first user instruction is used for instructing color matching processing on a target video;
and the color matching unit is used for responding to the first user instruction and performing color matching processing on a target image area of each video frame in at least one video frame included in the target video by using a reference image area to obtain a color matching processing result of the target video.
17. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 15.
18. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 15.
CN202010832512.XA 2020-08-18 2020-08-18 Video processing method and device, electronic equipment and storage medium Active CN111935418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010832512.XA CN111935418B (en) 2020-08-18 2020-08-18 Video processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111935418A true CN111935418A (en) 2020-11-13
CN111935418B CN111935418B (en) 2022-12-09

Family

ID=73305444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010832512.XA Active CN111935418B (en) 2020-08-18 2020-08-18 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111935418B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597840A (en) * 2020-12-14 2021-04-02 深圳集智数字科技有限公司 Image identification method, device and equipment
CN113327193A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN114758027A (en) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1194077A (en) * 1995-08-22 1998-09-23 汤姆森消费电子有限公司 Parallel mode on-screen display system
US6134345A (en) * 1998-08-28 2000-10-17 Ultimatte Corporation Comprehensive method for removing from an image the background surrounding a selected subject
CN101326514A (en) * 2005-12-09 2008-12-17 微软公司 Background removal in a live video
CN106603859A (en) * 2016-12-30 2017-04-26 努比亚技术有限公司 Photo filter processing method, device and terminal
CN106971165A (en) * 2017-03-29 2017-07-21 武汉斗鱼网络科技有限公司 The implementation method and device of a kind of filter
US20170244908A1 (en) * 2016-02-22 2017-08-24 GenMe Inc. Video background replacement system
CN109739414A (en) * 2018-12-29 2019-05-10 努比亚技术有限公司 A kind of image processing method, mobile terminal, computer readable storage medium
US20190313071A1 (en) * 2018-04-04 2019-10-10 Motorola Mobility Llc Dynamic chroma key for video background replacement
CN111292281A (en) * 2020-01-20 2020-06-16 安徽文香信息技术有限公司 Image processing method, device, equipment and storage medium
CN111339420A (en) * 2020-02-28 2020-06-26 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111935418B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN112767285B (en) Image processing method and device, electronic device and storage medium
CN108986199B (en) Virtual model processing method and device, electronic equipment and storage medium
CN111935418B (en) Video processing method and device, electronic equipment and storage medium
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
US20210058595A1 (en) Method, Device, and Storage Medium for Converting Image
CN111709890B (en) Training method and device for image enhancement model and storage medium
CN110944230B (en) Video special effect adding method and device, electronic equipment and storage medium
CN112801916A (en) Image processing method and device, electronic equipment and storage medium
EP3208745B1 (en) Method and apparatus for identifying picture type
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN112219224B (en) Image processing method and device, electronic equipment and storage medium
CN110619610B (en) Image processing method and device
CN106792255B (en) Video playing window frame body display method and device
CN111861942A (en) Noise reduction method and device, electronic equipment and storage medium
CN111953903A (en) Shooting method, shooting device, electronic equipment and storage medium
CN105677352B (en) Method and device for setting application icon color
CN109167921B (en) Shooting method, shooting device, shooting terminal and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN113450431B (en) Virtual hair dyeing method, device, electronic equipment and storage medium
CN113570581A (en) Image processing method and device, electronic equipment and storage medium
WO2023045961A1 (en) Virtual object generation method and apparatus, and electronic device and storage medium
CN111275641A (en) Image processing method and device, electronic equipment and storage medium
CN114445298A (en) Image processing method and device, electronic equipment and storage medium
CN111325148A (en) Method, device and equipment for processing remote sensing image and storage medium
CN110648373B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant