WO2021093718A1 - Video processing method, video repair method, apparatus and device - Google Patents

Video processing method, video repair method, apparatus and device

Info

Publication number
WO2021093718A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
image
original
area
target
Prior art date
Application number
PCT/CN2020/127717
Other languages
English (en)
French (fr)
Inventor
熊宝玉
汪贤
鲁方波
成超
陈熊
张海斌
樊鸿飞
李果
张玉梅
蔡媛
张文杰
豆修鑫
许道远
Original Assignee
北京金山云网络技术有限公司
北京金山云科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201911126554.5A (CN112822474A)
Priority claimed from CN201911118706.7A (CN112819699A)
Priority claimed from CN201911126297.5A (CN112819702B)
Application filed by 北京金山云网络技术有限公司 and 北京金山云科技有限公司
Publication of WO2021093718A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • This application relates to the technical field of video processing, and in particular to a video processing method, video repair method, device and electronic equipment.
  • Old video repair (old film restoration) mainly aims to restore early TV series or movies that are unclear.
  • Old movies are mostly limited by the acquisition equipment of their time and suffer from problems such as low definition; they therefore need to be repaired to make them clear and give viewers a better visual experience.
  • the current restoration of old films mainly adopts manual restoration methods, which are time-consuming and labor-intensive, and the processing results are unstable.
  • the purpose of the embodiments of the present application is to provide a video processing method, a video repair method, a device, and an electronic device, which can improve the processing effect and processing efficiency of the video repair.
  • In a first aspect, an embodiment of the present application provides a video processing method, including: obtaining an original video; adjusting the quality influencing parameters of the original video to obtain a target video corresponding to the original video, where the video quality of the target video is lower than that of the original video and the quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter; and constructing a video training set based on the original video and the target video, where the video training set saves the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos.
  • In a second aspect, an embodiment of the present application provides a video repair method, including: obtaining a video to be repaired; and inputting the video to be repaired into a video repair model to obtain a repaired video output by the video repair model. The video repair model is obtained by training an initial video repair model on a video training set. The video training set includes an original video and a target video corresponding to the original video; the target video is obtained by adjusting the quality influencing parameters of the original video, and the video quality of the target video is lower than that of the original video. The quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter.
  • In a third aspect, an embodiment of the present application provides a video processing device, including: an original video acquisition module configured to acquire the original video; a parameter adjustment module configured to adjust the quality influencing parameters of the original video to obtain a target video corresponding to the original video, where the video quality of the target video is lower than that of the original video and the quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter; and a training set construction module configured to construct a video training set based on the original video and the target video, where the video training set saves the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos.
  • In a fourth aspect, an embodiment of the present application also provides a video repair device, including: a to-be-repaired video acquisition module configured to acquire the video to be repaired; and a repair module configured to input the video to be repaired into the video repair model to obtain the repair result output by the video repair model. The video repair model is obtained by training an initial video repair model on a video training set; the video training set includes an original video and a target video corresponding to the original video; the target video is obtained by adjusting the quality influencing parameters of the original video, and its video quality is lower than that of the original video; and the quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter.
  • The video processing method and device provided in the first and third aspects of the embodiments of the present application first obtain the original video, then adjust its quality influencing parameters to obtain a target video whose video quality is lower than that of the original video, and finally construct a video training set based on the original video and the target video. The quality influencing parameters include at least two of the following: noise parameters, brightness parameters, and definition parameters. The video training set saves the correspondence between the target video and the original video and is used to train the video repair model, so that the trained video repair model can be used to repair videos. By adjusting at least two quality influencing parameters, a target video with lower video quality than the original video is obtained, and this target video can effectively and directly imitate an old film. Since the video training set contains both the original video and the target video, it can be used directly to train the video repair model, which effectively solves the difficulty of obtaining a training set for a video repair model that can directly repair old movies as a whole.
  • the video to be repaired is first obtained, and then the video to be repaired is input to the video repair model to obtain the repair result output by the video repair model.
  • the video repair model is obtained by training the initial video repair model according to the video training set.
  • the video training set includes an original video and a target video corresponding to the original video, and the target video is obtained by adjusting the quality influencing parameters of the original video, and the video quality of the target video is lower than the video quality of the original video.
  • the video quality influencing parameters include at least two of the following: noise parameters, brightness parameters, and sharpness parameters.
  • In the related art, each processing stage has a certain impact on the repair result, so errors accumulate in the repaired video. The embodiment of the present application can repair the video as a whole through only one neural network, which effectively alleviates the problem of accumulated errors in the repair results of the related art.
  • In a fifth aspect, an embodiment of the present application also provides a video processing method, including: obtaining an original image frame in a first video; determining a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an under-exposed state, and the second area in the original image frame is not in an under-exposed state; performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different areas in an image to a normal exposure state, and both the first area and the second area in the target image frame are in a normal exposure state; and generating a second video according to the target image frame.
  • In a sixth aspect, an embodiment of the present application provides a video processing device, including:
  • An image extraction module configured to obtain the original image frame in the first video
  • An image analysis module configured to determine a first area in the original image frame, wherein the original image frame includes a first area and a second area, and the first area in the original image frame is in an underexposed state, The second area in the original image frame is not in an under-exposed state;
  • the enhancement processing module is configured to perform enhancement processing on the original image frame to obtain a target image frame, wherein the enhancement processing is used to adjust the exposure state of different regions in the image to a normal exposure state, and the target image frame Both the first area and the second area are in a normal exposure state;
  • the video generation module is configured to generate a second video according to the target image frame.
  • In a seventh aspect, an embodiment of the present application also provides an image enhancement method, including: performing contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state; acquiring illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • In an eighth aspect, an embodiment of the present application provides an image enhancement device, including:
  • an enhancement module configured to perform contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state;
  • a first acquisition module configured to acquire the illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel;
  • a fusion module configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • In the image enhancement method and device of the seventh and eighth aspects, the original image to be processed is subjected to contrast enhancement processing to obtain an initial enhanced image. The original image includes a first area and a second area: the first area in the original image is under-exposed, while the second area is not. After the contrast enhancement, the first area in the initial enhanced image is no longer under-exposed, but the second area in the initial enhanced image is in an over-exposed state.
  • An embodiment of the present application also provides an electronic device, including a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor, when executing the computer program, performs the method described in the first aspect, the method described in the second aspect, the method described in the fifth aspect, or the method described in the seventh aspect.
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program; when the computer program is run by a processor, it performs the method described in the first aspect, the method described in the second aspect, the method described in the fifth aspect, or the method described in the seventh aspect.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of this application
  • FIG. 2 is a flowchart of a video repair method provided by an embodiment of the application
  • FIG. 3 is a schematic structural diagram of a video processing device provided by an embodiment of this application.
  • FIG. 4 is a schematic structural diagram of a video repair device provided by an embodiment of the application.
  • FIG. 5 is a flowchart of a video processing method provided by an embodiment of the application.
  • FIG. 6 is a flowchart of a specific example of a video processing method provided by an embodiment of the application.
  • FIG. 7 is a schematic flowchart of an image enhancement method provided by an embodiment of this application.
  • FIG. 8 is a schematic flowchart of another image enhancement method provided by an embodiment of the application.
  • FIG. 9 is a schematic structural diagram of an image enhancement device provided by an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of another image enhancement device provided by an embodiment of this application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • In the related art, the main approach is to use multiple neural network models to repair old movies in stages; that is, the repair process is split into multiple processing stages, each of which needs to be implemented by its own corresponding neural network model. This implementation is complicated and cumbersome. Moreover, because each processing stage has a certain impact on the repair result, errors accumulate in the repaired video. The inventor found that the related art does not directly adopt one overall neural network model for the whole process because a training set suitable for training such an overall model is currently not available.
  • In view of this, the embodiments of the present application provide a video processing method, a video repair method, a device, and electronic equipment. With them, a video training set for training a video repair model can be obtained, and this training set can be used to train a video repair model that directly repairs an old film as a whole. The embodiments can also effectively alleviate the error-accumulation problem of the related art and help improve the repair effect for old movies.
  • Step S102 Obtain the original video.
  • high-definition video may be selected as the original video, such as a high-definition video captured by a user through a device with a shooting function (for example, a smart phone or a camera), or a high-definition video downloaded by the user from the Internet.
  • the high-definition video may be a video whose definition parameter is higher than the preset first threshold value, the noise parameter is lower than the preset second threshold value, and the brightness parameter is higher than the preset third threshold value.
  • a video upload channel can be provided for users so that users can choose and upload high-definition videos by themselves, and use the high-definition videos uploaded by users as original videos.
  • Step S104 Adjust the quality influencing parameters of the original video to obtain the target video corresponding to the original video.
  • the video quality of the target video is lower than the video quality of the original video
  • the quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter.
  • Because old films typically exhibit varying degrees of noise, dark areas, and blurring, at least two of the above-mentioned quality influencing parameters are adjusted to imitate them.
  • noise can be randomly added to the original video to make the original video have different degrees of noise
  • the brightness parameters of the original video can be adjusted randomly to increase the dark area in the original video
  • the definition of the original video can also be reduced to make the original video blurrier.
  • the target video with lower video quality is obtained by the above method, and the “old film” can be imitated by the obtained target video.
  • Step S106 Construct a video training set based on the original video and the target video.
  • the video training set saves the corresponding relationship between the target video and the original video
  • the video training set is used to train the video repair model
  • the trained video repair model is used to repair the video. Since the training process of a neural network is to learn the mapping relationship between input and output, the embodiment of the present application uses the original video and its corresponding target video to construct a video training set and trains the neural network on it to obtain the video repair model.
  • the target video can be used to simulate the "old film” and use it as the input of the neural network
  • the original video can be used to simulate the "repair result" of the "old film” and use it as the output of the neural network.
  • the neural network is made to learn the mapping relationship between the "old film” and the "repair result” to obtain a video repair model for repairing the old film.
  • By adjusting at least two quality influencing parameters, a target video with lower video quality than the original video can be obtained, and the target video can effectively and directly imitate an old film; finally, a video training set can be constructed based on the original video and the target video. Since the video training set contains both the original video and the target video, it can be used directly to train the video repair model, effectively solving the difficulty of obtaining a training set for a video repair model that can directly repair old films as a whole.
  • To facilitate adjusting the quality influencing parameters, the embodiment of the present application cuts the original video into frames to obtain the multiple image frames in the original video; by adjusting the parameters frame by frame, the goal of obtaining a target video whose video quality is lower than that of the original video is achieved.
  • the embodiment of the present application provides a specific implementation manner for adjusting the quality influencing parameter of the original video to obtain the target video corresponding to the original video. Refer to the following steps 1 to 4:
  • Step 1 Add random noise to each image frame in the original video when the quality influencing parameter includes a noise parameter.
  • the following operations can be performed on each image frame in the original video, see the following steps 1.1 to 1.2:
  • Step 1.1 Determine the first random noise level within a pre-configured random noise interval. Because old movies exhibit different degrees of noise, in one embodiment random numbers are used to set a random noise interval, and random noises with different first random noise levels (that is, compression noise) are randomly generated within this interval.
  • the compression noise may include JPEG (Joint Photographic Experts Group) compression noise, salt and pepper noise, Poisson noise, and Gaussian white noise.
  • Step 1.2 Add the compressed noise of the first random noise level to the image frame in the original video.
  • The same first random noise level can be added to every image frame in the original video, or a different first random noise level can be added to each image frame; the level to add can be selected based on the actual situation, so that the noise parameters of the original video after adding compression noise are closer to those of an old film. A minimal sketch of this step follows.
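The sketch below, in Python with NumPy, illustrates Steps 1.1 and 1.2 under stated assumptions: it draws a random noise level from a pre-configured interval and adds Gaussian white noise plus salt-and-pepper noise to one frame. The function name and the interval bounds are illustrative, not taken from the patent.

```python
import numpy as np

def add_random_noise(frame: np.ndarray,
                     sigma_range=(2.0, 25.0),
                     sp_ratio_range=(0.0, 0.01)) -> np.ndarray:
    """Degrade one uint8 frame of shape (H, W, C) with random noise."""
    # Step 1.1: determine the first random noise level within the interval.
    sigma = np.random.uniform(*sigma_range)
    # Step 1.2: add Gaussian white noise of that level to the frame.
    noisy = frame.astype(np.float32) + np.random.normal(0.0, sigma, frame.shape)
    # Also sprinkle salt-and-pepper noise at a random ratio.
    ratio = np.random.uniform(*sp_ratio_range)
    mask = np.random.rand(*frame.shape[:2])
    noisy[mask < ratio / 2] = 0          # pepper
    noisy[mask > 1 - ratio / 2] = 255    # salt
    return np.clip(noisy, 0, 255).astype(np.uint8)
```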
  • Step 2 Perform random brightness adjustment on each image frame in the original video when the quality influencing parameter includes a brightness parameter.
  • The embodiment of this application provides an implementation of random brightness adjustment for each image frame in the original video: if the color format of the image frames in the original video is other than the YUV (luminance, chrominance, chroma) format, the color format of each image frame can be converted to YUV, and the brightness parameter of each converted frame can then be reduced by a random value on the Y channel. Because early old films were shot with poor technology and the shooting was affected by the environment, the resulting footage has dark areas in many places; enhancing dark areas is therefore an important task in old-film restoration.
  • Step 2 simulates the effect of dark areas in the old film.
  • the color format of the image frame in the original video is a color format other than the YUV format, such as RGB (Red, Green, Blue) format
  • directly adjusting the brightness value of the image frame may cause the problem of color distortion of the image frame. Therefore, in the embodiment of the present application, the image format of the image frame is converted to the YUV format, and the brightness parameter of the image frame of the YUV format is adjusted by a gamma correction method.
  • the Y channel in the YUV format represents Luminance
  • the U channel represents Chrominance
  • the V channel represents saturation (Chroma).
  • the Y channel is adjusted by the gamma correction method, and the U channel and V channel are not adjusted.
  • the adjustment can avoid the color distortion of the image frame to a certain extent.
  • The parameters of the gamma correction method can be set randomly, changing the Y channel to a random degree, so that the brightness parameters of different image frames change to varying degrees.
  • the RGB format image frame can be converted to the YUV format image frame.
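As a minimal sketch of Step 2 under assumed parameter ranges, the following Python/OpenCV function converts one BGR frame to YUV and applies gamma correction with a random gamma to the Y channel only, leaving U and V untouched; the gamma interval is an assumption.

```python
import cv2
import numpy as np

def darken_frame(frame_bgr: np.ndarray, gamma_range=(1.2, 2.5)) -> np.ndarray:
    """Randomly darken one uint8 BGR frame via gamma correction on Y."""
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
    gamma = np.random.uniform(*gamma_range)  # gamma > 1 lowers brightness
    y = yuv[:, :, 0].astype(np.float32) / 255.0
    # Only the Y (luminance) channel is adjusted; U and V stay as-is,
    # which limits color distortion.
    yuv[:, :, 0] = np.clip((y ** gamma) * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```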
  • Step 3 In the case where the quality influencing parameters include a definition parameter, the first resolution of each image frame in the original video is adjusted to the target resolution and then adjusted back to the first resolution, where the target resolution is smaller than the first resolution.
  • The embodiment of the present application performs the following operation on each image frame in the original video: first, randomly select a resolution from a resolution set and determine the randomly selected resolution as the target resolution; then adjust the first resolution of the image frame to the target resolution, and afterwards adjust it back to the first resolution.
  • the resolution set includes 480p, 720p, 1080p, 2k, 4k, 8k and other resolutions
  • For example, if the first resolution of each image frame in the original video is 8K and the selected target resolution is 480p, the image frames can be down-sampled at random scales to obtain 480p image frames, and the 480p image frames can then be up-sampled to restore their resolution to 8K.
  • the image frame after adjusting the definition parameters will become a blurred image frame.
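A minimal sketch of Step 3 in Python/OpenCV, assuming a resolution set expressed as target heights: the frame is down-sampled to a randomly chosen smaller resolution and then up-sampled back to its first resolution, which discards high-frequency detail and leaves the frame blurrier.

```python
import cv2
import numpy as np

RESOLUTION_SET = [480, 720, 1080, 1440, 2160]  # assumed target heights

def blur_by_resampling(frame: np.ndarray) -> np.ndarray:
    """Round-trip a frame through a lower resolution to reduce definition."""
    h, w = frame.shape[:2]
    candidates = [r for r in RESOLUTION_SET if r < h]  # only true down-sampling
    target_h = int(np.random.choice(candidates))
    target_w = int(w * target_h / h)  # keep the aspect ratio
    small = cv2.resize(frame, (target_w, target_h), interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
```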
  • Step 4 Use the original video after adjusting the quality influencing parameters as the target video corresponding to the original video. It should be emphasized that the embodiment of the present application does not limit the sequence of adjusting the noise parameter, brightness parameter, and definition parameter of the original video, and the sequence of adjusting the parameters can be set based on actual conditions.
  • This application trains the video repair model with the video training set obtained from steps S102 to S106 above, so that the video repair model can repair old movies as a whole. Therefore, after constructing the video training set based on the original video and the target video, the embodiment of the application also provides an implementation for repairing old movies using the above-mentioned video repair model: first, train the video repair model on the video training set to obtain the trained video repair model; then input the video to be repaired into the trained model to obtain the repaired video it outputs.
  • the definition parameter of the video to be repaired is lower than the preset fourth threshold
  • the noise parameter is higher than the preset fifth threshold
  • the brightness parameter is lower than the preset sixth threshold.
  • the trained video repair model is used to repair each frame of image in the video.
  • After the video to be repaired is input, the repaired video that is output can be a video whose definition parameter is higher than the preset first threshold, whose noise parameter is lower than the preset second threshold, and whose brightness parameter is higher than the preset third threshold.
  • the fourth threshold is lower than the first threshold
  • the fifth threshold is lower than the second threshold
  • the sixth threshold is lower than the third threshold.
  • In addition, an embodiment of the present application may further include: acquiring an original image frame in the first video; determining a first area in the original image frame, where the original image frame includes the first area and a second area, the first area in the original image frame is in an under-exposed state, and the second area in the original image frame is not in an under-exposed state; performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different areas in the image to a normal exposure state and both the first area and the second area in the target image frame are in a normal exposure state; and generating a second video according to the target image frame.
  • For details, refer to the subsequent embodiments, which will not be repeated here.
  • In addition, an embodiment of the present application may further include: performing contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state; acquiring illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • an embodiment of the present application also provides a video repair method. Referring to the flowchart of a video repair method shown in FIG. 2, the method may include the following steps:
  • Step S202 Obtain a video to be repaired.
  • the video to be repaired may be an early TV series or movie or other film and television works, or it may be a damaged low-quality video.
  • the low-quality video may be a video whose resolution is lower than the preset fourth threshold, noise is higher than the preset fifth threshold, and brightness is lower than the preset sixth threshold.
  • Step S204 Input the video to be repaired into the video repair model, and obtain the repaired video output by the video repair model.
  • the video repair model is obtained by training the initial video repair model according to the video training set, and the video training set includes the original video and the target video corresponding to the original video.
  • the target video is obtained after adjusting the quality influencing parameters of the original video, and the video quality of the target video is lower than the video quality of the original video.
  • the quality influencing parameters include at least two of the following: noise parameters, brightness parameters, and sharpness parameters.
  • The repair model provided in the embodiment of the present application may include a convolutional neural network (CNN).
  • This embodiment of the application provides a training method for the repair model; see the following steps 1 and 2:
  • Step 1 Obtain the video training set.
  • the video training set is obtained by the method for constructing the video training set provided in the foregoing embodiment, and includes a large number of original videos and target videos corresponding to the original videos.
  • Step 2 Use the target video in the video training set as the input of the convolutional neural network, and use the original video in the video training set as the output of the convolutional neural network to train the convolutional neural network.
  • the convolutional neural network learns the mapping relationship between the input and the output to obtain the video repair model needed to repair the video, and repair low-quality film and television works or videos through the video repair model.
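The sketch below shows, in PyTorch, what steps 1 and 2 could look like; the tiny network architecture, the loss, and the hyperparameters are all assumptions for illustration, since the patent only specifies that a convolutional neural network learns the mapping from target (degraded) frames to original (clean) frames.

```python
import torch
import torch.nn as nn

class RepairNet(nn.Module):
    """A deliberately tiny stand-in for the repair CNN; the patent does
    not fix an architecture, so this network is an assumption."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict a residual over the degraded frame

def train(model, loader, epochs=10, lr=1e-4):
    """loader yields (degraded, clean) frame pairs: frames of the target
    video are the input and frames of the original video are the output
    the network learns to produce."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for degraded, clean in loader:
            opt.zero_grad()
            loss = loss_fn(model(degraded), clean)
            loss.backward()
            opt.step()
```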
  • The video repair model's processing of the video to be repaired is equivalent to denoising, dark-area stretching, and deblurring it, so as to obtain a high-definition repair result corresponding to the low-quality video to be repaired.
  • the video to be repaired needs to be input into the pre-trained video repair model to obtain the repaired high-definition video (that is, the aforementioned repair result).
  • In the related art, multiple neural network models perform a complicated and cumbersome staged repair in which each processing stage has a certain impact on the repair result, so errors accumulate in the repaired video. By contrast, the video repair method provided by the embodiments of the present application repairs the video as a whole through only one neural network, which effectively alleviates the problem of accumulated errors in the repair results of the related art.
  • In addition, an embodiment of the present application may further include: acquiring an original image frame in the first video; determining a first area in the original image frame, where the original image frame includes the first area and a second area, the first area in the original image frame is in an under-exposed state, and the second area in the original image frame is not in an under-exposed state; performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different areas in the image to a normal exposure state and both the first area and the second area in the target image frame are in a normal exposure state; and generating a second video according to the target image frame.
  • For details, refer to the subsequent embodiments, which will not be repeated here.
  • In addition, an embodiment of the present application may further include: performing contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state; acquiring illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • an embodiment of the present application also provides a video processing device.
  • the device may include the following parts:
  • the original video obtaining module 302 is configured to obtain the original video.
  • the parameter adjustment module 304 is configured to adjust the quality influencing parameters of the original video to obtain the target video corresponding to the original video, and the video quality of the target video is lower than the video quality of the original video; wherein the quality influencing parameters include at least two of the following: noise Parameters, brightness parameters and resolution parameters.
  • the training set construction module 306 is set to construct a video training set based on the original video and the target video; wherein the video training set saves the corresponding relationship between the target video and the original video, the video training set is used to train the video repair model, and the video repair after the training The model is used to repair the video.
  • By adjusting at least two quality influencing parameters, the video processing device can obtain a target video with lower video quality than the original video, can effectively use the target video to directly imitate an old film, and can finally construct a video training set based on the original video and the target video. Since the video training set contains both the original video and the target video, it can be used directly to train the video repair model, effectively solving the difficulty of obtaining a training set for a video repair model that can directly repair old films as a whole.
  • The above-mentioned parameter adjustment module 304 is further configured to: add random noise to each image frame in the original video when the quality influencing parameters include a noise parameter; randomly adjust the brightness of each image frame in the original video when the quality influencing parameters include a brightness parameter; when the quality influencing parameters include a definition parameter, adjust the first resolution of each image frame in the original video to the target resolution and then back to the first resolution, where the target resolution is smaller than the first resolution; and use the original video after adjusting the quality influencing parameters as the target video corresponding to the original video.
  • The above-mentioned parameter adjustment module 304 is further configured to perform the following operations on each image frame in the original video: determine the first random noise level within a pre-configured random noise interval, and add compression noise of the first random noise level to the frame.
  • The above-mentioned parameter adjustment module 304 is further configured to: if the color format of the image frames in the original video is a color format other than the YUV format, convert the color format of each image frame in the original video to the YUV format, and, on the Y channel, reduce the brightness parameter of each converted image frame by a random value.
  • The aforementioned parameter adjustment module 304 is further configured to perform the following operations on each image frame in the original video: randomly select a resolution from the resolution set, determine the randomly selected resolution as the target resolution, adjust the first resolution of the image frame to the target resolution, and then adjust it back to the first resolution.
  • the above-mentioned video processing device further includes a repair module configured to: after constructing a video training set based on the original video and the target video, train the video repair model according to the video training set to obtain the trained video repair Model, where the trained video repair model is used to repair each frame of image in the video; the video to be repaired is input to the trained video repair model, and the repaired video output by the trained video repair model is obtained.
  • In an embodiment, the above-mentioned video processing device further includes: an image extraction module configured to obtain an original image frame in the first video; an image analysis module configured to determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an under-exposed state, and the second area in the original image frame is not in an under-exposed state; an enhancement processing module configured to perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different regions in an image to a normal exposure state and both the first area and the second area in the target image frame are in a normal exposure state; and a video generation module configured to generate a second video according to the target image frame.
  • In an embodiment, the above-mentioned video processing device further includes: an enhancement module configured to perform contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state;
  • a first acquisition module configured to acquire the illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel;
  • a fusion module configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • an embodiment of the present application also provides a video repairing device.
  • the device may include the following parts:
  • the to-be-repaired video acquisition module 402 is configured to acquire the to-be-repaired video.
  • The repair module 404 is configured to input the video to be repaired into the video repair model to obtain the repaired video output by the video repair model. The video repair model is obtained by training an initial video repair model on a video training set; the video training set includes an original video and a target video corresponding to the original video; the target video is obtained after adjusting the quality influencing parameters of the original video, and its video quality is lower than that of the original video; and the quality influencing parameters include at least two of the following: a noise parameter, a brightness parameter, and a definition parameter.
  • In an embodiment, the above-mentioned video repair device further includes: an image extraction module configured to obtain an original image frame in the first video; an image analysis module configured to determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an under-exposed state, and the second area in the original image frame is not in an under-exposed state; an enhancement processing module configured to perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different regions in an image to a normal exposure state and both the first area and the second area in the target image frame are in a normal exposure state; and a video generation module configured to generate a second video according to the target image frame.
  • In an embodiment, the above-mentioned video repair device further includes: an enhancement module configured to perform contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an under-exposed state, the second area in the original image is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state; a first acquisition module configured to acquire the illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image and the illumination weight is related to the brightness of the corresponding pixel; and a fusion module configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where both the first area and the second area in the target enhanced image are in a normal exposure state.
  • the old film restoration process is split into multiple processing processes in the related art.
  • Each processing stage needs to be implemented by a corresponding neural network model, and because each stage has a certain impact on the repair result, errors accumulate in the repaired video.
  • This embodiment of the application can repair the video as a whole through only one neural network, which effectively alleviates the problem of accumulated errors in the repair results of the related art.
  • this embodiment also provides a video processing method. As shown in Figure 5, the method includes the following steps S502-S508.
  • Step S502 Obtain the original image frame in the first video.
  • the first video is a video that needs to be improved in image quality, for example, an old movie, an old TV series, and other types of videos.
  • An image frame is the smallest unit that composes a video.
  • An image frame is a still picture; based on the persistence of vision of human eyes, a video is formed by quickly playing multiple image frames in sequence.
  • The original image frames in this embodiment are the image frames that form the first video; there are multiple such frames.
  • a special video processing tool can be used to obtain the original image frame in the first video.
  • the video processing software FFmpeg can be used to cut the video into multiple image frames.
  • FFmpeg is a set of open source computer programs that can be used to record, convert digital audio and video, and convert them into streams. It provides a complete solution for recording, converting, and streaming audio and video.
  • the video can be decomposed into a sequence of pictures to obtain the original image frame in the first video.
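For example, decomposing the first video into a picture sequence with FFmpeg could look like the following Python sketch; the file names are placeholders.

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)
# Cut the first video into a sequence of still pictures (one PNG per frame);
# "first_video.mp4" is a placeholder input name.
subprocess.run(
    ["ffmpeg", "-i", "first_video.mp4", "frames/%06d.png"],
    check=True,
)
```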
  • Step S504 Determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is under-exposed, and the second area in the original image frame is not In a state of underexposure.
  • Under-exposure means insufficient exposure, which manifests as a lack of detail in the darker areas of the image;
  • over-exposure means excessive exposure, which manifests as a lack of detail in the brighter areas of the image.
  • Old movies, old TV series and other types of videos usually have the problem of underexposure.
  • the original image frame can be divided into a plurality of sub-regions, and then the exposure state of each sub-region can be judged, and the region in the under-exposed state, that is, the first region, can be determined.
  • the original image frame can be evenly divided into multiple grids, the grid shape is, for example, a square, a regular hexagon, etc., and each grid is regarded as a sub-region.
  • the image contour in the original image frame can also be determined first, and the area within each image contour is regarded as a sub-region.
  • When judging the exposure state, the shooting environment can be taken into account together with the average pixel value of the sub-region. For example, for an image of the sky: if the average pixel value of a sub-region exceeds 120 but does not exceed 150, the sub-region is judged to be in a normal exposure state; if the average pixel value does not exceed 120, the sub-region is judged to be in an under-exposed state; and if the average pixel value exceeds 150, the sub-region is judged to be in an over-exposed state.
  • The exposure state can also be judged from the brightness distribution of the sub-region. For example, if the peaks of the brightness histogram of a sub-region are concentrated on the right side, the sub-region is judged to be in an over-exposed state; if the peaks of the brightness histogram are distributed uniformly, the sub-region is judged to be in a normal exposure state.
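A minimal sketch of the mean-value criterion from the sky-image example above; the 120/150 thresholds come from that example, and in practice they would depend on the shooting environment.

```python
import numpy as np

def exposure_state(region: np.ndarray, under_thr=120, over_thr=150) -> str:
    """Classify one sub-region (a uint8 array) by its average pixel value."""
    mean = region.mean()
    if mean <= under_thr:
        return "under-exposed"
    if mean > over_thr:
        return "over-exposed"
    return "normal"
```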
  • Step S506 Perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure state of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both at Exposure is normal.
  • an enhancement process is performed on the original image frame, and the enhancement process can adjust the exposure state of different regions of the image to a normal exposure state.
  • the first area in the target image frame corresponds to the first area in the original image frame
  • the second area in the target image frame corresponds to the second area in the original image frame.
  • In an embodiment, step S506 further includes: obtaining an auxiliary image frame by performing contrast enhancement processing on the original image frame, where the first area in the auxiliary image frame is in a normal exposure state and the second area in the auxiliary image frame is in an over-exposed state; and performing image fusion on the original image frame and the auxiliary image frame to obtain the target image frame.
  • Contrast enhancement processing commonly used in the related art adjusts the originally under-exposed area to a normal exposure state, but often also pushes the originally normally-exposed area into an over-exposed state.
  • the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame.
  • the contrast enhancement processing can adopt methods such as histogram equalization and gray scale transformation.
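As one possible realization of the contrast enhancement step, the sketch below equalizes the histogram of the luminance channel with OpenCV; this is just the histogram-equalization option mentioned above, not the only method the text allows.

```python
import cv2
import numpy as np

def contrast_enhance(frame_bgr: np.ndarray) -> np.ndarray:
    """Histogram-equalize the Y channel to obtain the auxiliary image frame.

    Under-exposed areas become normally exposed, at the cost of pushing
    areas that were already normal toward over-exposure.
    """
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```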
  • the target image frame is obtained by performing image fusion on the original image frame and the auxiliary image frame.
  • image fusion refers to the process of synthesizing the information of two images to obtain the target image.
  • In an embodiment, the image fusion process includes: obtaining the illumination weight information of the original image frame, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image frame and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion on the original image frame and the auxiliary image frame according to the illumination weight information to obtain the target image frame.
  • The above-mentioned illumination weight information can represent the light and dark intensity of the original image frame, and the illumination weight can be either positively or negatively correlated with the brightness of the corresponding pixel.
  • The illumination weight information can be characterized by a weight image whose size is consistent with that of the original image. Taking the case where the illumination weight is positively correlated with the brightness of the corresponding pixel as an example, in the weight image the illumination weight corresponding to the darker area of the original image frame (the first area) is smaller, while the illumination weight corresponding to the brighter area (the second area) is larger.
  • The above-mentioned original image frame and weight image can both be expressed in the form of an image matrix (a two-dimensional matrix); the rows of the image matrix correspond to the height of the image (in pixels), and the columns of the image matrix correspond to the width of the image (in pixels).
  • The elements in the image matrix of the original image frame correspond to the pixel values of the pixels of the original image frame, and the elements in the image matrix of the weight image (the illumination weights) are related to the brightness of the pixels in the original image frame.
  • The image matrix of the original image frame and the image matrix of the weight image have the same numbers of rows and columns, and elements at the same position in the two matrices correspond to the same pixel in the original image.
  • During fusion, the original image frame and the auxiliary image frame can be assigned different fusion weights based on the illumination weight information, so that for the weakly illuminated area (the first area) the fusion weight of the original image frame is smaller and the fusion weight of the auxiliary image frame is larger, while for the strongly illuminated area (the second area) the fusion weight of the original image frame is larger and the fusion weight of the auxiliary image frame is smaller.
  • In an embodiment, the auxiliary image frame and the original image frame may be merged pixel by pixel. Consider a pixel A1 in the original image frame (its pixel value denoted a1) and the corresponding pixel A2 in the auxiliary image frame (its pixel value denoted a2), and let the illumination weight of pixel A1 be p; the fused pixel value can then be taken as p * a1 + (1 - p) * a2. The illumination weight is related to the brightness of the corresponding pixel; for example, a pixel with higher brightness corresponds to a larger illumination weight. In one example, the illumination weight of pixels in the first area is 0 and that of pixels in the second area is 1; in another example, the illumination weight of pixels in the first area is 0.2 and that of pixels in the second area is 0.7.
  • the illumination weight in this embodiment makes the contribution of the first area in the original image frame to the target image frame smaller, and the contribution of the second area in the original image frame to the target image frame larger during the fusion process. Therefore, while the target image frame improves the brightness of the first area in the original image frame, it can also effectively retain the brightness information of the second area in the original image frame to prevent overexposure.
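A minimal sketch of this pixel-wise fusion, assuming the illumination weight map is derived from the normalized luminance of the original frame (brighter pixel, larger weight); the exact construction of the weight map is an assumption, and the fused value follows p * original + (1 - p) * auxiliary as described above.

```python
import cv2
import numpy as np

def fuse(original: np.ndarray, auxiliary: np.ndarray) -> np.ndarray:
    """Fuse the original frame with its contrast-enhanced auxiliary frame.

    Dark regions (small p) are taken mostly from the auxiliary frame,
    bright regions (large p) mostly from the original, which brightens
    the first area while preventing over-exposure of the second area.
    """
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY).astype(np.float32)
    p = (gray / 255.0)[:, :, None]  # per-pixel illumination weight in [0, 1]
    fused = p * original.astype(np.float32) + (1.0 - p) * auxiliary.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```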
  • Step S508 Generate a second video according to the target image frame.
  • The target image frames are spliced into a video in order, thereby obtaining the processed video, that is, the second video.
  • By performing enhancement processing on the original image frame, the video processing method in this embodiment adjusts the under-exposed area in the original image frame to a normal exposure state while preventing over-exposure of the normally exposed area, which helps improve the image quality of the video.
  • In an embodiment, the above-mentioned video processing method further includes: performing denoising processing on the original image frame to obtain an updated original image frame, where the denoising processing is used to reduce the noise of the image; and, based on the updated original image frame, performing the step of obtaining the auxiliary image frame by performing contrast enhancement processing on the original image frame.
  • Image noise refers to unnecessary or redundant interference information existing in image data.
  • the causes of image noise usually involve multiple aspects. For example, in the image acquisition stage, due to internal factors such as photoelectric characteristics, equipment mechanical movement, equipment materials, equipment circuits, etc., as well as external factors such as electromagnetic wave interference, image noise will be caused. After the image acquisition is completed, new noise will be introduced in the transmission and decompression of image data. Image noise will affect the picture quality of old movies.
  • noise removal processing is performed on the noise in the original image frame.
  • Mean filter, adaptive Wiener filter, median filter, morphological noise filter, wavelet denoising and other means can be used to remove the noise in the image.
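  • As a minimal illustration of the classical filters mentioned above (a sketch assuming OpenCV and SciPy are available; the 5x5 window sizes and the file name are illustrative choices, not values specified in this application):

      import cv2
      from scipy.signal import wiener

      frame = cv2.imread("frame_0001.png")          # one original image frame
      median_denoised = cv2.medianBlur(frame, 5)    # median filter with a 5x5 window
      mean_denoised = cv2.blur(frame, (5, 5))       # mean (box) filter with a 5x5 window
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
      wiener_denoised = wiener(gray, (5, 5))        # adaptive Wiener filter on the gray image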
  • performing denoising processing to reduce image noise on the original image frame includes: performing denoising processing to reduce image noise on the original image frame through a pre-trained denoising model.
  • the mixed noise formed by acquisition noise, compression noise, etc. in the image frame is subjected to denoising processing.
  • the denoising model is generated before performing the denoising process.
  • the process of generating the denoising model includes: adding random intensity noise to the original image to obtain a noisy image; training the convolutional neural network model according to the original image and the noisy image to obtain the denoising model.
  • the original image is a picture with higher picture quality.
  • the random noise added to the original image is, for example, Gaussian noise, salt and pepper noise, and the like.
  • the denoising model is generated in the form of a convolutional neural network.
  • a convolutional neural network (CNN) is a type of feedforward neural network that involves convolution calculations and has a deep structure; it is one of the representative algorithms of deep learning.
  • a training data set is formed according to the original image and the noise image to train the convolutional neural network, where the noise image is used as the input of the convolutional neural network, and the original image is used as the output of the neural network.
  • the trained convolutional neural network model is the denoising model.
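  • The following sketch illustrates this training scheme, assuming PyTorch; the three-layer network, the noise range, and the random stand-in patches are illustrative assumptions rather than the architecture of this application:

      import torch
      import torch.nn as nn

      class DenoiseCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1),
              )

          def forward(self, x):
              return self.body(x)

      model = DenoiseCNN()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = nn.MSELoss()

      for step in range(1000):
          clean = torch.rand(8, 3, 64, 64)            # stand-in for clean original patches
          sigma = torch.rand(1).item() * 0.1          # random noise intensity
          noisy = (clean + sigma * torch.randn_like(clean)).clamp(0, 1)
          loss = loss_fn(model(noisy), clean)         # noisy image as input, original as target
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()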
  • the above-mentioned video processing method further includes: performing edge enhancement processing on the original image frame to obtain an updated original image frame, where the edge enhancement processing is used to improve the definition of the contour edges in the image; and performing, based on the updated original image frame, the step of obtaining an auxiliary image frame by performing contrast enhancement processing on the original image frame.
  • edge enhancement processing is performed on the image frame, which is beneficial to improve the definition of the video.
  • methods such as high-pass filtering and spatial differentiation can be used to perform edge enhancement processing.
  • the spatial differentiation method is used for edge enhancement processing, and the gradient value is calculated by a gradient-mode operator. Because the gray level changes greatly at edges, the corresponding gradient values there are also larger, so strengthening the gray values of pixels with large gradient values can highlight the details at the edges, thereby achieving the purpose of edge enhancement.
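  • A minimal sketch of such gradient-based edge enhancement, assuming OpenCV; the strengthening coefficient alpha is an illustrative assumption:

      import cv2
      import numpy as np

      gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
      gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)        # horizontal gradient
      gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)        # vertical gradient
      grad = cv2.magnitude(gx, gy)                  # gradient magnitude
      alpha = 0.3
      enhanced = np.clip(gray + alpha * grad, 0, 255).astype(np.uint8)  # strengthen large-gradient pixels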
  • the above-mentioned video processing method further includes: performing super-resolution processing on the target image frame, where the super-resolution processing is used to improve the resolution of the image.
  • super-resolution processing may be performed on image frames based on a sparse coding method, a self-model method, a Bayes method, a pyramid algorithm, a deep learning method, and the like.
  • the super-resolution processing includes: using a pre-trained super-resolution model to perform super-resolution processing to increase the image resolution of the target image frame after the dark field enhancement processing.
  • super-resolution processing is performed on the target image frame based on the neural network model.
  • a super-resolution model is first generated before performing super-resolution processing.
  • the process of generating the super-resolution model includes: compressing the sample image to obtain a low-resolution image; training the convolutional neural network model according to the sample image and the low-resolution image to obtain the super-resolution model.
  • a high-resolution picture is selected as the sample image.
  • a low-resolution image is obtained by performing quality compression on the sample image.
  • a training data set is formed based on sample images and low-resolution images to train the convolutional neural network, where the low-resolution images are used as the input of the convolutional neural network, and the sample images are used as the output of the neural network.
  • the trained convolutional neural network model is the super-resolution model.
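  • A minimal sketch of constructing one training pair for the super-resolution model, assuming OpenCV; the 4x compression factor and file name are illustrative assumptions:

      import cv2

      sample = cv2.imread("high_res_sample.png")    # high-resolution sample image
      h, w = sample.shape[:2]
      low = cv2.resize(sample, (w // 4, h // 4), interpolation=cv2.INTER_AREA)  # compressed low-resolution image
      low_up = cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)           # network input at original size
      # (low_up, sample) then forms one (input, output) pair for training the network.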
  • the method further includes: performing super frame rate processing on the second video to obtain a third video for playback, where the super frame rate processing is used to improve the frame rate of the video.
  • the frame rate of early movies is usually low, which will affect the smoothness of video playback.
  • super frame rate processing is performed on the video to make the video play more smoothly.
  • a simple frame rate enhancement algorithm, a frame rate enhancement algorithm including motion compensation, or a frame rate enhancement algorithm based on an autoregressive model can be used to perform the super frame rate processing.
  • the frame averaging method among the simple frame rate enhancement algorithms is used to perform the super frame rate processing; that is, the weighted average of two adjacent frames of the second video is used as an interpolated frame and inserted between the two frames.
  • the original frames of the second video and the interpolated frames obtained by the interpolation processing are spliced in order to obtain the third video.
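  • A minimal sketch of the frame-averaging interpolation, assuming OpenCV and equal weights of 0.5 for the two adjacent frames:

      import cv2

      frame_a = cv2.imread("frame_0001.png")
      frame_b = cv2.imread("frame_0002.png")
      interp = cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)   # weighted average of adjacent frames
      cv2.imwrite("frame_0001_5.png", interp)                   # inserted between the two source frames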
  • Fig. 6 shows an example of the implementation of the video processing method in this embodiment.
  • the electronic device first uses the FFmpeg video processing tool to cut the video into multiple image frames, that is, execute step S601.
  • the electronic device inputs the image frame into a pre-trained convolutional neural network model to obtain a denoised image frame, that is, step S602 is executed.
  • the electronic device adopts a filtering or matrix-based edge sharpening method to perform edge sharpening processing on the image frame, that is, step S603 is executed.
  • for each image frame after edge enhancement, the electronic device constructs an auxiliary image through an existing image enhancement method (step S604), and merges the image frame and the auxiliary image according to specific weights so as to improve the image quality of the weak-brightness areas in the image frame while maintaining the brightness of its other areas (step S605).
  • for each image frame after the dark-field enhancement processing, the electronic device inputs the image frame into a pre-trained convolutional neural network to obtain an image frame with a higher resolution, that is, step S606 is executed.
  • for the multiple image frames after the super-resolution processing, the electronic device inserts new image frames into the multiple image frames based on a frame rate enhancement algorithm using an autoregressive model, that is, step S607 is executed.
  • for the multiple image frames after the frame insertion processing, the electronic device splices these image frames into a video in sequence to obtain the processed video, that is, step S608 is executed.
  • This embodiment provides a video processing device, which includes an image extraction module, an image analysis module, an enhancement processing module, and a video generation module.
  • the image extraction module is configured to obtain the original image frame in the first video.
  • the image analysis module is configured to determine the first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the first area in the original image frame The second area is not under-exposed.
  • the enhancement processing module is configured to perform enhancement processing on the original image frame to obtain the target image frame, wherein the enhancement processing is used to adjust the exposure state of different areas in the image to the normal exposure state, and the first area and the second area in the target image frame The areas are all in a normal state of exposure.
  • the video generation module is configured to generate the second video according to the target image frame.
  • when the enhancement processing module performs enhancement processing on the original image frame to obtain the target image frame, it is set to: obtain an auxiliary image frame by performing contrast enhancement processing on the original image frame, where the first area in the auxiliary image frame is in a normal exposure state and the second area in the auxiliary image frame is in an overexposed state; and perform image fusion on the original image frame and the auxiliary image frame to obtain the target image frame.
  • when the enhancement processing module fuses the original image frame and the auxiliary image frame to obtain the target image frame, it is set to: obtain the illumination weight information of the original image frame, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image frame, and the illumination weight is related to the brightness of the corresponding pixel; and perform image fusion on the original image frame and the auxiliary image frame according to the illumination weight information to obtain the target image frame.
  • the video processing device further includes a denoising processing module configured to: perform denoising processing on the original image frame before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, so as to obtain an updated original image frame, where the denoising processing is used to reduce the noise of the image.
  • the enhancement processing module then executes, based on the updated original image frame, the step of obtaining an auxiliary image frame by performing contrast enhancement processing on the original image frame.
  • the video processing device further includes an edge enhancement processing module configured to perform edge enhancement processing on the original image frame before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, so as to obtain an updated original image frame, where the edge enhancement processing is used to improve the definition of the contour edges in the image.
  • the enhancement processing module then executes, based on the updated original image frame, the step of obtaining an auxiliary image frame by performing contrast enhancement processing on the original image frame.
  • the video processing device further includes a super-resolution module configured to perform super-resolution processing on the target image frame after the original image frame and the auxiliary image frame are fused to obtain the target image frame, where the super-resolution processing is used to improve the resolution of the image.
  • the video processing device further includes a super frame rate module configured to: after the second video is generated according to the target image frames, perform super frame rate processing on the second video to obtain a third video, where the super frame rate processing is used to increase the frame rate of the video.
  • local areas of an old film (such as a video image shot in the 1980s or 1990s) tend to be too dark, resulting in a poor viewing experience; these overly dark areas belong to the dark field of the old film.
  • when a current image enhancement method is used to enhance the dark field of an old film, the darker areas of the video image are often insufficiently enhanced while the brighter areas are over-enhanced, which causes color distortion of the enhanced video image and affects the final enhancement effect.
  • embodiments of the present application also provide an image enhancement method, device, electronic device, and computer-readable storage medium, which can improve the enhancement effect and alleviate the problem of color distortion caused by image enhancement.
  • the above-mentioned image enhancement processing for old movies is only an exemplary application scenario of the embodiment of the present application, and the protection scope of the embodiment of the present application is not limited to this.
  • the image enhancement method, apparatus, electronic device, and computer-readable storage medium can also be applied to other images with many dark areas or uneven light and dark areas.
  • the embodiments of the present application provide an image enhancement method, which can be executed by an electronic device with image processing capabilities.
  • the electronic device can be, but is not limited to, any of the following: a desktop computer, a notebook computer, a tablet computer, a smartphone, and the like.
  • the method mainly includes the following steps S702 to S706:
  • Step S702 Perform contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is under-exposed, and the second area in the original image Not in the under-exposed state, the first area in the initial enhanced image is not in the under-exposed state, and the second area in the initial enhanced image is in the over-exposed state.
  • the above-mentioned original image to be processed is an image that needs image enhancement. It may be one frame of video picture (in units of frames) in an old film, or another image with many dark areas or uneven light and dark areas.
  • the original image has a first area (darker area) in an under-exposed state and a second area that is not in an under-exposed state.
  • the second area is a normal-exposed area in a normal-exposed state or an over-exposed area in an over-exposed state.
  • a histogram equalization method, a Retinex algorithm, or a dark channel dehazing algorithm may be used to perform contrast enhancement processing on the original image to be processed to obtain the initial enhanced image.
  • the following respectively introduces the above-mentioned histogram equalization method, Retinex algorithm and dark channel dehazing algorithm.
  • the foregoing histogram equalization method is simple to calculate and easy to implement.
  • the principle of the histogram equalization method is as follows: the histogram equalization essentially non-linearly stretches the original image and redistributes the image pixel values so that the number of pixel values in a certain gray scale range is approximately equal. In this way, the contrast of the top part of the peak in the middle of the original histogram is enhanced, while the contrast of the bottom part of the valley on both sides is reduced, so that the overall image is enhanced.
  • the foregoing histogram equalization methods include DHE (dynamic histogram equalization) and CLAHE (contrast-limited adaptive histogram equalization).
  • DHE divides the histogram corresponding to the original image into different parts, and performs the histogram equalization operation in each histogram subset.
  • CLAHE adaptively limits the degree of contrast enhancement of histogram equalization.
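  • A minimal sketch of CLAHE, assuming OpenCV; clipLimit, tileGridSize, and the file name are illustrative assumptions:

      import cv2

      gray = cv2.imread("old_film_frame.png", cv2.IMREAD_GRAYSCALE)
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      enhanced = clahe.apply(gray)    # contrast enhancement with an adaptively limited degree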
  • Retinex algorithm (Retinex model): The Retinex algorithm estimates the illumination image based on the original image under the condition that the reflectance image is unknown. The Retinex algorithm can well enhance the contrast of dark areas and show more details.
  • Dark channel defogging algorithm (dark channel defogging model): After the original image is inverted, the dark channel defogging algorithm is used to process it, and then the result is inverted again to obtain the initial enhanced image.
  • Step S704 Obtain light weight information of the original image, where the light weight information is set to indicate the light weight corresponding to each pixel in the original image, and the light weight is related to the brightness of the corresponding pixel.
  • the above-mentioned light weight information can represent the light and dark intensity of the original image, and the light weight and the brightness of the corresponding pixel can be positively correlated or negatively correlated.
  • the illumination weight information can be represented by a weight image whose size is consistent with the size of the original image. Taking the case where the illumination weight is positively correlated with the brightness of the corresponding pixel as an example, the illumination weights in the weight image corresponding to the darker area (the first area) of the original image are smaller, while the illumination weights corresponding to the lighter area (the second area) of the original image are larger.
  • both the original image and the weight image can be expressed in the form of an image matrix (two-dimensional matrix).
  • the rows of the image matrix correspond to the height of the image (in pixels), and the columns of the image matrix correspond to the width of the image (in pixels).
  • the elements in the image matrix of the original image correspond to the pixel values of the pixels of the original image, and the elements in the image matrix of the weighted image (lighting weight) are related to the brightness of the pixels in the original image.
  • the image matrix of the original image and the image matrix of the weight image have the same numbers of rows and columns, and elements at the same position in the two matrices correspond to the same pixel in the original image.
  • Step S706 Perform image fusion processing on the original image and the initial enhanced image according to the light weight information to obtain the target enhanced image, wherein the first area and the second area in the target enhanced image are both in a normal exposure state.
  • the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state. Therefore, based on the illumination weight information, different fusion weights can be assigned to the original image and the initial enhanced image, so that for the weakly illuminated area (the first area) the fusion weight of the original image is smaller and the fusion weight of the initial enhanced image is larger, while for the strongly illuminated area (the second area) the fusion weight of the original image is larger and the fusion weight of the initial enhanced image is smaller.
  • the above-mentioned illumination weight is positively correlated with the brightness of the corresponding pixel, and the original image and the initial enhanced image are fused by the following formula (the products being taken element-wise) to obtain the target enhanced image:

      D = I ∘ W1 + E ∘ (P − W1)

  • where D represents the image matrix of the target enhanced image, I represents the image matrix of the original image, W1 represents the weight matrix when the illumination weight is positively correlated with the brightness of the corresponding pixel, the weight matrix being determined according to the illumination weight corresponding to each pixel in the original image indicated by the illumination weight information, E represents the image matrix of the initial enhanced image, and P represents the unit matrix corresponding to the original image, that is, a matrix with the same numbers of rows and columns as the image matrix of the original image.
  • the weight matrix may be composed of the respective illumination weights arranged according to the positions of the corresponding pixels; for example, the weight matrix may be the image matrix of the weight image.
  • when the above-mentioned illumination weight is negatively correlated with the brightness of the corresponding pixel, the original image and the initial enhanced image are fused by the following formula (the products again taken element-wise) to obtain the target enhanced image:

      D = I ∘ (P − W2) + E ∘ W2

  • where D represents the image matrix of the target enhanced image, I represents the image matrix of the original image, W2 represents the weight matrix when the illumination weight is negatively correlated with the brightness of the corresponding pixel, the weight matrix being determined according to the illumination weight corresponding to each pixel of the original image indicated by the illumination weight information, E represents the image matrix of the initial enhanced image, and P represents the unit matrix corresponding to the original image.
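  • A minimal sketch of both fusion formulas, assuming NumPy arrays scaled to [0, 1]; the function name is hypothetical:

      import numpy as np

      def fuse(I, E, W, positive=True):
          # P is the unit matrix of the formulas, read element-wise as a matrix of ones
          P = np.ones_like(W)
          if positive:                      # illumination weight positively correlated with brightness
              return I * W + E * (P - W)
          return I * (P - W) + E * W        # negatively correlated case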
  • there is no fixed order of execution between step S702 and step S704: in the embodiment shown in FIG. 7, step S702 is performed first and then step S704; but in other embodiments, step S704 may also be performed first and then step S702.
  • in summary, contrast enhancement processing is performed on the original image to be processed to obtain the initial enhanced image; the weight image corresponding to the original image is obtained, where the weight image includes the weight corresponding to each pixel in the original image and the weight is related to the brightness of the corresponding pixel; and the original image and the initial enhanced image are fused according to the weight image to obtain the target enhanced image.
  • in this way, full consideration is given to the different brightness of the pixels in different areas of the image, so that when the original image and the initial enhanced image obtained by the contrast enhancement processing are fused, pixels of different brightness are given correspondingly different weights. This effectively improves the problems in the related art that the darker areas in an image are insufficiently enhanced while the brighter areas (normally exposed or overexposed areas) are easily over-enhanced, resulting in color distortion.
  • the following takes as an example the case where the original image is an image frame in a video to be processed, the color mode of the original image is the RGB mode, and the illumination weight information is represented by a weight image.
  • the above image enhancement method will be exemplarily described with reference to FIG. 8.
  • the method includes the following steps:
  • Step S802 Obtain a video to be processed.
  • the video to be processed here can be, but is not limited to, the old film mentioned above.
  • Step S804 Perform frame cutting processing on the video to be processed to obtain the original image.
  • the to-be-processed video is divided into one frame of video pictures (in units of frames), and these one frame of video pictures are the original images.
  • Step S806 Convert the color mode of the original image to the HSV mode to obtain the HSV channel image.
  • the color mode of the above-mentioned original image is the RGB mode, and the color mode of the original image needs to be converted to the HSV mode first to obtain the HSV channel image.
  • each color in the HSV channel image is represented by hue (H), saturation (S), and value (V).
  • the HSV channel image thus includes three channels: the H channel, the S channel, and the V channel.
  • the pixel value of each pixel in the HSV channel image is represented by the color values of the three channels, and the color value of the V channel is also called the gray value.
  • Step S808: perform enhancement processing on the V channel in the HSV channel image to obtain an enhanced HSV channel image, where the enhancement processing is used to equalize the gray values of the pixels in the HSV channel image according to the gray-scale distribution of the pixels on the V channel, so that the gray value of the pixel with the largest gray value in the HSV channel image is adjusted to the upper limit of a preset gray-scale interval, and the gray values of the pixels in the enhanced HSV channel image are uniformly distributed within the preset gray-scale interval.
  • the aforementioned preset gray-scale interval can be set according to actual needs, and it can be the same as the gray-scale interval corresponding to the original image or different from the gray-scale interval corresponding to the original image.
  • the preset gray-scale interval is set to 0-255.
  • the gray value of the pixel with the largest gray value in the HSV channel image is adjusted to 255.
  • a histogram equalization method may be used to perform enhancement processing on the V channel in the HSV channel image to obtain an enhanced HSV channel image.
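  • A minimal sketch of V-channel equalization, assuming OpenCV and an 8-bit image, so that the preset gray-scale interval is 0-255:

      import cv2

      bgr = cv2.imread("original.png")                 # OpenCV loads the RGB data in BGR order
      hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
      hsv[:, :, 2] = cv2.equalizeHist(hsv[:, :, 2])    # equalize only the V channel
      initial_enhanced = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)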
  • Step S810 Convert the color mode of the enhanced HSV channel image to an RGB mode to obtain an initial enhanced image.
  • the equalization adjustment of the gray values ensures that the gray values of the pixels in the enhanced HSV channel image appear with the same frequency at each gray level, so that the gray values of the pixels in the original image are raised. As a result, the first area, which is in an underexposed state in the original image, is no longer in an underexposed state in the initial enhanced image, and the second area, which is not in an underexposed state in the original image, is in an overexposed state in the initial enhanced image.
  • Step S812 Obtain the illumination image of the above-mentioned original image.
  • the Retinex algorithm may be used to obtain the reflection image of the original image, and then the illumination image of the original image may be determined based on the reflection image and the original image.
  • the Retinex algorithm can use a single-scale Retinex model, a multi-scale Retinex model, or an improved Retinex model.
  • the original image I can be regarded as the composition of the illumination image L and the reflection image R.
  • incident light illuminates a reflecting object, and the reflected light enters the human eye through the reflection of the object, forming the original image I seen by the human eye.
  • the illumination image L directly determines the brightness values of the pixels in the original image, and the reflection image R represents the intrinsic properties of the original image.
  • the direct relationship among I, R, and L can be expressed by the following formula:

      I(x, y) = R(x, y) · L(x, y)  (1)

  • where I(x, y) represents the pixel value at the pixel point (x, y) in the original image I, R(x, y) represents the pixel value at the pixel point (x, y) in the reflection image R, and L(x, y) represents the pixel value at the pixel point (x, y) in the illumination image L.
  • the reflection image R is estimated by the Retinex algorithm, so that the illumination image L is obtained.
  • the illumination image L is solved by the following formulas:

      r(x, y) = lg I(x, y) − lg[F(x, y) ∗ I(x, y)]  (2)
      F(x, y) = λ · exp(−(x² + y²) / c²)  (3)
      ∬ F(x, y) dx dy = 1  (4)

  • where r(x, y) represents the logarithm of R(x, y) with 10 as the base, x represents the row number of the pixel, y represents the column number of the pixel, ∗ denotes convolution, λ is the coefficient, and c represents the Gaussian surround scale.
  • c is a preset parameter that can be set according to actual needs, for example, c is set to 3, 5, or 7.
  • the value of λ can be determined by formulas (3) and (4); R(x, y) can then be solved according to formulas (1) and (2), and L(x, y) is obtained in turn.
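  • A minimal sketch of this illumination estimation, assuming OpenCV/NumPy; the Gaussian surround is realized with GaussianBlur, and the scale value 15 is an illustrative assumption:

      import cv2
      import numpy as np

      I = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) + 1.0  # avoid log(0)
      surround = cv2.GaussianBlur(I, (0, 0), sigmaX=15)       # F(x, y) * I(x, y)
      r = np.log10(I) - np.log10(surround)                    # formula (2)
      R = np.power(10.0, r)                                   # reflection image
      L = I / R                                               # illumination image, from I = R * L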
  • Step S814 Perform normalization processing on the above-mentioned illumination image to obtain a weighted image corresponding to the original image.
  • the above-mentioned weight image is used to characterize the light weight information of the original image.
  • the maximum and minimum normalization method or the standard normalization method may be used to normalize the pixel value of the above-mentioned illuminated image to obtain a weighted image corresponding to the original image.
  • the pixel value obtained by normalizing the pixel value of each pixel in the illumination image is the weight value (the size of the illumination weight) in the weight image.
  • the light weight in the weighted image is positively correlated with the brightness of the corresponding pixel.
  • the normalized pixel value can be calculated by the following formula:

      q = (p − p_min) / (p_max − p_min)

  • where p represents a pixel value in the illumination image L (that is, a pixel value before normalization), p_min and p_max represent the minimum and maximum pixel values in the illumination image L, and q represents the normalized pixel value (that is, the weight value in the weight image).
  • Step S816 Perform image fusion processing on the original image and the initial enhanced image according to the aforementioned weighted image to obtain the target enhanced image.
  • image fusion processing can be performed on the original image and the initial enhanced image by the following formula (the products being taken element-wise) to obtain the target enhanced image:

      D = I ∘ W + E ∘ (P − W)

  • where D represents the image matrix of the target enhanced image, I represents the image matrix of the original image, W represents the image matrix of the weight image, E represents the image matrix of the initial enhanced image, and P represents the unit matrix corresponding to the original image.
  • Step S818 Generate a target video for playback according to the target enhanced image corresponding to each image frame in the video to be processed.
  • the target enhanced image corresponding to each image frame can be obtained, and the target enhanced images corresponding to the image frames are merged in the original time sequence to obtain the target video for playback.
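  • A minimal sketch of this splicing step, assuming OpenCV; the frame rate, codec, frame count, and file names are illustrative assumptions:

      import cv2

      frames = [cv2.imread("enhanced_%04d.png" % i) for i in range(1, 101)]
      h, w = frames[0].shape[:2]
      writer = cv2.VideoWriter("target_video.mp4",
                               cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))
      for frame in frames:          # original time sequence
          writer.write(frame)
      writer.release()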
  • in summary, the video to be processed is cut into frames to obtain the original images; the histogram equalization method is used to obtain the initial enhanced image; the weight image is obtained based on the illumination image computed by the Retinex algorithm; and the original image and the initial enhanced image are fused according to the weight image to obtain the target enhanced image.
  • this method realizes adaptive dark-field enhancement of video images: it can enhance the dark areas of the original image, prevent the lighter areas of the original image from being overexposed after enhancement, ensure that the image colors are not distorted, and improve the quality of the video image.
  • in addition, this method is simple to calculate and easy to implement.
  • an embodiment of the present application also provides an image enhancement device.
  • the device includes:
  • the enhancement module 92 is configured to perform contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is under-exposed, and the original image The second area is not in an under-exposed state, the first area in the initial enhanced image is not in an under-exposed state, and the second area in the initial enhanced image is in an over-exposed state;
  • the first obtaining module 94 is configured to obtain the light weight information of the original image, where the light weight information is used to indicate the light weight corresponding to each pixel in the original image, and the light weight is related to the brightness of the corresponding pixel;
  • the fusion module 96 is configured to perform image fusion processing on the original image and the initial enhanced image according to the light weight information to obtain the target enhanced image, wherein the first region and the second region in the target enhanced image are both in a normal exposure state.
  • the enhancement module 92 performs contrast enhancement processing on the original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state.
  • the first obtaining module 94 obtains the illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel.
  • the fusion module 96 performs image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain the target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state.
  • when the above apparatus performs image fusion processing on the original image and the initial enhanced image obtained through the contrast enhancement processing to obtain the target enhanced image, it fully takes into account the different brightness of the pixels in different areas of the image (the illumination weight information), so that both the first area and the second area in the obtained target enhanced image are in a normal exposure state, thereby effectively improving the problems in the related art that the darker area in an image is insufficiently enhanced while the lighter area is easily over-enhanced, leading to color distortion.
  • the above-mentioned enhancement module 92 is configured to: convert the color mode of the original image to the HSV mode to obtain an HSV channel image; perform enhancement processing on the V channel in the HSV channel image to obtain an enhanced HSV channel image, where the enhancement processing is used to equalize the gray values of the pixels in the HSV channel image according to the gray-scale distribution of the pixels on the V channel, so that the gray value of the pixel with the largest gray value in the HSV channel image is adjusted to the upper limit of a preset gray-scale interval and the gray values of the pixels in the enhanced HSV channel image are uniformly distributed within the preset gray-scale interval; and convert the color mode of the enhanced HSV channel image to the RGB mode to obtain the initial enhanced image.
  • the above-mentioned first acquisition module 94 includes:
  • the obtaining unit 941 is configured to obtain the illumination image of the original image
  • the processing unit 942 is configured to perform normalization processing on the illumination image to obtain a weight image corresponding to the original image, where the weight image is used to represent the illumination weight information of the original image.
  • the above-mentioned acquisition unit 941 is configured to: acquire a reflection image of the original image; and determine the illumination image of the original image according to the reflection image and the original image.
  • the processing unit 942 is configured to: use a maximum-minimum normalization method or a standard normalization method to normalize the illumination image to obtain a weighted image corresponding to the original image.
  • the above-mentioned light weight is positively correlated with the brightness of the corresponding pixel;
  • the above-mentioned fusion module 96 is configured to perform image fusion processing on the original image and the initial enhanced image by the following formula (the products being taken element-wise) to obtain the target enhanced image:

      D = I ∘ W + E ∘ (P − W)

  • where D represents the image matrix of the target enhanced image, I represents the image matrix of the original image, W represents the weight matrix, which is determined according to the illumination weight corresponding to each pixel in the original image indicated by the illumination weight information, E represents the image matrix of the initial enhanced image, and P represents the unit matrix corresponding to the original image.
  • when the original image is an image frame in the video to be processed, as shown in FIG. 10, the above-mentioned apparatus further includes:
  • the second acquisition module 1002 is configured to acquire the video to be processed; perform frame cutting processing on the video to be processed to obtain the original image;
  • the generating module 1004 is configured to generate a target video for playback according to the target enhanced image corresponding to each image frame in the video to be processed.
  • the embodiment of the present application also provides a device, which is an electronic device. The electronic device includes a processor and a storage device; the storage device stores a computer program, and when the computer program is run by the processor, the method of any one of the above embodiments is executed.
  • FIG 11 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • the electronic device 100 includes a processor 50, a memory 51, a bus 52, and a communication interface 53; the processor 50, the communication interface 53, and the memory 51 are connected through the bus 52. The processor 50 is configured to execute an executable module, such as a computer program, stored in the memory 51.
  • the memory 51 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
  • the communication connection between the system network element and at least one other network element is realized through at least one communication interface 53 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, etc. may be used.
  • the bus 52 may be an ISA bus, a PCI bus, an EISA bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one bidirectional arrow is used in the figure, but this does not mean that there is only one bus or one type of bus.
  • the memory 51 is configured to store a program, and the processor 50 executes the program after receiving an execution instruction.
  • the method disclosed in any of the foregoing embodiments of the present application can be applied to the processor 50, or implemented by the processor 50.
  • the processor 50 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 50 or instructions in the form of software.
  • the aforementioned processor 50 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory 51, and the processor 50 reads the information in the memory 51, and completes the steps of the above method in combination with its hardware.
  • the computer program product of the readable storage medium provided by the embodiment of the present application includes a computer readable storage medium storing program code.
  • the instructions included in the program code can be configured to execute the method described in the previous method embodiment. For implementation, refer to the foregoing method embodiment, which will not be repeated here.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the related art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, a network device, or the like) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
  • in the first aspect, the problem that it is difficult to obtain a training set for a video repair model capable of directly repairing an old film as a whole can be effectively solved;
  • in the second aspect, repairing the video as a whole through only one neural network can effectively alleviate the problem of error accumulation in the repair results in the related art;
  • in the third aspect, by performing enhancement processing on the original image frame, the underexposed areas in the original image frame are adjusted to a normal exposure state while overexposure of the normally exposed areas in the original image frame is prevented, which helps improve the image quality of the video;
  • in the fourth aspect, the problems in the related art that the darker area (the first area) in an image is insufficiently enhanced while the lighter area (the second area) is easily over-enhanced, resulting in color distortion, are effectively improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

This application provides a video processing method, a video repair method, an apparatus, and an electronic device. The video processing method includes: obtaining an original video; adjusting quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, where the quality-affecting parameters include at least two of a noise parameter, a brightness parameter, and a sharpness parameter; and constructing a video training set based on the original video and the target video. The video training set stores the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos. This application can produce a video training set for training a neural network, thereby solving the problem that video repair cannot be performed directly based on a neural network because a training set is difficult to obtain. In addition, this application can also improve the processing effect and the processing efficiency of video repair.

Description

Video processing method, video repair method, apparatus, and device
This application claims priority to the Chinese patent application No. 201911126554.5, entitled "Video processing method, video repair method, apparatus, and electronic device", filed with the China National Intellectual Property Administration on November 15, 2019, the entire contents of which are incorporated herein by reference; this application claims priority to the Chinese patent application No. 201911118706.7, entitled "Video processing method, apparatus, and electronic device", filed with the China National Intellectual Property Administration on November 15, 2019, the entire contents of which are incorporated herein by reference; and this application claims priority to the Chinese patent application No. 201911126297.5, entitled "Image enhancement method, apparatus, electronic device, and computer-readable storage medium", filed with the China National Intellectual Property Administration on November 15, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of video processing, and in particular to a video processing method, a video repair method, an apparatus, and an electronic device.
Background
Old film repair (old video repairing) mainly restores early TV series or movies that are unclear. Compared with current films, old films are usually affected by the acquisition equipment of their time and suffer from problems such as low definition, so they need to be repaired to become clear and thus bring a better visual experience. Current old-film repair mainly uses manual restoration, which is time-consuming and labor-intensive and produces unstable results.
Summary of the Invention
In view of this, the embodiments of the present application aim to provide a video processing method, a video repair method, an apparatus, and an electronic device, which can improve the processing effect and the processing efficiency of video repair.
In a first aspect, an embodiment of the present application provides a video processing method, including: obtaining an original video; adjusting quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, where the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter; and constructing a video training set based on the original video and the target video, where the video training set stores the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos.
In a second aspect, an embodiment of the present application provides a video repair method, including: obtaining a video to be repaired; and inputting the video to be repaired into a video repair model to obtain a repaired video output by the video repair model, where the video repair model is obtained by training an initial video repair model with a video training set, the video training set includes an original video and a target video corresponding to the original video, the target video is obtained by adjusting quality-affecting parameters of the original video, the video quality of the target video is lower than that of the original video, and the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter.
In a third aspect, an embodiment of the present application provides a video processing apparatus, including: an original video obtaining module, configured to obtain an original video; a parameter adjustment module, configured to adjust quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, where the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a resolution parameter; and a training set construction module, configured to construct a video training set based on the original video and the target video, where the video training set stores the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos.
In a fourth aspect, an embodiment of the present application further provides a video repair apparatus, including: a to-be-repaired video obtaining module, configured to obtain a video to be repaired; and a repair module, configured to input the video to be repaired into a video repair model to obtain a repair result output by the video repair model, where the video repair model is obtained by training an initial video repair model with a video training set, the video training set includes an original video and a target video corresponding to the original video, the target video is obtained by adjusting quality-affecting parameters of the original video, the video quality of the target video is lower than that of the original video, and the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter.
According to the video processing method and apparatus provided by the first and third aspects of the embodiments of the present application, an original video is first obtained, quality-affecting parameters of the original video are adjusted to obtain a target video whose video quality is lower than that of the original video, and a video training set is constructed based on the original video and the target video. The quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter. The video training set stores the correspondence between the target video and the original video; training a video repair model with the video training set enables the trained video repair model to repair videos. By adjusting at least two quality-affecting parameters, a target video with lower video quality than the original video can be obtained, so that the target video can effectively imitate an old film directly, and a video training set can finally be constructed based on the original video and the target video. Since the video training set contains the original video and the target video, it can be used directly to train the video repair model, which effectively solves the problem that it is difficult to obtain a training set for a video repair model capable of directly repairing an old film as a whole.
According to the video repair method and apparatus provided by the second and fourth aspects of the embodiments of the present application, a video to be repaired is first obtained and then input into a video repair model to obtain a repair result output by the video repair model. The video repair model is obtained by training an initial video repair model with a video training set; the video training set includes an original video and a target video corresponding to the original video, the target video is obtained by adjusting quality-affecting parameters of the original video, and the video quality of the target video is lower than that of the original video. The quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter. Compared with the relatively cumbersome staged repair in the related art that uses multiple neural network models, where each processing stage affects the repair result and causes accumulated errors in the repaired video, the embodiments of the present application can repair a video as a whole through only one neural network, which can effectively alleviate the error-accumulation problem of the repair results in the related art.
In a fifth aspect, an embodiment of the present application further provides a video processing method, including:
obtaining an original image frame in a first video;
determining a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state;
performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state;
generating a second video according to the target image frame.
In a sixth aspect, an embodiment of the present application provides a video processing apparatus, including:
an image extraction module, configured to obtain an original image frame in a first video;
an image analysis module, configured to determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state;
an enhancement processing module, configured to perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state;
a video generation module, configured to generate a second video according to the target image frame.
According to the video processing method and apparatus provided by the fifth and sixth aspects of the embodiments of the present application, by performing enhancement processing on the original image frame, the underexposed areas in the original image frame are adjusted to a normal exposure state while overexposure of the normally exposed areas in the original image frame is prevented, which helps improve the image quality of the video.
In a seventh aspect, an embodiment of the present application further provides an image enhancement method, including:
performing contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state;
obtaining illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel;
performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state.
In an eighth aspect, an embodiment of the present application provides an image enhancement apparatus, including:
an enhancement module, configured to perform contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state;
a first obtaining module, configured to obtain illumination weight information of the original image, where the illumination weight information is used to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel;
a fusion module, configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state.
According to the image enhancement method and apparatus provided by the seventh and eighth aspects of the embodiments of the present application, contrast enhancement processing is performed on the original image to be processed to obtain an initial enhanced image; illumination weight information of the original image is obtained; and image fusion processing is performed on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image in which the first area and the second area are both in a normal exposure state. When the original image and the initial enhanced image obtained by the contrast enhancement processing are fused to obtain the target enhanced image, this approach fully takes into account the different brightness of the pixels in different areas of the image (the illumination weight information), so that the first area and the second area in the obtained target enhanced image are both in a normal exposure state, thereby effectively improving the problems in the related art that the darker area (the first area) in an image is insufficiently enhanced while the lighter area (the second area) is easily over-enhanced, causing color distortion.
In a ninth aspect, an embodiment of the present application further provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor executes the method of the first aspect, or the method of the second aspect, or the method of the fifth aspect, or the method of the seventh aspect.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, and when the computer program is run by a processor, the method of the first aspect, or the method of the second aspect, or the method of the fifth aspect, or the method of the seventh aspect is executed.
Brief Description of the Drawings
In order to explain the specific embodiments of the present application or the technical solutions in the related art more clearly, the drawings needed in the description of the specific embodiments or the related art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present application;
Fig. 2 is a flowchart of a video repair method provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a video repair apparatus provided by an embodiment of the present application;
Fig. 5 is a flowchart of a video processing method provided by an embodiment of the present application;
Fig. 6 is a flowchart of a specific example of the video processing method provided by an embodiment of the present application;
Fig. 7 is a schematic flowchart of an image enhancement method provided by an embodiment of the present application;
Fig. 8 is a schematic flowchart of another image enhancement method provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an image enhancement apparatus provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of another image enhancement apparatus provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to the embodiments. Obviously, the described embodiments are part of, rather than all of, the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
At present there are deep-learning-based ways of repairing old films, which mainly use multiple neural network models for staged repair; that is, the old-film repair process is split into multiple processing stages, each of which needs to be implemented by a corresponding neural network model, which is relatively complex and cumbersome. In addition, since each processing stage affects the repair result, the repaired video has accumulated errors. The inventors found through research that the reason the related art does not directly use one overall neural network model for the whole process is that a training set for training such an overall neural network model cannot currently be obtained.
On this basis, the embodiments of the present application provide a video processing method, a video repair method, an apparatus, and an electronic device, which can produce a video training set for training a video repair model, and the video repair model trained with this video training set can directly repair an old film as a whole. Compared with the existing staged repair using multiple neural network models, this embodiment can also effectively alleviate the error-accumulation problem of the repair results in the related art and helps improve the old-film repair effect.
To facilitate understanding of this embodiment, a video processing method disclosed in an embodiment of the present application is first introduced in detail. Referring to the schematic flowchart of a video processing method shown in Fig. 1, the method may include the following steps:
Step S102: obtain an original video.
In one implementation, a high-definition video can be selected as the original video, such as a high-definition video shot by the user with a device having a shooting function (for example, a smartphone or a camera), or a high-definition video downloaded by the user from the Internet. The high-definition video may be a video whose sharpness parameter is higher than a preset first threshold, whose noise parameter is lower than a preset second threshold, and whose brightness parameter is higher than a preset third threshold. In practical applications, a video upload channel can be provided so that the user can select and upload a high-definition video, and the uploaded high-definition video is used as the original video.
Step S104: adjust the quality-affecting parameters of the original video to obtain a target video corresponding to the original video.
The video quality of the target video is lower than that of the original video, and the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter. Considering that "old films" suffer from noise, dark areas, and blurring to varying degrees, at least two of the above quality-affecting parameters are adjusted. In practical applications, noise can be randomly added to the original video so that it contains noise of varying degrees, the brightness parameter of the original video can be randomly lowered to add dark areas, and the sharpness of the original video can be reduced to make it more blurred. In this way a target video with lower video quality is obtained, and the obtained target video can then imitate an "old film".
Step S106: construct a video training set based on the original video and the target video.
The video training set stores the correspondence between the target video and the original video and is used to train a video repair model; the trained video repair model is used to repair videos. Since the training process of a neural network learns the mapping between input and output, the embodiment of the present application constructs the video training set from the original video and its corresponding target video to train the neural network and obtain a video repair model for repairing videos. In one implementation, the target video can simulate the "old film" and serve as the input of the neural network, while the original video simulates the "repair result" of the "old film" and serves as the output of the neural network, so that the neural network learns the mapping between "old film" and "repair result" to obtain a video repair model for repairing old films.
With the method for constructing a video training set provided by the embodiment of the present application, by adjusting at least two quality-affecting parameters, a target video with lower video quality than the original video can be obtained, the target video can effectively imitate an old film directly, and a video training set can finally be constructed based on the original video and the target video. Since the video training set contains the original video and the target video, it can be used directly to train a video repair model, which effectively solves the problem that it is difficult to obtain a training set for a video repair model capable of directly repairing an old film as a whole.
To facilitate the execution of the above step S104, the embodiment of the present application cuts the original video into frames to obtain multiple image frames of the original video, and the goal of obtaining a target video whose video quality is lower than that of the original video can be achieved by adjusting at least two of the noise parameter, the brightness parameter, and the sharpness parameter of each image frame. The embodiment of the present application provides a specific implementation of adjusting the quality-affecting parameters of the original video to obtain the corresponding target video; see the following steps 1 to 4:
Step 1: when the quality-affecting parameters include the noise parameter, add random noise to each image frame in the original video. In one implementation, the following operations may be performed on each image frame in the original video; see steps 1.1 to 1.2:
Step 1.1: determine a first random noise level within a pre-configured random noise interval. Because old films contain noise of varying degrees, in one implementation a random noise interval is set with random numbers, and random noise (that is, compression noise) of different first random noise levels is randomly generated within this interval. The compression noise may include various kinds of noise such as JPEG (Joint Photographic Experts Group) compression noise, salt-and-pepper noise, Poisson noise, and Gaussian white noise.
Step 1.2: add compression noise of the first random noise level to the image frame in the original video. In one implementation, the same first random noise level may be added to every image frame of the original video, or different first random noise levels may be added to different image frames; the first random noise level can be chosen according to the actual situation so that the noise parameter of the original video after the compression noise is added is closer to the noise parameter of an old film.
Step 2: when the quality-affecting parameters include the brightness parameter, perform random brightness adjustment on each image frame in the original video. The embodiment of the present application provides one implementation of the random brightness adjustment: if the color format of the image frames in the original video is a color format other than the YUV (Luminance, Chrominance, Chroma) format, the color format of each image frame in the original video can be converted into the YUV format, and the brightness parameter of each converted image frame is lowered by a random value in the Y channel. Because early old-film shooting technology was poor and the environment affected shooting, the resulting films show dark areas in many places; enhancing dark areas is also an important task in old-film repair, so the embodiment of the present application simulates the dark areas of old films through step 2. In implementation, if the color format of the image frames in the original video is a format other than YUV, such as the RGB (Red, Green, Blue) format, directly adjusting the brightness values of the image frame may cause color distortion. Therefore, the embodiment of the present application converts the image format of the image frame into the YUV format and adjusts the brightness parameter of the YUV-format image frame through a gamma correction method. In the YUV format, the Y channel represents luminance, the U channel represents chrominance, and the V channel represents chroma; by adjusting only the Y channel with gamma correction and leaving the U channel and V channel unchanged, color distortion of the image frame can be avoided to a certain extent. In addition, to make the dark areas of the brightness-adjusted image frames random, in one implementation the parameters of the gamma correction method can be set randomly, applying gamma changes of random degrees to the Y channel so that the brightness parameters of different image frames change to different degrees.
If the format of the above second image is the RGB format, the embodiment of the present application further provides a method for converting the RGB format into the YUV format, where Y = 0.299R + 0.587G + 0.114B; U = -0.1687R - 0.3313G + 0.5B + 128; V = 0.5R - 0.4187G - 0.0813B + 128. According to the above formulas, an RGB-format image frame can be converted into a YUV-format image frame.
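As a minimal illustration of this brightness degradation step (a sketch assuming OpenCV; the gamma range is an illustrative assumption for the random darkening, not a value specified in this application):

    import cv2
    import numpy as np

    frame = cv2.imread("frame_0001.png")             # an RGB-format image frame (loaded as BGR)
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)     # convert to the YUV format
    y = yuv[:, :, 0].astype(np.float32) / 255.0
    gamma = np.random.uniform(1.5, 3.0)              # a gamma > 1 darkens the Y channel
    yuv[:, :, 0] = np.clip((y ** gamma) * 255.0, 0, 255).astype(np.uint8)
    darkened = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)  # U and V channels are left unchanged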
Step 3: when the quality-affecting parameters include the sharpness parameter, adjust the first resolution of each image frame in the original video to a target resolution and then back to the first resolution, where the target resolution is smaller than the first resolution. To facilitate understanding of this adjustment, the embodiment of the present application performs the following operations on each image frame of the original video: first randomly select one resolution from a resolution set and determine the randomly selected resolution as the target resolution, then adjust the first resolution of each image frame in the original video to the target resolution and back to the first resolution. Suppose the resolution set includes 480p, 720p, 1080p, 2K, 4K, 8K, and other resolutions. If the first resolution of every image frame is 8K and the selected target resolution is 480p, the image frame can first be downsampled at a random scale to obtain a 480p image frame, and the 480p image frame is then upsampled to restore its resolution to 8K. Because downsampling loses image information and upsampling cannot recover the lost information, the image frame after the sharpness parameter is adjusted becomes blurred. In another implementation, the upsampling operation may be performed on the third image first and then the downsampling operation, so as to obtain a blurred image frame. A sketch of this step is shown below.
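As a minimal illustration of this sharpness degradation step (a sketch assuming OpenCV; 480p as the randomly chosen target resolution is an illustrative assumption):

    import cv2

    frame = cv2.imread("frame_0001.png")     # e.g. a high-resolution image frame
    h, w = frame.shape[:2]
    down = cv2.resize(frame, (854, 480))     # downsample to the target resolution
    blurred = cv2.resize(down, (w, h))       # upsample back; the lost detail is not recovered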
Step 4: use the original video after the quality-affecting parameters are adjusted as the target video corresponding to the original video. It should be emphasized that the embodiment of the present application does not limit the order in which the noise parameter, the brightness parameter, and the sharpness parameter of the original video are adjusted; the order of adjusting the parameters can be set according to the actual situation.
Training the video repair model with the video training set obtained through the above steps S102 to S106 enables the video repair model to repair an old film as a whole. Therefore, after the video training set is constructed based on the original video and the target video, the embodiment of the present application further provides an implementation of repairing old films with the above video repair model. First, the video repair model is trained according to the video training set to obtain a trained video repair model; then the video to be repaired is input into the trained video repair model, and the repaired video output by the trained video repair model is obtained. The video to be repaired is a video whose sharpness parameter is lower than a preset fourth threshold, whose noise parameter is higher than a preset fifth threshold, and whose brightness parameter is lower than a preset sixth threshold; the trained video repair model repairs each frame of the video. By adjusting the noise parameter, brightness parameter, or sharpness parameter of each image frame, the input video to be repaired can become a video whose sharpness parameter is higher than the preset first threshold, whose noise parameter is lower than the preset second threshold, and whose brightness parameter is higher than the preset third threshold. The fourth threshold is lower than the first threshold, the fifth threshold is lower than the second threshold, and the sixth threshold is lower than the third threshold.
In one implementation, the embodiment of the present application may further include: obtaining an original image frame in a first video; determining a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state; performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state; and generating a second video according to the target image frame. For the related description, refer to the subsequent embodiments, which will not be repeated here.
In one implementation, the embodiment of the present application may further include: performing contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state; obtaining illumination weight information of the original image, where the illumination weight information is set to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state. For the related description, refer to the subsequent embodiments, which will not be repeated here.
Considering that in the related art old-film repair is divided into multiple processing stages, in which different neural networks repair the old film one after another, and each stage affects the video processing result, the obtained repair result contains accumulated errors. Therefore, the embodiment of the present application further provides a video repair method. Referring to the flowchart of a video repair method shown in Fig. 2, the method may include the following steps:
Step S202: obtain a video to be repaired. The video to be repaired may be an early film or television work such as a TV series or a movie, or a damaged low-quality video. In one implementation, the low-quality video may be a video whose resolution is lower than a preset fourth threshold, whose noise is higher than a preset fifth threshold, and whose brightness is lower than a preset sixth threshold.
Step S204: input the video to be repaired into the video repair model to obtain the repaired video output by the video repair model.
The video repair model is obtained by training an initial video repair model with a video training set; the video training set includes an original video and a target video corresponding to the original video. The target video is obtained by adjusting the quality-affecting parameters of the original video, and the video quality of the target video is lower than that of the original video. The quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter. Because convolutional neural networks (CNN) show better performance in image processing and speech recognition, the repair model provided by the embodiment of the present application may include a convolutional neural network. For ease of understanding, the embodiment of the present application provides a training method for the repair model; see steps 1 to 2 below:
Step 1: obtain the video training set. The video training set is obtained through the construction method provided by the foregoing embodiment and includes a large number of original videos and the target videos corresponding to the original videos.
Step 2: use the target videos in the video training set as the input of the convolutional neural network and the original videos in the video training set as the output of the convolutional neural network, and train the convolutional neural network. Through training, the convolutional neural network learns the mapping between input and output, yielding the video repair model needed to repair videos, and low-quality film and television works or videos are then repaired with the video repair model.
The processing of the video to be repaired by the video repair model is equivalent to performing denoising, dark-field stretching, and deblurring on the video to be repaired, so that a high-definition repair result corresponding to the low-quality video to be repaired is obtained. In implementation, the video to be repaired only needs to be input into the pre-trained video repair model to obtain the repaired high-definition video (that is, the aforementioned repair result).
Compared with the relatively cumbersome staged repair in the related art that uses multiple neural network models, where each processing stage affects the repair result and causes accumulated errors in the repaired video, the video repair method provided by the embodiment of the present application can repair a video as a whole through only one neural network, which can effectively alleviate the error-accumulation problem of the repair results in the related art.
In one implementation, the embodiment of the present application may further include: obtaining an original image frame in a first video; determining a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state; performing enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state; and generating a second video according to the target image frame. For the related description, refer to the subsequent embodiments, which will not be repeated here.
In one implementation, the embodiment of the present application may further include: performing contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state; obtaining illumination weight information of the original image, where the illumination weight information is set to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel; and performing image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state. For the related description, refer to the subsequent embodiments, which will not be repeated here.
For the video processing method provided by the foregoing embodiment, an embodiment of the present application further provides a video processing apparatus. Referring to the schematic structural diagram of a video processing apparatus shown in Fig. 3, the apparatus may include the following parts:
An original video obtaining module 302, configured to obtain an original video.
A parameter adjustment module 304, configured to adjust the quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, where the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a resolution parameter.
A training set construction module 306, configured to construct a video training set based on the original video and the target video, where the video training set stores the correspondence between the target video and the original video and is used to train a video repair model, and the trained video repair model is used to repair videos.
With the video processing apparatus provided by the embodiment of the present application, by adjusting at least two quality-affecting parameters, a target video with lower video quality than the original video can be obtained, the target video can effectively imitate an old film directly, and a video training set can finally be constructed based on the original video and the target video. Since the video training set contains the original video and the target video, it can be used directly to train a video repair model, which effectively solves the problem that it is difficult to obtain a training set for a video repair model capable of directly repairing an old film as a whole.
In one implementation, the parameter adjustment module 304 is further configured to: when the quality-affecting parameters include the noise parameter, add random noise to each image frame in the original video; when the quality-affecting parameters include the brightness parameter, perform random brightness adjustment on each image frame in the original video; when the quality-affecting parameters include the sharpness parameter, adjust the first resolution of each image frame in the original video to a target resolution and then back to the first resolution, where the target resolution is smaller than the first resolution; and use the original video after the quality-affecting parameters are adjusted as the target video corresponding to the original video.
In one implementation, the parameter adjustment module 304 is further configured to perform the following operations on each image frame in the original video: determine a first random noise level within a pre-configured random noise interval, and add compression noise of the first random noise level to the image frame in the original video.
In one implementation, the parameter adjustment module 304 is further configured to: when the color format of the image frames in the original video is a color format other than the YUV format, convert the color image format of each image frame in the original video into the YUV format, and lower the brightness parameter of each converted image frame by a random value in the Y channel.
In one implementation, the parameter adjustment module 304 is further configured to perform the following operations on each image frame in the original video: randomly select one resolution from a resolution set and determine the randomly selected resolution as the target resolution; adjust the first resolution of each image frame in the original video to the target resolution and then back to the first resolution.
In one implementation, the video processing apparatus further includes a repair module configured to: after the video training set is constructed based on the original video and the target video, train the video repair model according to the video training set to obtain a trained video repair model, where the trained video repair model is used to repair each frame of a video; and input the video to be repaired into the trained video repair model to obtain the repaired video output by the trained video repair model.
In one implementation, the above video processing apparatus further includes: an image extraction module, configured to obtain an original image frame in a first video; an image analysis module, configured to determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state; an enhancement processing module, configured to perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state; and a video generation module, configured to generate a second video according to the target image frame.
In one implementation, the above video processing apparatus further includes: an enhancement module, configured to perform contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state; a first obtaining module, configured to obtain illumination weight information of the original image, where the illumination weight information is set to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel; and a fusion module, configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state.
For the video repair method provided by the foregoing embodiment, an embodiment of the present application further provides a video repair apparatus. Referring to the schematic structural diagram of a video repair apparatus shown in Fig. 4, the apparatus may include the following parts:
A to-be-repaired video obtaining module 402, configured to obtain a video to be repaired.
A repair module 404, configured to input the video to be repaired into a video repair model to obtain the repaired video output by the video repair model, where the video repair model is obtained by training an initial video repair model with a video training set, the video training set includes an original video and a target video corresponding to the original video, the target video is obtained by adjusting the quality-affecting parameters of the original video, the video quality of the target video is lower than that of the original video, and the quality-affecting parameters include at least two of the following: a noise parameter, a brightness parameter, and a sharpness parameter.
In one implementation, the above video repair apparatus further includes: an image extraction module, configured to obtain an original image frame in a first video; an image analysis module, configured to determine a first area in the original image frame, where the original image frame includes a first area and a second area, the first area in the original image frame is in an underexposed state, and the second area in the original image frame is not in an underexposed state; an enhancement processing module, configured to perform enhancement processing on the original image frame to obtain a target image frame, where the enhancement processing is used to adjust the exposure states of different areas in the image to a normal exposure state, and the first area and the second area in the target image frame are both in a normal exposure state; and a video generation module, configured to generate a second video according to the target image frame.
In one implementation, the above video repair apparatus further includes: an enhancement module, configured to perform contrast enhancement processing on an original image to be processed to obtain an initial enhanced image, where the original image includes a first area and a second area, the first area in the original image is in an underexposed state, the second area in the original image is not in an underexposed state, the first area in the initial enhanced image is not in an underexposed state, and the second area in the initial enhanced image is in an overexposed state; a first obtaining module, configured to obtain illumination weight information of the original image, where the illumination weight information is set to indicate the illumination weight corresponding to each pixel in the original image, and the illumination weight is related to the brightness of the corresponding pixel; and a fusion module, configured to perform image fusion processing on the original image and the initial enhanced image according to the illumination weight information to obtain a target enhanced image, where the first area and the second area in the target enhanced image are both in a normal exposure state.
Compared with the related art, in which the old-film repair process is split into multiple processing stages each implemented by a corresponding neural network model, so that each stage affects the repair result and the repaired video has accumulated errors, the video repair apparatus provided by the embodiment of the present application can repair a video as a whole through only one neural network, which can effectively alleviate the error-accumulation problem of the repair results in the related art.
The implementation principle and technical effects of the apparatus provided by the embodiment of the present application are the same as those of the foregoing method embodiments. For brevity, for matters not mentioned in the apparatus embodiment, refer to the corresponding content in the foregoing method embodiments.
在修复老片的过程中,如果基于图像处理等算法对老旧影片进行自动处理,则需要提出一种能够从整体上提升老旧影片播放质量的技术方案。
On this basis, this embodiment further provides a video processing method. As shown in Fig. 5, the method includes the following steps S502 to S508.
Step S502: acquire an original image frame of a first video.
In this embodiment, the first video is a video whose picture quality needs to be improved, for example an old movie or an old TV series.
An image frame is the smallest unit of a video; one image frame is one still picture. Owing to the persistence-of-vision effect of the human eye, playing multiple image frames rapidly in sequence forms a video. The original image frames in this embodiment are the image frames that form the first video, and there are multiple of them.
In this embodiment, a dedicated video processing tool may be used to acquire the original image frames of the first video. For example, the video processing software FFmpeg may be used to cut the video into multiple image frames. FFmpeg is an open-source computer program that can record and convert digital audio and video and turn them into streams; it provides a complete solution for recording, converting and streaming audio and video. In one example, the command "ffmpeg -i video.mpg image%d.jpg" in FFmpeg decomposes the video into a picture sequence, thereby obtaining the original image frames of the first video.
Step S504: determine a first region in the original image frame, wherein the original image frame includes a first region and a second region, the first region of the original image frame is underexposed, and the second region of the original image frame is not underexposed.
Generally speaking, owing to shooting techniques, the environment and other factors, captured pictures may be abnormally exposed, i.e. underexposed or overexposed. Underexposure means insufficient exposure, which manifests as missing detail in the darker regions of the image; overexposure means excessive exposure, which manifests as missing detail in the brighter regions of the image. Old movies, old TV series and similar videos usually suffer from underexposure.
In this embodiment, the original image frame may first be divided into multiple sub-regions, the exposure state of each sub-region is then judged, and the underexposed region, i.e. the first region, is determined.
When dividing the sub-regions, the original image frame may be divided evenly into multiple grid cells, with a grid shape such as a square or a regular hexagon, and each grid cell is taken as one sub-region.
When dividing the sub-regions, the image contours in the original image frame may also be determined first, and the region within each image contour is taken as one sub-region.
When judging the exposure state, the judgment may be made from the average pixel value within the sub-region, taking the shooting environment into account. For example, for an image shot in daytime, if the average pixel value within a sub-region is above 120 and not above 150, the sub-region is judged to be normally exposed; if the average pixel value is not above 120, the sub-region is judged to be underexposed; and if the average pixel value is above 150, the sub-region is judged to be overexposed.
When judging the exposure state, the judgment may also be made from the luminance distribution of the sub-region. For example, if the peak of the luminance histogram of a sub-region is concentrated on the right side, the sub-region is judged to be overexposed; if the peak is concentrated on the left side, the sub-region is judged to be underexposed; and if the luminance histogram is distributed evenly, the sub-region is judged to be normally exposed.
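As a minimal sketch of the mean-value rule above (assuming a grayscale sub-region stored as a NumPy array; the thresholds 120 and 150 are merely the daytime example values given above, not universal constants):

    import numpy as np

    def exposure_state(region, under_thr=120, over_thr=150):
        # region: one sub-region of the original image frame (H x W gray array)
        mean = float(np.mean(region))
        if mean <= under_thr:
            return "underexposed"   # belongs to the first region
        if mean > over_thr:
            return "overexposed"    # not underexposed: part of the second region
        return "normal"             # not underexposed: part of the second region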
Step S506: perform enhancement processing on the original image frame to obtain a target image frame, wherein the enhancement processing is used to adjust the exposure states of different regions of the image to a normal exposure state, and both the first region and the second region of the target image frame are normally exposed.
In this embodiment, enhancement processing is performed on the original image frame; this enhancement processing can adjust the exposure states of different regions of the image to a normal exposure state. The first region of the target image frame corresponds to the first region of the original image frame, and the second region of the target image frame corresponds to the second region of the original image frame.
In one embodiment, step S506 further includes: obtaining an auxiliary image frame by performing contrast enhancement processing on the original image frame, wherein the first region of the auxiliary image frame is normally exposed and the second region of the auxiliary image frame is overexposed; and performing image fusion on the original image frame and the auxiliary image frame to obtain the target image frame.
Contrast enhancement is a processing method commonly used in the related art; while it adjusts regions that were originally underexposed to a normal exposure state, it often also turns regions that were originally normally exposed into an overexposed state.
In this embodiment, the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, wherein the contrast enhancement processing may use methods such as histogram equalization and gray-level transformation.
In this embodiment, the target image frame is obtained by performing image fusion on the original image frame and the auxiliary image frame, wherein image fusion refers to the process of combining the information of two images to obtain a target image.
In one embodiment, the image fusion process includes: acquiring illumination weight information of the original image frame, wherein the illumination weight information is used to indicate the illumination weight corresponding to each pixel of the original image frame, the illumination weight being related to the luminance of the corresponding pixel; and performing image fusion on the original image frame and the auxiliary image frame according to the illumination weight information to obtain the target image frame.
The above illumination weight information can characterize the light-dark intensity of the original image frame; the illumination weight may be positively or negatively correlated with the luminance of the corresponding pixel. In one implementation, the illumination weight information may be characterized by a weight image whose size is the same as that of the original image. Taking a positive correlation between the illumination weight and the pixel luminance as an example, the illumination weights in the weight image corresponding to the darker region of the original image frame (the first region) are smaller, while those corresponding to the brighter region of the original image frame (the second region) are larger.
In one implementation, both the original image frame and the weight image may be expressed as image matrices (two-dimensional matrices), wherein the rows of the image matrix correspond to the image height (in pixels) and the columns correspond to the image width (in pixels). The elements of the image matrix of the original image frame correspond to the pixel values of the original image frame, and the elements of the image matrix of the weight image (the illumination weights) are related to the luminance of the pixels of the original image frame. The image matrix of the original image frame and that of the weight image have the same numbers of rows and columns, and two elements at the same position in the two matrices correspond to the same pixel of the original image.
When the original image frame and the auxiliary image frame are fused, different fusion weights may be assigned to the original image frame and the auxiliary image frame based on the illumination weight information, so that for the weakly illuminated region (the first region) the fusion weight of the original image frame is small and that of the auxiliary image frame is large, while for the strongly illuminated region (the second region) the fusion weight of the original image frame is large and that of the auxiliary image frame is small. This ensures that the luminance of the normally exposed or overexposed parts of the original image remains unchanged while the darker parts are enhanced; in other words, the darker region (the first region) of the original image is enhanced, the brighter region (the second region) is not overexposed after enhancement, and the image colors are not distorted. This effectively remedies the problem in the related art that the darker regions of an image are insufficiently enhanced while the brighter regions are easily over-enhanced, causing color distortion.
In one example, the auxiliary image frame and the original image frame may be fused pixel by pixel. For example, for a pixel A1 of the original image frame (whose pixel value is denoted a1) and a pixel A2 of the auxiliary image frame (whose pixel value is denoted a2), suppose the illumination weight of pixel A1 is p and pixels A1 and A2 are fused into a pixel A3; then the pixel value of A3 is a3 = a1*p + a2*(1-p).
In this embodiment, the illumination weight is related to the luminance of the corresponding pixel; for example, a brighter pixel has a larger illumination weight. In one example, the illumination weight of the pixels in the first region of the original image frame is 0 and that of the pixels in the second region is 1. In another example, the illumination weight of the pixels in the first region of the original image frame is 0.2 and that of the pixels in the second region is 0.7.
It is easy to see that with the illumination weights of this embodiment, the first region of the original image frame contributes little to the target image frame during fusion, while the second region of the original image frame contributes much. Therefore, the target image frame raises the luminance of the first region of the original image frame while effectively preserving the luminance information of the second region, preventing overexposure.
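A vectorized form of the per-pixel rule a3 = a1*p + a2*(1-p) might look as follows, a sketch assuming uint8 frames and a per-pixel weight map in [0, 1]:

    import numpy as np

    def fuse_frames(original, auxiliary, weights):
        # weights: illumination weight p of every pixel, same height/width as the frames
        p = weights.astype(np.float32)
        if original.ndim == 3:
            p = p[..., None]          # broadcast the weight over the color channels
        fused = original.astype(np.float32) * p + auxiliary.astype(np.float32) * (1.0 - p)
        return np.clip(fused, 0, 255).astype(np.uint8)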
In step S508, a second video is generated according to the target image frames.
In this embodiment, the target image frames are stitched into a video in order, thereby obtaining the processed video, i.e. the second video.
In the video processing method of this embodiment, by performing enhancement processing on the original image frames, the underexposed regions of the original image frames are adjusted to a normal exposure state while the normally exposed regions of the original image frames are prevented from becoming overexposed, which helps improve the image quality of the video.
In one embodiment, before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, the video processing method further includes: performing denoising processing on the original image frame to obtain an updated original image frame, wherein the denoising processing is used to reduce image noise; and performing, based on the updated original image frame, the step of obtaining the auxiliary image frame by performing contrast enhancement processing on the original image frame.
Image noise refers to unnecessary or redundant interference information in image data. Image noise usually has multiple causes. For example, in the image acquisition stage, internal causes such as the characteristics of the photoelectric components, mechanical motion of the equipment, materials and circuitry, as well as external causes such as electromagnetic interference, all introduce image noise; after acquisition, the transmission, decompression and other stages of the image data introduce further noise. Image noise degrades the picture quality of old films.
In this embodiment, denoising processing is performed on the noise in the original image frame. Means such as mean filtering, adaptive Wiener filtering, median filtering, morphological noise filtering and wavelet denoising may be used to remove the noise from the image.
In one embodiment, performing denoising processing that reduces image noise on the original image frame includes: performing the denoising processing that reduces image noise on the original image frame by means of a pre-trained denoising model.
In this embodiment, a neural network model is used to denoise the mixed noise formed in the image frames by acquisition noise, compression noise and the like.
In one embodiment, the denoising model is generated before the denoising processing is performed. The process of generating the denoising model includes: adding noise of random strength to an original image to obtain a noise image; and training a convolutional neural network model according to the original image and the noise image to obtain the denoising model.
In this embodiment, the original images are pictures of high picture quality. The random noise added to the original images is, for example, Gaussian noise or salt-and-pepper noise.
In this embodiment, the denoising model is generated in the form of a convolutional neural network. Convolutional neural networks (CNNs) are a class of feed-forward neural networks that involve convolution computation and have a deep structure, and are one of the representative algorithms of deep learning.
In this embodiment, a training data set is formed from the original images and the noise images to train the convolutional neural network, wherein the noise image serves as the input of the convolutional neural network and the original image as the output of the network. The trained convolutional neural network model is the denoising model.
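The application does not fix a network architecture, so the following PyTorch sketch uses an arbitrary small DnCNN-style network purely to illustrate the training setup (noisy input, clean target, random noise strength per step); the depth, width and noise interval are assumptions for illustration only:

    import torch
    import torch.nn as nn

    class DenoiseCNN(nn.Module):
        def __init__(self, channels=3, features=32, depth=5):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(features, channels, 3, padding=1))
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return self.net(x)

    def train_step(model, optimizer, clean_batch):
        # The noise image is the network input; the original (clean) image is the target.
        sigma = torch.empty(1).uniform_(5 / 255, 25 / 255)   # random noise strength
        noisy = (clean_batch + sigma * torch.randn_like(clean_batch)).clamp(0, 1)
        loss = nn.functional.mse_loss(model(noisy), clean_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()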
In one embodiment, before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, the video processing method further includes: performing edge enhancement processing on the original image frame to obtain an updated original image frame, wherein the edge enhancement processing is used to improve the sharpness of contour edges in the image; and performing, based on the updated original image frame, the step of obtaining the auxiliary image frame by performing contrast enhancement processing on the original image frame.
In early videos, the edge boundaries between different regions are rather blurred. Performing edge enhancement processing on the image frames in this embodiment helps improve the sharpness of the video.
In this embodiment, edge enhancement processing may be performed by methods such as high-pass filtering and spatial differentiation. In one example, the spatial differentiation method is used for edge enhancement: gradient values are computed with a gradient-magnitude operator, and since the gray level changes greatly at edges, the corresponding gradient values are large; strengthening the gray values of pixels with large gradients therefore highlights edge details, achieving edge enhancement.
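A gradient-based sharpening step of this kind could be sketched as follows, using Sobel operators as the gradient-magnitude operator; the strength factor is an example value:

    import cv2
    import numpy as np

    def edge_enhance(gray, strength=0.5):
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        grad = cv2.magnitude(gx, gy)          # large at contour edges
        out = gray.astype(np.float32) + strength * grad
        return np.clip(out, 0, 255).astype(np.uint8)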
In one embodiment, after the original image frame and the auxiliary image frame are fused to obtain the target image frame, the video processing method further includes: performing super-resolution processing on the target image frame, wherein the super-resolution processing is used to increase the resolution of the image.
Limited by the hardware conditions of their time, early videos have limited resolution. Performing super-resolution processing on the image frames in this embodiment yields images of higher resolution, which suits present-day display devices.
In this embodiment, super-resolution processing may be performed on the image frames based on sparse-coding methods, self-exemplar methods, Bayesian methods, pyramid algorithms, deep-learning methods and the like.
In one embodiment of the present application, the super-resolution processing includes: performing, by means of a pre-trained super-resolution model, super-resolution processing that increases the image resolution on the target image frame after dark-field enhancement processing.
In this embodiment, the super-resolution processing of the target image frame is based on a neural network model.
In one embodiment, the super-resolution model is generated before the super-resolution processing is performed. The process of generating the super-resolution model includes: compressing a sample image to obtain a low-resolution image; and training a convolutional neural network model according to the sample image and the low-resolution image to obtain the super-resolution model.
In this embodiment, high-resolution pictures are selected as the sample images.
In this embodiment, the low-resolution images are obtained by quality compression of the sample images.
In this embodiment, a training data set is formed from the sample images and the low-resolution images to train the convolutional neural network, wherein the low-resolution image serves as the input of the convolutional neural network and the sample image as the output of the network. The trained convolutional neural network model is the super-resolution model.
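Training-pair construction for the super-resolution model might be sketched as follows; realizing the "quality compression" by down-scaling is an assumption made here for illustration (JPEG re-encoding would be another option):

    import cv2

    def make_sr_pair(sample, scale=2):
        h, w = sample.shape[:2]
        low = cv2.resize(sample, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
        low_up = cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)
        return low_up, sample   # (network input, training target)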
In one embodiment, after the second video is generated according to the target image frames, the method further includes: performing frame-rate up-conversion processing on the second video to obtain a third video for playback, wherein the frame-rate up-conversion processing is used to increase the frame rate of the video.
The frame rate of early films is usually low, which affects the smoothness of video playback. Performing frame-rate up-conversion on the video in this embodiment makes playback smoother.
In this embodiment, frame-rate up-conversion may be performed using methods such as simple frame-rate up-conversion algorithms, frame-rate up-conversion algorithms with motion compensation, and frame-rate up-conversion algorithms based on autoregressive models.
In one example, the frame-averaging method among the simple frame-rate up-conversion algorithms is used: the weighted average of two adjacent frames of the second video is taken as an interpolated frame and inserted between the two frames, and the original frames of the second video and the interpolated frames obtained by interpolation are stitched together in order to obtain the third video.
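A sketch of this frame-averaging interpolation, assuming equal weights of 0.5 for the two adjacent frames (any other weighting would work the same way):

    import cv2

    def insert_average_frames(frames):
        # frames: list of equally sized uint8 frames of the second video
        out = []
        for a, b in zip(frames[:-1], frames[1:]):
            out.append(a)
            out.append(cv2.addWeighted(a, 0.5, b, 0.5, 0))  # interpolated frame
        out.append(frames[-1])
        return out   # roughly doubles the frame rate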
Fig. 6 shows an example implementation of the video processing method of this embodiment. Referring to Fig. 6, the electronic device first cuts the video into multiple image frames with the FFmpeg video processing tool, i.e. performs step S601. For every image frame, the electronic device inputs the image frame into a pre-trained convolutional neural network model to obtain a denoised image frame, i.e. performs step S602. For every denoised image frame, the electronic device performs edge sharpening on the image frame using a filtering-based or matrix-based edge-sharpening method, i.e. performs step S603. For every edge-enhanced image frame, the electronic device constructs an auxiliary image by an existing image enhancement method and fuses the image frame and the auxiliary image with specific weights, improving the image quality of the weakly lit regions of the image frame while keeping the luminance of the strongly lit regions unchanged, i.e. performs step S605. For every dark-field-enhanced image frame, the electronic device inputs the image frame into a pre-trained convolutional neural network to obtain an image frame of higher resolution, i.e. performs step S606. For the multiple super-resolved image frames, the electronic device inserts new image frames among them with the autoregressive-model-based frame-rate up-conversion algorithm, i.e. performs step S607. For the multiple frame-interpolated image frames, the electronic device stitches these image frames into a video in order, thereby obtaining the processed video, i.e. performs step S608.
This embodiment provides a video processing apparatus, including an image extraction module, an image analysis module, an enhancement processing module and a video generation module.
The image extraction module is configured to acquire an original image frame of a first video.
The image analysis module is configured to determine a first region in the original image frame, wherein the original image frame includes a first region and a second region, the first region of the original image frame is underexposed, and the second region of the original image frame is not underexposed.
The enhancement processing module is configured to perform enhancement processing on the original image frame to obtain a target image frame, wherein the enhancement processing is used to adjust the exposure states of different regions of the image to a normal exposure state, and both the first region and the second region of the target image frame are normally exposed.
The video generation module is configured to generate a second video according to the target image frame.
In one embodiment, when performing enhancement processing on the original image frame to obtain the target image frame, the enhancement processing module is configured to: obtain an auxiliary image frame by performing contrast enhancement processing on the original image frame, wherein the first region of the auxiliary image frame is normally exposed and the second region of the auxiliary image frame is overexposed; and perform image fusion on the original image frame and the auxiliary image frame to obtain the target image frame.
In one embodiment, when fusing the original image frame and the auxiliary image frame to obtain the target image frame, the enhancement processing module is configured to: acquire illumination weight information of the original image, wherein the illumination weight information is used to indicate the illumination weight corresponding to each pixel of the original image, the illumination weight being related to the luminance of the corresponding pixel; and perform image fusion on the original image and the auxiliary image frame according to the illumination weight information to obtain the target image frame.
In one embodiment, the video processing apparatus further includes a denoising processing module configured to: before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, perform denoising processing on the original image frame to obtain an updated original image frame, wherein the denoising processing is used to reduce image noise. The enhancement processing module performs, based on the updated original image frame, the step of obtaining the auxiliary image frame by performing contrast enhancement processing on the original image frame.
In one embodiment, the video processing apparatus further includes an edge enhancement processing module configured to: before the auxiliary image frame is obtained by performing contrast enhancement processing on the original image frame, perform edge enhancement processing on the original image frame to obtain an updated original image frame, wherein the edge enhancement processing is used to improve the sharpness of contour edges in the image. The enhancement processing module performs, based on the updated original image frame, the step of obtaining the auxiliary image frame by performing contrast enhancement processing on the original image frame.
In one embodiment, the video processing apparatus further includes a super-resolution module configured to: after the original image frame and the auxiliary image frame are fused to obtain the target image frame, perform super-resolution processing on the target image frame, wherein the super-resolution processing is used to increase the resolution of the image.
In one embodiment, the video processing apparatus further includes a frame-rate up-conversion module configured to: after the second video is generated according to the target image frames, perform frame-rate up-conversion processing on the second video to obtain a third video for playback, wherein the frame-rate up-conversion processing is used to increase the frame rate of the video.
Owing to the limitations of the capture equipment at the time of shooting, local regions of old films (for example videos shot in the 1980s and 1990s) tend to be too dark, giving users a poor viewing experience; these over-dark regions constitute the dark field of an old film. When the image enhancement methods currently in use are applied to dark-field enhancement of old films, the darker regions of the video images tend to be insufficiently enhanced while the brighter regions are over-enhanced, so that the colors of the enhanced video images are distorted and the final enhancement effect suffers.
On this basis, the embodiments of the present application further provide an image enhancement method, apparatus, electronic device and computer-readable storage medium, which can improve the enhancement effect and alleviate the color distortion caused by image enhancement.
It should be noted that the above image enhancement processing of old films is merely one exemplary application scenario of the embodiments of the present application, and the scope of protection of the embodiments of the present application is not limited thereto. In other embodiments, the image enhancement method, apparatus, electronic device and computer-readable storage medium can also be applied to other images with many dark regions or with unevenly distributed light and dark regions.
To facilitate the understanding of this embodiment, an image enhancement method disclosed in the embodiments of the present application is first described in detail.
The embodiments of the present application provide an image enhancement method that can be performed by an electronic device with image processing capability; the electronic device may be, but is not limited to, any of the following: a desktop computer, a laptop computer, a tablet computer, a smartphone, and the like.
Referring to the schematic flowchart of an image enhancement method shown in Fig. 7, the method mainly includes the following steps S702 to S706:
Step S702: perform contrast enhancement processing on an original image to be processed to obtain an initially enhanced image, wherein the original image includes a first region and a second region, the first region of the original image is underexposed, the second region of the original image is not underexposed, the first region of the initially enhanced image is not underexposed, and the second region of the initially enhanced image is overexposed.
The above original image to be processed is an image requiring image enhancement processing; it may be a frame-by-frame video picture of an old film, or another image with many dark regions or unevenly distributed light and dark regions. The original image contains a first region that is underexposed (a darker region) and a second region that is not underexposed; the second region is a normally exposed region or an overexposed region. After contrast enhancement processing, the first region, originally underexposed, is no longer underexposed in the initially enhanced image, i.e. is normally exposed or overexposed, while the second region, originally not underexposed, becomes overexposed in the initially enhanced image.
In one implementation, a histogram equalization method, a Retinex algorithm or a dark-channel dehazing algorithm may be used to perform contrast enhancement processing on the original image to be processed to obtain the initially enhanced image. The histogram equalization method, the Retinex algorithm and the dark-channel dehazing algorithm are introduced below.
The above histogram equalization method is computationally simple and easy to implement. Its principle is as follows: histogram equalization is essentially a non-linear stretching of the original image that redistributes the image pixel values so that the numbers of pixels within given gray ranges become roughly equal. In this way, the contrast of the peak part in the middle of the original histogram is enhanced while the contrast of the valley parts on both sides is reduced, and the image is enhanced as a whole.
In one implementation, the histogram equalization method includes: DHE (dynamic histogram equalization) or CLAHE (contrast-limited adaptive histogram equalization). DHE divides the histogram corresponding to the original image into different parts and performs the histogram equalization operation within each histogram subset; CLAHE adaptively limits the degree of contrast enhancement of histogram equalization.
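For reference, CLAHE is available directly in OpenCV; a minimal sketch on a grayscale image follows (the clip limit and tile size are example parameters, not values prescribed by this application):

    import cv2

    def clahe_enhance(gray, clip_limit=2.0, tile=(8, 8)):
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
        return clahe.apply(gray)   # contrast-limited adaptive equalization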
Retinex algorithm (Retinex model): with the reflectance image unknown, the Retinex algorithm estimates the illumination image from the original image. The Retinex algorithm can enhance the contrast of dark regions well and reveal more details.
Dark-channel dehazing algorithm (dark-channel dehazing model): the original image is inverted, the inverted image is processed with the dark-channel dehazing algorithm, and the result is inverted again to obtain the initially enhanced image.
Step S704: acquire illumination weight information of the original image, wherein the illumination weight information is configured to indicate the illumination weight corresponding to each pixel of the original image, the illumination weight being related to the luminance of the corresponding pixel.
The above illumination weight information can characterize the light-dark intensity of the original image; the illumination weight may be positively or negatively correlated with the luminance of the corresponding pixel. In one implementation, the illumination weight information may be characterized by a weight image whose size is the same as that of the original image. Taking a positive correlation between the illumination weight and the pixel luminance as an example, the illumination weights in the weight image corresponding to the darker region of the original image (the first region) are smaller, while those corresponding to the brighter region of the original image (the second region) are larger.
In one implementation, both the original image and the weight image may be expressed as image matrices (two-dimensional matrices), wherein the rows of the image matrix correspond to the image height (in pixels) and the columns correspond to the image width (in pixels). The elements of the image matrix of the original image correspond to the pixel values of the original image, and the elements of the image matrix of the weight image (the illumination weights) are related to the luminance of the pixels of the original image. The image matrix of the original image and that of the weight image have the same numbers of rows and columns, and two elements at the same position in the two matrices correspond to the same pixel of the original image.
Step S706: perform image fusion processing on the original image and the initially enhanced image according to the illumination weight information to obtain a target enhanced image, wherein both the first region and the second region of the target enhanced image are normally exposed.
When the original image and the initially enhanced image are fused, considering that the first region of the original image is underexposed, the second region of the original image is not underexposed, the first region of the initially enhanced image is not underexposed and the second region of the initially enhanced image is overexposed, different fusion weights may be assigned to the original image and the initially enhanced image based on the illumination weight information, so that for the weakly illuminated region (the first region) the fusion weight of the original image is small and that of the initially enhanced image is large, while for the strongly illuminated region (the second region) the fusion weight of the original image is large and that of the initially enhanced image is small. This ensures that the luminance of the normally exposed or overexposed parts of the original image remains unchanged while the darker parts are enhanced; in other words, the darker region (the first region) of the original image is enhanced, the brighter region (the second region) is not overexposed after enhancement, and the image colors are not distorted. This effectively remedies the problem in the related art that the darker regions of an image are insufficiently enhanced while the brighter regions are easily over-enhanced, causing color distortion.
In one possible implementation, the illumination weight is positively correlated with the luminance of the corresponding pixel, and the target enhanced image is obtained by fusing the original image and the initially enhanced image with the following formula:

D = I*W1 + E*(P - W1),

where D denotes the image matrix of the target enhanced image, I denotes the image matrix of the original image, W1 denotes the weight matrix when the illumination weight is positively correlated with the luminance of the corresponding pixel, the weight matrix being determined from the illumination weights of the pixels of the original image indicated by the illumination weight information, E denotes the image matrix of the initially enhanced image, and P denotes the unit matrix corresponding to the original image, i.e. a matrix with the same numbers of rows and columns as the image matrix of the original image whose elements are all 1 (so that, element-wise, the two fusion weights at each pixel sum to 1). It should be noted that the weight matrix may be composed of the illumination weights arranged by the positions of the corresponding pixels; for example, the weight matrix may be the image matrix of the above weight image.
In another possible implementation, the illumination weight is negatively correlated with the luminance of the corresponding pixel, and the target enhanced image is obtained by fusing the original image and the initially enhanced image with the following formula:

D = I*(P - W2) + E*W2,

where D denotes the image matrix of the target enhanced image, I denotes the image matrix of the original image, W2 denotes the weight matrix when the illumination weight is negatively correlated with the luminance of the corresponding pixel, the weight matrix being determined from the illumination weights of the pixels of the original image indicated by the illumination weight information, E denotes the image matrix of the initially enhanced image, and P denotes the unit matrix corresponding to the original image as defined above.
It should be noted that there is no fixed order of execution between step S702 and step S704: in the embodiment shown in Fig. 7, step S702 is performed first and then step S704, but in other embodiments step S704 may be performed first and then step S702.
In the embodiments of the present application, contrast enhancement processing is performed on the original image to be processed to obtain an initially enhanced image; a weight image corresponding to the original image is acquired, the weight image including the weight corresponding to each pixel of the original image, the weight being related to the luminance of the pixel; and image fusion processing is performed on the original image and the initially enhanced image according to the weight image to obtain the target enhanced image. When enhancing the image, this approach fully takes into account that pixels in different regions of the image differ in luminance, so that when the original image and the initially enhanced image obtained by contrast enhancement are fused, pixels of different luminance receive different enhancement weights. This effectively remedies the problem in the related art that the darker regions of an image are insufficiently enhanced while the brighter regions (normally exposed or overexposed regions) are easily over-enhanced, causing color distortion.
For ease of understanding, the above image enhancement method is described below by way of example with reference to Fig. 8, taking as an example the case where the original image is an image frame of a video to be processed, the color mode of the original image is the RGB mode, and the illumination weight information is characterized by a weight image.
Referring to the schematic flowchart of another image enhancement method shown in Fig. 8, the method includes the following steps:
Step S802: acquire a video to be processed.
The video to be processed here may be, but is not limited to, the above-mentioned old film.
Step S804: perform frame cutting on the video to be processed to obtain original images.
By performing frame cutting on the above video to be processed, the video is split into frame-by-frame video pictures, and these frame-by-frame video pictures are the original images.
Step S806: convert the color mode of the original image to the HSV mode to obtain an HSV-channel image.
The color mode of the above original image is the RGB mode, so the color mode of the original image first needs to be converted to the HSV mode to obtain an HSV-channel image. Every color in the HSV-channel image is expressed by hue (H), saturation (S) and value (V); the HSV-channel image comprises three channels, the H channel, the S channel and the V channel, and the pixel value of every pixel of the HSV-channel image is expressed by the color values of the three channels, wherein the color value of the V channel is also called the gray value.
Step S808: perform enhancement processing on the V channel of the HSV-channel image to obtain an enhanced HSV-channel image, wherein the enhancement processing is used to equalize the gray values of the pixels of the HSV-channel image according to the gray-level distribution of the pixels of the HSV-channel image on the V channel, so that the gray value of the pixel with the largest gray value in the HSV-channel image is adjusted to the upper gray limit of a preset gray interval, and the gray values of the pixels of the enhanced HSV-channel image are distributed evenly within the preset gray interval.
The above preset gray interval can be set according to actual needs; it may be the same as the gray interval corresponding to the original image or different from it. For example, if the preset gray interval is set to 0-255, the gray value of the pixel with the largest gray value in the HSV-channel image is adjusted to 255.
In one implementation, a histogram equalization method may be used to perform the enhancement processing on the V channel of the HSV-channel image to obtain the enhanced HSV-channel image.
Step S810: convert the color mode of the enhanced HSV-channel image to the RGB mode to obtain the initially enhanced image.
In this way, for an original image with many darker regions (first regions), the vast majority of the gray levels of the pixels of its HSV-channel image are low (the gray values of most pixels are small). The equalization of the gray values ensures, as far as possible, that the gray values of the pixels of the enhanced HSV-channel image occur with equal frequency at every gray level, so that the gray values of the pixels of the original image are raised; as a result, the first region of the original image, which was underexposed, is no longer underexposed in the initially enhanced image, and the second region of the original image, which was not underexposed, becomes overexposed in the initially enhanced image.
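Steps S806 to S810 could be sketched as follows, using plain histogram equalization on the V channel, which maps the largest gray value to the upper limit 255 of the 0-255 interval (OpenCV's BGR channel order is assumed here):

    import cv2

    def enhance_v_channel(bgr):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        v_eq = cv2.equalizeHist(v)                 # equalize the V channel only
        hsv_eq = cv2.merge([h, s, v_eq])
        return cv2.cvtColor(hsv_eq, cv2.COLOR_HSV2BGR)   # initially enhanced image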
Step S812: acquire the illumination image of the above original image.
In some possible embodiments, the Retinex algorithm may be used to obtain the reflectance image of the original image, and the illumination image of the original image is then determined from the reflectance image and the original image, wherein the Retinex algorithm may use a single-scale Retinex model, a multi-scale Retinex model or an improved Retinex model.
The principle of acquiring the illumination image of the original image based on the Retinex algorithm is as follows: the original image I can be regarded as composed of an illumination image L and a reflectance image R; incident light falls on a reflecting object, and the light reflected by the object enters the human eye, forming the original image I seen by the human eye. The illumination image L directly determines the luminance values of the pixels of the original image, and the reflectance image R expresses the intrinsic properties of the original image. The relationship among I, R and L can be expressed by the following formula:

I(x,y) = R(x,y)*L(x,y),

where I(x,y) denotes the pixel value of the original image I at pixel (x,y), R(x,y) denotes the pixel value of the reflectance image R at pixel (x,y), and L(x,y) denotes the pixel value of the illumination image L at pixel (x,y). The reflectance image R is estimated with the Retinex algorithm, from which the illumination image L is obtained.
Taking the single-scale Retinex model as an example, the illumination image L is solved with the following formulas:

r(x,y) = lg R(x,y),    (1)

r(x,y) = lg I(x,y) - lg[F(x,y)*I(x,y)],    (2)

F(x,y) = β*exp(-(x² + y²)/c²),    (3)

∫∫F(x,y)dxdy = 1,    (4)

where r(x,y) denotes the base-10 logarithm of R(x,y), x denotes the row of a pixel, y denotes the column of a pixel, F denotes the Gaussian surround function, β is a coefficient, and c denotes the Gaussian surround scale. c is a preset parameter and can be set according to actual needs, for example to 3, 5 or 7.
In one implementation, the value of β may first be determined from formulas (3) and (4), R(x,y) is then solved from formulas (1) and (2), and L(x,y) is obtained in turn.
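Since the normalized Gaussian surround of formulas (3) and (4) can be realized with a standard Gaussian blur, a sketch of this single-scale route might look as follows (the surround scale c = 5 is one of the example values above; the small offsets guard against log(0) and division by zero):

    import cv2
    import numpy as np

    def illumination_map(gray, c=5.0):
        img = gray.astype(np.float32) + 1.0                # avoid log(0)
        blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=c)  # F(x,y)*I(x,y)
        r = np.log10(img) - np.log10(blurred)              # formula (2)
        reflectance = np.power(10.0, r)                    # formula (1): R = 10^r
        return img / np.maximum(reflectance, 1e-6)         # L = I / R (= blurred image)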
Step S814: normalize the above illumination image to obtain the weight image corresponding to the original image.
The above weight image is used to characterize the illumination weight information of the original image. The pixel values of the illumination image may be normalized with the max-min normalization method or the standard normalization method to obtain the weight image corresponding to the original image. The pixel value obtained by normalizing the pixel value of each pixel of the illumination image is the weight value in the weight image (the magnitude of the illumination weight). In this weight image, the illumination weight is positively correlated with the luminance of the corresponding pixel.
Taking the max-min normalization method as an example, the normalized pixel value can be computed with the following formula:

q = (p - min(L)) / (max(L) - min(L)),

where p denotes a pixel value of the illumination image L (i.e. the pixel value before normalization), q denotes the pixel value after normalization (i.e. the weight value in the weight image), and min(L) and max(L) denote the minimum and maximum pixel values of the illumination image L.
Step S816: perform image fusion processing on the original image and the initially enhanced image according to the above weight image to obtain the target enhanced image.
The target enhanced image can be obtained by fusing the original image and the initially enhanced image with the following formula:

D = I*W + E*(P - W),

where D denotes the image matrix of the target enhanced image, I denotes the image matrix of the original image, W denotes the image matrix of the weight image, E denotes the image matrix of the initially enhanced image, and P denotes the unit matrix corresponding to the original image (all elements equal to 1).
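Steps S814 and S816 combined, as a sketch: max-min normalization of the illumination map into the weight image W, followed by the element-wise fusion D = I*W + E*(P-W):

    import numpy as np

    def fuse_with_illumination(original, enhanced, illumination):
        span = illumination.max() - illumination.min() + 1e-6
        w = (illumination - illumination.min()) / span     # weight image W in [0, 1]
        if original.ndim == 3:
            w = w[..., None]                               # broadcast over color channels
        d = original.astype(np.float32) * w + enhanced.astype(np.float32) * (1.0 - w)
        return np.clip(d, 0, 255).astype(np.uint8)         # target enhanced image D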
Step S818: generate a target video for playback according to the target enhanced images corresponding to the image frames of the video to be processed.
By performing the above image enhancement processing on every image frame of the video to be processed, a target enhanced image corresponding to every image frame can be obtained; merging the target enhanced images corresponding to the image frames in the original temporal order yields the target video for playback.
In the embodiments of the present application, frame cutting is performed on the video to be processed to obtain the original images; the initially enhanced image is acquired with a histogram equalization method, the weight image is acquired from the illumination image obtained based on the Retinex algorithm, and the original image and the initially enhanced image are then fused according to the weight image to obtain the target enhanced image. This approach achieves adaptive dark-field enhancement of video images: the dark regions of the original image are enhanced, the brighter regions of the original image are prevented from being overexposed after enhancement, the image colors are kept undistorted, and the quality of the video images is improved. In addition, this approach is computationally simple and easy to implement.
Corresponding to the above image enhancement method, the embodiments of the present application further provide an image enhancement apparatus. Referring to the schematic structural diagram of an image enhancement apparatus shown in Fig. 9, the apparatus includes:
An enhancement module 92, configured to perform contrast enhancement processing on an original image to be processed to obtain an initially enhanced image, wherein the original image includes a first region and a second region, the first region of the original image is underexposed, the second region of the original image is not underexposed, the first region of the initially enhanced image is not underexposed, and the second region of the initially enhanced image is overexposed;
A first acquisition module 94, configured to acquire illumination weight information of the original image, wherein the illumination weight information is used to indicate the illumination weight corresponding to each pixel of the original image, the illumination weight being related to the luminance of the corresponding pixel;
A fusion module 96, configured to perform image fusion processing on the original image and the initially enhanced image according to the illumination weight information to obtain a target enhanced image, wherein both the first region and the second region of the target enhanced image are normally exposed.
In the embodiments of the present application, the enhancement module 92 performs contrast enhancement processing on the original image to be processed to obtain the initially enhanced image; the first acquisition module 94 acquires the illumination weight information of the original image; and the fusion module 96 performs image fusion processing on the original image and the initially enhanced image according to the illumination weight information to obtain the target enhanced image, wherein both the first region and the second region of the target enhanced image are normally exposed. When fusing the original image and the initially enhanced image obtained by contrast enhancement to obtain the target enhanced image, this approach fully takes into account that pixels in different regions of the image differ in luminance (the illumination weight information), so that both the first region and the second region of the resulting target enhanced image are normally exposed. This effectively remedies the problem in the related art that the darker regions of an image are insufficiently enhanced while the brighter regions are easily over-enhanced, causing color distortion.
In one implementation, when the color mode of the original image is the RGB mode, the enhancement module 92 is configured to: convert the color mode of the original image to the HSV mode to obtain an HSV-channel image; perform enhancement processing on the V channel of the HSV-channel image to obtain an enhanced HSV-channel image, wherein the enhancement processing is used to equalize the gray values of the pixels of the HSV-channel image according to the gray-level distribution of the pixels of the HSV-channel image on the V channel, so that the gray value of the pixel with the largest gray value in the HSV-channel image is adjusted to the upper gray limit of a preset gray interval and the gray values of the pixels of the enhanced HSV-channel image are distributed evenly within the preset gray interval; and convert the color mode of the enhanced HSV-channel image to the RGB mode to obtain the initially enhanced image.
Referring to the schematic structural diagram of another image enhancement apparatus shown in Fig. 10, on the basis of Fig. 9, the above first acquisition module 94 includes:
An acquisition unit 941, configured to acquire the illumination image of the original image;
A processing unit 942, configured to normalize the illumination image to obtain the weight image corresponding to the original image, wherein the weight image is used to characterize the illumination weight information of the original image.
In one implementation, the acquisition unit 941 is configured to: acquire the reflectance image of the original image; and determine the illumination image of the original image from the reflectance image and the original image.
In one implementation, the processing unit 942 is configured to: normalize the illumination image with the max-min normalization method or the standard normalization method to obtain the weight image corresponding to the original image.
In one implementation, the above illumination weight is positively correlated with the luminance of the corresponding pixel, and the fusion module 96 is configured to fuse the original image and the initially enhanced image with the following formula to obtain the target enhanced image:

D = I*W + E*(P - W),

where D denotes the image matrix of the target enhanced image, I denotes the image matrix of the original image, W denotes the weight matrix, the weight matrix being determined from the illumination weights of the pixels of the original image indicated by the illumination weight information, E denotes the image matrix of the initially enhanced image, and P denotes the unit matrix corresponding to the original image.
In one implementation, when the original image is an image frame of a video to be processed, as shown in Fig. 10, the above apparatus further includes:
A second acquisition module 1002, configured to acquire the video to be processed, and to perform frame cutting on the video to be processed to obtain the original images;
A generation module 1004, configured to generate a target video for playback according to the target enhanced images corresponding to the image frames of the video to be processed.
The implementation principle and technical effects of the apparatus provided by this embodiment are the same as those of the foregoing method embodiments. For brevity, for anything not mentioned in the apparatus embodiments, reference may be made to the corresponding content of the foregoing method embodiments.
The embodiments of the present application further provide a device, which is an electronic device including a processor and a storage apparatus; the storage apparatus stores a computer program, and the computer program, when run by the processor, performs the method of any of the implementations described above.
Fig. 11 is a schematic structural diagram of an electronic device provided by the embodiments of the present application. The electronic device 100 includes: a processor 50, a memory 51, a bus 52 and a communication interface 53; the processor 50, the communication interface 53 and the memory 51 are connected by the bus 52, and the processor 50 is configured to execute executable modules, for example computer programs, stored in the memory 51.
The memory 51 may include high-speed random access memory (RAM) and may also include non-volatile memory, for example at least one disk memory. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 53 (which may be wired or wireless), and the Internet, a wide area network, a local network, a metropolitan area network, etc. may be used.
The bus 52 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one double-headed arrow is used in Fig. 11, but this does not mean that there is only one bus or one type of bus.
The memory 51 is configured to store a program, and the processor 50 executes the program after receiving an execution instruction. The method performed by the apparatus defined by the flow disclosed in any of the foregoing embodiments of the present application may be applied to the processor 50 or implemented by the processor 50.
The processor 50 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be carried out by integrated logic circuits of hardware in the processor 50 or by instructions in the form of software. The above processor 50 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps and logic block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present application may be embodied directly as being carried out by a hardware decoding processor, or carried out by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 51, and the processor 50 reads the information in the memory 51 and completes the steps of the above method in combination with its hardware.
The computer program product of the readable storage medium provided by the embodiments of the present application includes a computer-readable storage medium storing program code, and the instructions included in the program code may be configured to perform the method described in the foregoing method embodiments; for the specific implementation, reference may be made to the foregoing method embodiments, which are not repeated here.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the related art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the method described in the embodiments of the present application. The foregoing storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely specific implementations of the present application, used to illustrate rather than limit its technical solutions, and the scope of protection of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person skilled in the art can still, within the technical scope disclosed by the present application, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.
Industrial Applicability
Based on the video processing method, video restoration method, apparatus and electronic device provided by the embodiments of the present application: first, the problem that it is difficult to obtain a training set for a video restoration model capable of restoring old films directly as a whole can be effectively solved; second, restoring the video as a whole with only one neural network effectively alleviates the error-accumulation problem of the restoration results in the related art; third, by performing enhancement processing on the original image frames, the underexposed regions of the original image frames are adjusted to a normal exposure state while the normally exposed regions of the original image frames are prevented from becoming overexposed, which helps improve the image quality of the video; fourth, the problem in the existing related art that the darker region (the first region) of an image is insufficiently enhanced while the brighter region (the second region) is easily over-enhanced, causing color distortion, is effectively remedied.

Claims (15)

  1. A video processing method, comprising:
    acquiring an original video;
    adjusting quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, wherein the quality-affecting parameters comprise at least two of the following: a noise parameter, a luminance parameter and a sharpness parameter; and
    constructing a video training set based on the original video and the target video, wherein the video training set stores the correspondence between the target video and the original video, the video training set is used to train a video restoration model, and the trained video restoration model is used to restore videos.
  2. The method according to claim 1, wherein the step of adjusting quality-affecting parameters of the original video to obtain a target video corresponding to the original video comprises:
    adding random noise to every image frame of the original video when the quality-affecting parameters comprise the noise parameter;
    performing random luminance adjustment on every image frame of the original video when the quality-affecting parameters comprise the luminance parameter;
    when the quality-affecting parameters comprise the sharpness parameter, adjusting the first resolution of every image frame of the original video to a target resolution and then back to the first resolution, wherein the target resolution is smaller than the first resolution; and
    taking the original video with the adjusted quality-affecting parameters as the target video corresponding to the original video.
  3. The method according to claim 2, wherein adding random noise to every image frame of the original video comprises:
    performing the following operations on every image frame of the original video:
    determining a first random noise level within a pre-configured random noise interval; and
    adding compression noise of the first random noise level to the image frame of the original video.
  4. The method according to claim 2, wherein performing random luminance adjustment on every image frame of the original video comprises:
    when the color format of the image frames of the original video is a color format other than the YUV format, converting the color image format of every image frame of the original video to the YUV format; and
    lowering the luminance parameter of every format-converted image frame by a random amount in the Y channel.
  5. The method according to claim 2, wherein adjusting the first resolution of every image frame of the original video to a target resolution and then back to the first resolution comprises:
    performing the following operations on every image frame of the original video:
    randomly selecting a resolution from a resolution set and determining the randomly selected resolution as the target resolution; and
    adjusting the first resolution of every image frame of the original video to the target resolution and then back to the first resolution.
  6. The method according to any one of claims 1 to 5, wherein after the video training set is constructed based on the original video and the target video, the method further comprises:
    training a video restoration model according to the video training set to obtain a trained video restoration model, wherein the trained video restoration model is used to restore every image frame of a video; and
    inputting a video to be restored into the trained video restoration model to obtain a restored video output by the trained video restoration model.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    acquiring an original image frame of a first video;
    determining a first region in the original image frame, wherein the original image frame comprises a first region and a second region, the first region of the original image frame is underexposed, and the second region of the original image frame is not underexposed;
    performing enhancement processing on the original image frame to obtain a target image frame, wherein the enhancement processing is used to adjust the exposure states of different regions of the image to a normal exposure state, and both the first region and the second region of the target image frame are normally exposed; and
    generating a second video according to the target image frame.
  8. The method according to any one of claims 1 to 6, wherein the method further comprises:
    performing contrast enhancement processing on an original image to be processed to obtain an initially enhanced image, wherein the original image comprises a first region and a second region, the first region of the original image is underexposed, the second region of the original image is not underexposed, the first region of the initially enhanced image is not underexposed, and the second region of the initially enhanced image is overexposed;
    acquiring illumination weight information of the original image, wherein the illumination weight information is configured to indicate the illumination weight corresponding to each pixel of the original image, the illumination weight being related to the luminance of the corresponding pixel; and
    performing image fusion processing on the original image and the initially enhanced image according to the illumination weight information to obtain a target enhanced image, wherein both the first region and the second region of the target enhanced image are normally exposed.
  9. A video restoration method, comprising:
    acquiring a video to be restored; and
    inputting the video to be restored into a video restoration model to obtain a restored video output by the video restoration model, wherein the video restoration model is obtained by training an initial video restoration model according to a video training set, the video training set comprises an original video and a target video corresponding to the original video, the target video is obtained by adjusting quality-affecting parameters of the original video, the video quality of the target video is lower than that of the original video, and the quality-affecting parameters comprise at least two of the following: a noise parameter, a luminance parameter and a sharpness parameter.
  10. The method according to claim 9, wherein the method further comprises:
    acquiring an original image frame of a first video;
    determining a first region in the original image frame, wherein the original image frame comprises a first region and a second region, the first region of the original image frame is underexposed, and the second region of the original image frame is not underexposed;
    performing enhancement processing on the original image frame to obtain a target image frame, wherein the enhancement processing is used to adjust the exposure states of different regions of the image to a normal exposure state, and both the first region and the second region of the target image frame are normally exposed; and
    generating a second video according to the target image frame.
  11. The method according to claim 9, wherein the method further comprises:
    performing contrast enhancement processing on an original image to be processed to obtain an initially enhanced image, wherein the original image comprises a first region and a second region, the first region of the original image is underexposed, the second region of the original image is not underexposed, the first region of the initially enhanced image is not underexposed, and the second region of the initially enhanced image is overexposed;
    acquiring illumination weight information of the original image, wherein the illumination weight information is configured to indicate the illumination weight corresponding to each pixel of the original image, the illumination weight being related to the luminance of the corresponding pixel; and
    performing image fusion processing on the original image and the initially enhanced image according to the illumination weight information to obtain a target enhanced image, wherein both the first region and the second region of the target enhanced image are normally exposed.
  12. A video processing apparatus, comprising:
    an original video acquisition module, configured to acquire an original video;
    a parameter adjustment module, configured to adjust quality-affecting parameters of the original video to obtain a target video corresponding to the original video, the video quality of the target video being lower than that of the original video, wherein the quality-affecting parameters comprise at least two of the following: a noise parameter, a luminance parameter and a resolution parameter; and
    a training set construction module, configured to construct a video training set based on the original video and the target video, wherein the video training set stores the correspondence between the target video and the original video, the video training set is used to train a video restoration model, and the trained video restoration model is used to restore videos.
  13. A video restoration apparatus, comprising:
    a to-be-restored video acquisition module, configured to acquire a video to be restored; and
    a restoration module, configured to input the video to be restored into a video restoration model to obtain a restoration result output by the video restoration model, wherein the video restoration model is obtained by training an initial video restoration model according to a video training set, the video training set comprises an original video and a target video corresponding to the original video, the target video is obtained by adjusting quality-affecting parameters of the original video, the video quality of the target video is lower than that of the original video, and the quality-affecting parameters comprise at least two of the following: a noise parameter, a luminance parameter and a sharpness parameter.
  14. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program runnable on the processor, and the processor, when executing the computer program, implements the method according to any one of claims 1-8, or, when executing the computer program, implements the method according to any one of claims 9-11.
  15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when run by a processor, performs the method according to any one of claims 1-8, or performs the method according to any one of claims 9-11.
PCT/CN2020/127717 2019-11-15 2020-11-10 Video processing method, video restoration method, apparatus and device WO2021093718A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201911126554.5 2019-11-15
CN201911126554.5A CN112822474A (zh) 2019-11-15 Video processing method, video restoration method, apparatus and electronic device
CN201911118706.7A CN112819699A (zh) 2019-11-15 Video processing method, apparatus and electronic device
CN201911118706.7 2019-11-15
CN201911126297.5A CN112819702B (zh) 2019-11-15 Image enhancement method, apparatus, electronic device and computer-readable storage medium
CN201911126297.5 2019-11-15

Publications (1)

Publication Number Publication Date
WO2021093718A1 true WO2021093718A1 (zh) 2021-05-20

Family

ID=75911784

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127717 WO2021093718A1 (zh) 2019-11-15 2020-11-10 Video processing method, video restoration method, apparatus and device

Country Status (1)

Country Link
WO (1) WO2021093718A1 (zh)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231792A (zh) 2011-06-29 2011-11-02 南京大学 Electronic image stabilization method based on feature matching
WO2015056828A1 (ko) 2013-10-14 2015-04-23 한국해양과학기술원 Image improvement system for ships using a transparent display
CN206249426U (zh) 2016-09-30 2017-06-13 宁波市东望智能系统工程有限公司 Image restoration system
CN107507134A (zh) 2017-09-21 2017-12-22 大连理工大学 Super-resolution method based on convolutional neural networks
CN108198145A (zh) 2017-12-29 2018-06-22 百度在线网络技术(北京)有限公司 Method and apparatus for point cloud data restoration
CN108364255A (zh) 2018-01-16 2018-08-03 辽宁师范大学 Remote sensing image magnification method based on sparse representation and partial differential models
CN109961397A (zh) 2018-04-12 2019-07-02 华为技术有限公司 Image reconstruction method and device
CN108765338A (zh) 2018-05-28 2018-11-06 西华大学 Space target image restoration method based on convolutional auto-encoder convolutional neural networks
CN109410123A (zh) 2018-10-15 2019-03-01 深圳市能信安科技股份有限公司 Deep-learning-based mosaic removal method, apparatus and electronic device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116193231A (zh) 2022-10-24 2023-05-30 成都与睿创新科技有限公司 Method and system for handling abnormalities in the field of view of minimally invasive surgery
CN116193231B (zh) 2022-10-24 2023-07-18 成都与睿创新科技有限公司 Method and system for handling abnormalities in the field of view of minimally invasive surgery
CN118297820A (zh) 2024-03-27 2024-07-05 北京智象未来科技有限公司 Training method for an image generation model, image generation method, apparatus, device, and storage medium
CN118247181A (zh) 2024-05-28 2024-06-25 杭州海康威视数字技术股份有限公司 Image restoration model training method, electronic device and image restoration method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20886443; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20886443; Country of ref document: EP; Kind code of ref document: A1)