CN113034384A - Video processing method, video processing device, electronic equipment and storage medium


Info

Publication number: CN113034384A (application number CN202110220084.XA)
Authority: CN (China)
Prior art keywords: video, target, scene, processed, algorithm
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李兴龙
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by: Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN202110220084.XA
Related PCT application: PCT/CN2022/072089 (WO2022179335A1)

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration by the use of histogram techniques
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The application discloses a video processing method and device, an electronic device, and a storage medium, and relates to the technical field of electronic devices. The method is applied to an electronic device that includes an image sensor, and comprises: obtaining a to-be-processed video collected through the image sensor; inputting the to-be-processed video into a trained scene detection model and obtaining the target scene type, output by the trained scene detection model, that corresponds to the collection scene of the to-be-processed video; determining the algorithm corresponding to the target scene type from a plurality of preset algorithms as the target algorithm; and performing video enhancement processing on the to-be-processed video based on the target algorithm, where the video enhancement processing improves the video quality of the to-be-processed video by processing images in the to-be-processed video through the target algorithm. By identifying the scene type corresponding to the collection scene of the video to be processed and selecting the corresponding algorithm for video enhancement processing, the video enhancement effect can be improved.

Description

Video processing method, video processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Video enhancement is a technology for effectively improving the image quality and color of a video, and it covers a wide range of algorithms: contrast enhancement algorithms, saturation enhancement algorithms, denoising algorithms, super-resolution reconstruction algorithms, and the like all belong to the category of video enhancement algorithms. However, each currently adopted video enhancement algorithm can only solve a specific problem and applies to a single scene, so the video enhancement effect is poor.
Disclosure of Invention
In view of the above problems, the present application provides a video processing method, a video processing apparatus, an electronic device, and a storage medium that address them.
In a first aspect, an embodiment of the present application provides a video processing method, which is applied to an electronic device, where the electronic device includes an image sensor, and the method includes: acquiring a video to be processed acquired by the image sensor; inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to an acquisition scene of the video to be processed output by the trained scene detection model; determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm, wherein the video enhancement processing improves the video quality of the video to be processed by processing images in the video to be processed through the target algorithm.
In a second aspect, an embodiment of the present application provides a video processing apparatus, which is applied to an electronic device including an image sensor, and the apparatus includes: the to-be-processed video acquisition module is used for acquiring the to-be-processed video acquired by the image sensor; a target scene type obtaining module, configured to input the video to be processed into a trained scene detection model, and obtain a target scene type corresponding to an acquisition scene of the video to be processed output by the trained scene detection model; and the video enhancement processing module is used for determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm and carrying out video enhancement processing on the video to be processed based on the target algorithm, wherein the video enhancement processing improves the video quality of the video to be processed by processing images in the video to be processed through the target algorithm.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, the memory being coupled to the processor, the memory storing instructions, and the processor performing the above method when the instructions are executed by the processor.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
The video processing method, video processing device, electronic device, and storage medium provided by the embodiments of the application acquire a to-be-processed video collected through an image sensor, input the to-be-processed video into a trained scene detection model, and obtain the target scene type, output by the trained scene detection model, corresponding to the acquisition scene of the to-be-processed video. An algorithm corresponding to the target scene type is then determined from a plurality of preset algorithms as the target algorithm, and video enhancement processing is performed on the to-be-processed video based on the target algorithm, where the video enhancement processing improves the video quality of the to-be-processed video by processing its images through the target algorithm. By identifying the scene type corresponding to the acquisition scene of the video to be processed and selecting the algorithm corresponding to that scene for video enhancement processing, the video enhancement effect can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flow chart illustrating a video processing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a video processing method according to another embodiment of the present application;
fig. 3 shows a flow chart of step S240 of the video processing method shown in fig. 2 of the present application;
fig. 4 is a schematic flow chart illustrating a video processing method according to still another embodiment of the present application;
fig. 5 is a schematic flow chart illustrating a video processing method according to another embodiment of the present application;
fig. 6 is a flow chart illustrating a video processing method according to still another embodiment of the present application;
fig. 7 is a schematic flow chart illustrating a video processing method according to yet another embodiment of the present application;
fig. 8 is a flow chart illustrating a video processing method according to yet another embodiment of the present application;
fig. 9 is a schematic flow chart illustrating a video processing method according to yet another embodiment of the present application;
FIG. 10 is a flow chart illustrating step S810 of the video processing method illustrated in FIG. 9 of the present application;
fig. 11 shows a block diagram of a video processing apparatus provided in an embodiment of the present application;
fig. 12 is a block diagram of an electronic device for executing a video processing method according to an embodiment of the present application;
fig. 13 illustrates a storage unit for storing or carrying program codes for implementing a video processing method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Video enhancement processing can be divided into pre-processing and post-processing: pre-processing operates on the video signal before encoding and decoding, in the video generation stage, while post-processing reprocesses the encoded video to improve its quality. Denoising algorithms, super-resolution reconstruction algorithms, restoration algorithms, and the like can all serve as video pre-processing means. Most denoising algorithms first analyze the noise characteristics of each frame in the video signal and then denoise the video with a strength matched to the analysis result; a super-resolution reconstruction algorithm magnifies a video by a given zoom magnification, predicting pixel values as accurately as possible while raising the resolution, and can also perform image-quality improvement at the same resolution.
At present, video pre-processing schemes can each solve only a specific problem at the imaging end of an electronic device: a noise reduction algorithm only removes noise introduced during video imaging, a super-resolution reconstruction algorithm only raises the resolution of a video, and so on, so the application scene of each scheme is single. In actual camera use, however, the video imaging process is highly random, the shooting and recording scenes of a large number of users are uncontrollable, and the problems to be solved vary widely. For example, a bright outdoor daytime scene mainly needs improvements in light, brightness, and color rather than noise, whereas an outdoor night scene needs improvements in brightness, noise, and video atmosphere; a single algorithm cannot cover all these scenes.
In view of the above problems, the inventors, through long-term research, propose the video processing method, video processing device, electronic device, and storage medium provided in the embodiments of the present application, which improve the video enhancement effect by identifying the scene type corresponding to the captured scene of a video to be processed and selecting the algorithm corresponding to that scene type for video enhancement processing. The specific video processing method is described in detail in the following embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a video processing method according to an embodiment of the present application. The video processing method is used for identifying the scene type corresponding to the acquisition scene of the video to be processed and selecting the corresponding algorithm for video enhancement processing, so that the video enhancement effect can be improved. In a specific embodiment, the video processing method is applied to the video processing apparatus 200 shown in fig. 11 and the electronic device 100 (fig. 12) equipped with the video processing apparatus 200. The specific process of the present embodiment will be described below by taking an electronic device as an example, and it is understood that the electronic device applied in the present embodiment may include a smart phone, a tablet computer, a wearable electronic device, and the like, which is not limited herein. As will be described in detail with reference to the flow shown in fig. 1, in this embodiment, the electronic device includes an image sensor, and the video processing method may specifically include the following steps:
step S110: and acquiring the video to be processed acquired by the image sensor.
In this embodiment, the electronic device includes an image sensor, wherein the image sensor may include a camera, a video camera, or other sensors for image acquisition. As one way, the electronic device may perform video capture as the to-be-processed video through the image sensor, for example, the electronic device may perform video capture as the to-be-processed video in a video recording mode through the image sensor.
In some embodiments, when the image sensor is a camera, the video to be processed may be acquired by a front camera of the electronic device, for example, capturing the user's self-timer video as the video to be processed; it may be acquired by a rear camera of the electronic device, for example, capturing the video the user shoots as the video to be processed; or it may be acquired by a rotatable camera of the electronic device, which, by rotating, can capture either self-timer video or outward-facing video, with no limitation made here.
In some embodiments, the video to be processed may include people, animals, buildings, sky, sea, grass, and the like, without limitation.
Step S120: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Further, after the electronic device acquires a to-be-processed video through the image sensor, the to-be-processed video may be input into a trained scene detection model. The trained scene detection model is obtained through machine learning: a training data set is collected first, where the attributes or characteristics of one class of data in the training data set differ from those of the other classes; a neural network (the scene detection network) is then trained and modeled on the collected training data set according to a preset algorithm, so that a mapping rule is learned from the training data, yielding the trained scene detection model. In the present embodiment, the training data set may be, for example, a plurality of videos and a plurality of scene types having a correspondence with them.
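As an illustration only, a minimal Python/PyTorch sketch of such a scene detection model is given below. The architecture, the six scene labels, and the dummy training batch are all assumptions; the application does not specify a network structure.

```python
import torch
import torch.nn as nn

class SceneDetectionModel(nn.Module):
    """A tiny CNN classifier standing in for the scene detection network."""
    def __init__(self, num_scene_types: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_scene_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One training step on (video frame, scene label) pairs from the training set.
model = SceneDetectionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 3, 256, 256)   # a dummy batch of video frames
labels = torch.randint(0, 6, (8,))    # dummy scene-type labels
loss = nn.CrossEntropyLoss()(model(frames), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```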
It is to be understood that the trained scene detection model may be stored locally on the electronic device after pre-training is completed. Based on this, after the electronic device acquires the video to be processed, the trained scene detection model may be directly called locally, for example, an instruction may be directly sent to the scene detection model to instruct the trained scene detection model to read the video to be processed in the target storage area, or the electronic device may directly input the video to be processed into the trained scene detection model stored locally, thereby effectively avoiding reduction in the speed of inputting the video to be processed into the trained scene detection model due to the influence of network factors, so as to improve the speed of acquiring the video to be processed by the trained scene detection model, and improve user experience.
In addition, the trained scene detection model may be stored in a server in communication connection with the electronic device after being trained in advance. Based on this, after the electronic device collects the video to be processed, the electronic device may send an instruction to the trained scene detection model stored in the server through the network to instruct the trained scene detection model to read the video to be processed collected by the electronic device through the network, or the electronic device may send the video to be processed to the trained scene detection model stored in the server through the network, so that the occupation of the storage space of the electronic device is reduced and the influence on the normal operation of the electronic device is reduced by storing the trained scene detection model in the server.
In some embodiments, the trained scene detection model outputs corresponding information based on the read video to be processed, and the electronic device then obtains the information output by the model; specifically, the electronic device may obtain the target scene type corresponding to the capture scene of the video to be processed output by the trained scene detection model. It can be understood that, if the trained scene detection model is stored locally on the electronic device, the electronic device directly obtains the information output by the model; if the trained scene detection model is stored in the server, the electronic device may obtain the output information from the server through a network.
In some embodiments, the target scene type corresponding to the capture scene of the video to be processed may include: indoor HDR, outdoor HDR, indoor night scene, outdoor night scene, normal indoor scene, normal outdoor scene, etc., which are not limited herein.
Step S130: determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm, wherein the video enhancement processing improves the video quality of the video to be processed by processing images in the video to be processed through the target algorithm.
In some embodiments, the electronic device may preset and store a plurality of algorithms, each of which performs a different video enhancement processing on the video to be processed, where the plurality of algorithms may include: HDR algorithms, enhancement algorithms, contrast and saturation processing algorithms, night scene algorithms, etc., which are not limited herein. In this embodiment, after the target scene type corresponding to the acquisition scene of the video to be processed is obtained, the algorithm corresponding to the target scene type may be determined from the plurality of preset algorithms as the target algorithm, and video enhancement processing performed on the video to be processed based on the target algorithm. For example, when the target scene type is indoor HDR, the HDR algorithm corresponding to indoor HDR may be selected from among the HDR algorithm, the enhancement algorithm, the contrast and saturation processing algorithm, and the night scene algorithm as the target algorithm, and video enhancement processing performed on the video to be processed based on the HDR algorithm, thereby improving the video quality of the video to be processed through the video enhancement processing.
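As an illustration, the selection of the target algorithm from the preset algorithms can be sketched as a simple lookup. The scene labels and the placeholder algorithm functions below are assumptions for illustration only:

```python
# Placeholder enhancement passes; in practice each would implement the
# corresponding preset algorithm (HDR, night scene, and so on).
def hdr_algorithm(frame): return frame
def night_scene_algorithm(frame): return frame
def contrast_saturation_algorithm(frame): return frame
def enhancement_algorithm(frame): return frame

PRESET_ALGORITHMS = {
    "indoor_hdr": hdr_algorithm,
    "outdoor_hdr": hdr_algorithm,
    "indoor_night": night_scene_algorithm,
    "outdoor_night": night_scene_algorithm,
    "normal_indoor": contrast_saturation_algorithm,
    "normal_outdoor": enhancement_algorithm,
}

def select_target_algorithm(target_scene_type: str):
    # Determine the algorithm corresponding to the target scene type
    # from the preset set of algorithms.
    return PRESET_ALGORITHMS[target_scene_type]
```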
The video image quality includes definition, sharpness, lens distortion, color, resolution, color gamut range, purity, and so on; different combinations of these yield different video enhancement effects. It should be noted that the video enhancement processing of the video to be processed may also be understood as a series of operations performed before formal processing of the video to be processed, including image enhancement, image restoration, and the like. Image enhancement adds information to or transforms the data of an original image by some means, selectively highlighting interesting features in the image or suppressing unneeded ones, so that the image matches the target optimization parameters, thereby improving image quality and enhancing the visual effect. It can be understood that video enhancement processing can be performed on the video resource to be played through the optimization parameters and optimization mode corresponding to the target algorithm. Taking the optimization parameters as an example, performing video enhancement processing on the video to be processed may include at least one of exposure enhancement, denoising, edge sharpening, contrast increase, or saturation increase.
Specifically, a video to be processed displayed by the electronic device is decoded image content, and since the decoded image content is data in an RGBA format, in order to optimize the image content, the data in the RGBA format needs to be converted into an HSV format, specifically, a histogram of the image content is obtained, a parameter for converting the data in the RGBA format into the HSV format is obtained by performing statistics on the histogram, and the data in the RGBA format is converted into the HSV format according to the parameter.
To enhance image brightness through exposure enhancement, the luminance of overly dark regions may be increased using the image histogram, or the luminance may be increased by nonlinear superposition. Specifically, if I denotes the dark image to be processed and T the brighter image after processing, the exposure may be enhanced by T(x) = I(x) + I(x)·(1 − I(x)), where T and I are both images with values in [0, 1]. The algorithm can be iterated multiple times if a single pass is not effective enough.
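A minimal sketch of this nonlinear brightening in Python/NumPy, assuming luminance values normalized to [0, 1]:

```python
import numpy as np

def enhance_exposure(image: np.ndarray, iterations: int = 1) -> np.ndarray:
    # Each pass applies T(x) = I(x) + I(x) * (1 - I(x)), which brightens
    # dark pixels the most and leaves already-bright pixels nearly unchanged.
    t = image.astype(np.float64)
    for _ in range(iterations):       # iterate if one pass is not enough
        t = t + t * (1.0 - t)
    return np.clip(t, 0.0, 1.0)
```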
The image content is denoised to remove noise of the image, and particularly, the image is degraded due to interference and influence of various noises in the generation and transmission processes, which adversely affects the processing of subsequent images and the image visual effect. The noise is of many kinds, such as: electrical noise, mechanical noise, channel noise and other noise. Therefore, in order to suppress noise, improve image quality, and facilitate higher-level processing, it is necessary to perform denoising preprocessing on an image. From the probability distribution of noise, there are gaussian noise, rayleigh noise, gamma noise, exponential noise and uniform noise.
Specifically, the image can be denoised by a gaussian filter, wherein the gaussian filter is a linear filter, and can effectively suppress noise and smooth the image. The principle of action is similar to that of an averaging filter, and the average value of pixels in a filter window is taken as output. The coefficients of the window template are different from those of the average filter, and the template coefficients of the average filter are all the same and are 1; while the coefficients of the template of the gaussian filter decrease with increasing distance from the center of the template. Therefore, the gaussian filter blurs the image to a lesser extent than the mean filter.
For example, a 5 × 5 Gaussian filter window is generated, sampling with the center of the template as the coordinate origin. The coordinates of each position of the template are substituted into the Gaussian function, and the resulting values are the template coefficients. The Gaussian filter window is then convolved with the image to denoise it.
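A sketch of generating such a 5 × 5 Gaussian window and convolving it with the image, assuming a SciPy-based convolution and an illustrative sigma of 1.0:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    # Sample the 2-D Gaussian with the template centre as the coordinate
    # origin; the coefficients shrink with distance from the centre.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()  # normalise so the coefficients sum to 1

def gaussian_denoise(image: np.ndarray) -> np.ndarray:
    # Convolve the Gaussian filter window with the image to denoise it.
    return convolve(image.astype(np.float64), gaussian_kernel(5, 1.0))
```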
Wherein edge sharpening is used to sharpen the blurred image. There are generally two methods for image sharpening: one is a differential method, and the other is a high-pass filtering method.
Contrast stretching is another method of image enhancement and belongs to grayscale transformation operations. By stretching the gray values through the grayscale transformation to cover the whole 0-255 interval, the contrast is greatly enhanced. The following formula can be used to map the gray value of a pixel to a larger gray space:
I(x,y)=[(I(x,y)-Imin)/(Imax-Imin)](MAX-MIN)+MIN;
where Imin, Imax are the minimum and maximum grayscale values of the original image, and MIN and MAX are the minimum and maximum grayscale values of the grayscale space to be stretched.
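A direct transcription of this contrast-stretching formula, assuming an 8-bit grayscale image and the full 0-255 target range:

```python
import numpy as np

def stretch_contrast(image: np.ndarray, new_min: float = 0.0,
                     new_max: float = 255.0) -> np.ndarray:
    # Map grey values from [Imin, Imax] onto [MIN, MAX] per the formula above.
    i_min, i_max = float(image.min()), float(image.max())
    scaled = (image.astype(np.float64) - i_min) / (i_max - i_min)
    return (scaled * (new_max - new_min) + new_min).astype(np.uint8)
```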
Therefore, the optimization parameters of the display enhancement processing based on the target algorithm may include one or more of the above optimization parameters, and the video to be processed may be processed according to the optimization parameters included in the target algorithm, so as to obtain a video enhancement effect matched with the target scene type corresponding to the acquisition scene of the video to be processed.
The video processing method provided by one embodiment of the application includes the steps of obtaining a to-be-processed video collected through an image sensor, inputting the to-be-processed video into a trained scene detection model, obtaining a target scene type corresponding to a collection scene of the to-be-processed video output by the trained scene detection model, determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the to-be-processed video based on the target algorithm, wherein the video enhancement processing improves video image quality of the to-be-processed video by processing an image in the to-be-processed video through the target algorithm, so that a video enhancement effect can be improved by identifying the scene type corresponding to the collection scene of the to-be-processed video and selecting the algorithm corresponding to the scene type for video enhancement processing.
Referring to fig. 2, fig. 2 is a flow chart illustrating a video processing method according to another embodiment of the present application. The method is applied to the electronic device including the image sensor, and as will be described in detail with respect to the flow shown in fig. 2, the video processing method may specifically include the following steps:
step S210: and acquiring the video to be processed acquired by the image sensor.
Step S220: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
For detailed description of steps S210 to S220, please refer to steps S110 to S120, which are not described herein again.
Step S230: and acquiring detection parameters of the image sensor.
In the present embodiment, the detection parameters of the image sensor may be acquired, wherein the detection parameters may include at least one of an exposure parameter AE, a white balance parameter AWB, and a face detection parameter FD.
In some embodiments, an EXIF function of the electronic device may be turned on, and when the to-be-processed video is collected by the image sensor, it may be collected in a liveshot manner. The collected to-be-processed video comprises multiple frames; a frame of these may be recorded as img1, and the detection parameters of the image sensor may then be obtained based on img1.
Step S240: and verifying the target scene type based on the detection parameters to obtain a verification result.
In this embodiment, after obtaining the detection parameters of the image sensor, the target scene type output by the trained scene detection model may be verified based on the detection parameters, and a verification result is obtained. For example, when the detection parameter is an exposure parameter, the target scene type may be verified based on the exposure parameter to obtain a verification result; when the detection parameters are white balance parameters, the target scene type can be verified based on the white balance parameters to obtain a verification result; when the detection parameters are the exposure parameters and the white balance parameters, the type of the target scene can be verified based on the exposure parameters and the white balance parameters, and a verification result is obtained.
Referring to fig. 3, fig. 3 is a flowchart illustrating step S240 of the video processing method illustrated in fig. 2 according to the present application. As will be explained in detail with respect to the flow shown in fig. 3, the method may specifically include the following steps:
step S241: and calculating the detection parameters based on a preset formula corresponding to the target scene type to obtain a score value.
In some embodiments, the electronic device may preset and store preset formulas corresponding to the respective scene types, and therefore, in this embodiment, the preset formulas corresponding to the target scene types may be determined based on the preset formulas corresponding to the respective scene types preset by the electronic device, and the detection parameters are calculated based on the preset formulas corresponding to the target scene types to obtain the score values.
For example, take the target scene type to be a backlight scene and the detection parameter to be an exposure parameter. The main contributing parameters of the exposure parameter are lux_index (the smaller the value, the brighter the scene) and drcGain (the larger the value, the higher the scene dynamic range); the preset formula corresponding to the backlight scene may take the form:

Score = ω · (lux_base / lux) + (1 − ω) · (Drc / Drc_base)

where lux_base and Drc_base are base parameters preset by the electronic device for the backlight scene, lux and Drc are obtained from the exposure parameters of the image sensor, ω denotes the weight of the lux_index term, and (1 − ω) the weight of the drcGain term. Thus, after the exposure parameters are obtained, the score value can be calculated from lux and Drc according to this formula.
Step S242: and comparing the score value with a score threshold corresponding to the target scene type to obtain a comparison result.
In some embodiments, the electronic device may preset and store a score threshold corresponding to each scene type; in this embodiment, the score threshold corresponding to the target scene type may therefore be determined from these preset thresholds, and the calculated score value compared against it to obtain the comparison result. As one approach, when presetting the score threshold for each scene type, the electronic device may adjust the threshold according to how the video to be processed is handled. For example, because the video to be processed undergoes ISP processing before being input into the trained scene detection model, and that processing may involve cropping the image while the exposure-parameter statistics are gathered over the full-resolution video frame, the confidence of the exposure parameter may be reduced; therefore, when judging by the exposure parameter, the score threshold corresponding to the target scene type may be set lower, indirectly increasing the weight of the trained scene detection model in calibrating the classification.
The comparison result of the score value and the score threshold is one of: the score value is greater than the score threshold, equal to it, or less than it. In some embodiments, when the comparison result indicates that the score value is greater than the score threshold corresponding to the target scene type, it may be determined that the verification result satisfies the scene condition corresponding to the target scene type; the algorithm corresponding to the target scene type may then be determined from the plurality of preset algorithms as the target algorithm, and video enhancement processing performed on the video to be processed based on the target algorithm. For example, assuming the score threshold is Score_thr, then when Score > Score_thr, the verification result is determined to satisfy the scene condition corresponding to the target scene type.
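A sketch of this verification step; the exact weighted form of the score is an assumption reconstructed from the description above (base parameters lux_base and Drc_base, weights ω and 1 − ω):

```python
def backlight_score(lux: float, drc: float, lux_base: float,
                    drc_base: float, omega: float) -> float:
    # Both ratios grow as the scene looks more backlit: a small lux_index
    # means a bright scene, and a large drcGain a high-dynamic-range one.
    return omega * (lux_base / lux) + (1.0 - omega) * (drc / drc_base)

def verify_scene(lux, drc, lux_base, drc_base, omega, score_thr) -> bool:
    # The verification result satisfies the scene condition only when the
    # score value exceeds the score threshold of the target scene type.
    return backlight_score(lux, drc, lux_base, drc_base, omega) > score_thr
```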
Step S250: and when the verification result meets the scene condition corresponding to the target scene type, determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm.
In some embodiments, the electronic device may preset and store scene conditions corresponding to each scene type, for example, assuming that the scene types include indoor HDR, outdoor HDR, indoor night scene, outdoor night scene, common indoor scene, and common outdoor scene, the electronic device may preset and store a first scene condition corresponding to the indoor HDR, a second scene condition corresponding to the outdoor HDR, a third scene condition corresponding to the indoor night scene, a fourth scene condition corresponding to the outdoor night scene, a fifth scene condition corresponding to the common indoor scene, and a sixth scene condition corresponding to the common outdoor scene. Therefore, in this embodiment, the scene condition corresponding to the target scene type may be determined based on the scene condition corresponding to each scene type preset by the electronic device, and the verification result may be compared with the scene condition corresponding to the target scene type to determine whether the verification result satisfies the scene condition corresponding to the target scene type, where when the verification result satisfies the scene condition corresponding to the target scene type, it is determined that the accuracy of the target scene type output by the trained scene detection model is high, and then the algorithm corresponding to the target scene type may be determined from a plurality of preset algorithms as the target algorithm.
Step S260: and adjusting the parameters of the target algorithm based on the detection parameters and the target scene type to obtain the adjusted target algorithm.
In this embodiment, after obtaining the detection parameters of the image sensor and determining the target algorithm corresponding to the target scene type, the parameters (control parameters, hyper-parameters) of the target algorithm may be adjusted based on the detection parameters and the target scene type, so as to obtain the adjusted target algorithm.
The detection parameters can adaptively adjust the parameters of the algorithm (the hyper-parameters that some algorithms expose), thereby adjusting how strongly the specific algorithm processes the video to be processed. For example, taking the detection parameter as an exposure parameter and the scene type as a dim-light scene, the brightness enhancement degree and the denoising degree in the night scene algorithm are determined according to the lux_index and gain values of the exposure parameter; taking the detection parameter as an exposure parameter and the scene type as a backlight scene, the degree of highlight suppression, the degree of dark-area brightening, and so on are determined according to the lux_index, drcGain, and darkBoostGain values of the exposure parameter. Likewise, the white balance parameters may assist algorithms related to color and saturation processing.
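As a hedged illustration of such adaptive tuning, the sketch below maps exposure statistics to night-scene algorithm strengths by linear interpolation; the parameter ranges are invented for illustration and are not from the application:

```python
import numpy as np

def tune_night_scene_params(lux_index: float, gain: float) -> dict:
    # Darker scenes (larger lux_index) and higher sensor gain call for
    # stronger brightening and denoising.
    brighten = float(np.interp(lux_index, [300.0, 500.0], [0.2, 1.0]))
    denoise = float(np.interp(gain, [1.0, 16.0], [0.1, 1.0]))
    return {"brighten_strength": brighten, "denoise_strength": denoise}
```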
Step S270: and performing video enhancement processing on the video to be processed based on the adjusted target algorithm.
In this embodiment, after the adjusted target algorithm is obtained, video enhancement processing may be performed on the video to be processed based on the adjusted target algorithm, so that the effect of video enhancement processing may be improved.
Compared with the video processing method shown in fig. 1, in the video processing method provided in another embodiment of the present application, detection parameters of the image sensor are further obtained, the target scene type output by the trained scene detection model is verified based on the detection parameters of the image sensor, a verification result is obtained, and when the verification result meets the scene condition corresponding to the target scene type, a corresponding target algorithm is obtained to perform video enhancement processing on the video to be processed, so that accuracy of scene type identification and judgment can be improved, and an effect of video enhancement processing is improved. In addition, in this embodiment, parameters of the target algorithm are adjusted according to the detection parameters of the image sensor and the type of the target scene, and the video enhancement processing is performed on the video to be processed through the target algorithm after the parameters are adjusted, so that the video enhancement effect can be improved.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a video processing method according to still another embodiment of the present application. The method is applied to the electronic device including the image sensor, and as will be described in detail with respect to the flow shown in fig. 4, the video processing method may specifically include the following steps:
step S310: and acquiring the video to be processed acquired by the image sensor.
For detailed description of step S310, please refer to step S110, which is not described herein again.
Step S320: and acquiring a first image to be processed from a plurality of frame images included in the video to be processed.
In some embodiments, when the scene type is detected by the trained scene detection model, a single frame of the video to be processed is detected; therefore, in this embodiment, a single frame may be acquired from the multiple frames included in the video to be processed as the first image to be processed. As one mode, after the to-be-processed video acquired by the image sensor is obtained, ISP processing may be performed on the video, and the first to-be-processed image acquired from the frames of the ISP-processed video. As another mode, dump ISP processing may be performed on the video, and the first to-be-processed image acquired from the frames of the video so processed.
Step S330: and performing downsampling processing on the first image to be processed to obtain a target image to be processed.
In this embodiment, after the first to-be-processed image is obtained, it may be downsampled to obtain the target to-be-processed image, which serves as the input of the trained scene detection model. As one way, the size of the target to-be-processed image may be 256 × 256; of course, this size may also be changed according to the scale and specific requirements of the trained scene detection model.
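A one-line sketch of this downsampling step, assuming OpenCV is available:

```python
import cv2

def prepare_model_input(first_frame):
    # Down-sample the first to-be-processed image to the model input size;
    # 256 x 256 matches the example size given above.
    return cv2.resize(first_frame, (256, 256), interpolation=cv2.INTER_AREA)
```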
Step S340: and inputting the target image to be processed into the trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the first image to be processed output by the trained scene detection model as the target scene type corresponding to the acquisition scene of the video to be processed.
Step S350: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For detailed description of steps S340 to S350, please refer to steps S120 to S130, which are not described herein again.
Compared with the video processing method shown in fig. 1, the video processing method provided in another embodiment of the present application further performs downsampling on the to-be-processed image in the to-be-processed video to obtain a target to-be-processed image, and inputs the target to-be-processed image into the trained scene detection model as an input parameter to perform scene type detection, so that power consumption, memory, and detection time of the electronic device are saved by reducing the size of the input parameter.
Referring to fig. 5, fig. 5 is a flowchart illustrating a video processing method according to another embodiment of the present application. The method is applied to the electronic device including the image sensor, and as will be described in detail with respect to the flow shown in fig. 5, the video processing method may specifically include the following steps:
step S410: and acquiring the video to be processed acquired by the image sensor.
For detailed description of step S410, please refer to step S110, which is not described herein again.
Step S420: and sequentially acquiring a frame of image as a second image to be processed according to a preset frame number interval from a plurality of frames of images included in the video to be processed.
The video to be processed is composed of continuous multi-frame images. In this embodiment, one frame may be acquired from the frames of the video to be processed at a preset frame-number interval as the second image to be processed, and this image is what the trained scene detection model uses to detect the scene type; that is, scene type detection may be performed once every N frames (for example, once every 5 frames). This reduces power and resource consumption and, at the same time, lowers the probability of display flicker caused by frequent scene switching, improving the user experience.
For example, assuming that the video to be processed includes 10 frames of images, the 10 frames of images of the video to be processed may not be all input into the trained scene detection model for detecting the scene type, but a part of the frame images (for example, the 1 st frame image, the 5 th frame image, and the 10 th frame image) are selected from the 10 frames of images and input into the trained scene detection model for detecting the scene type, so as to reduce power consumption and resource consumption, and at the same time, the probability of display flicker problem caused by frequent scene switching may also be reduced, and the use experience of the user may be improved.
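A small sketch of this frame-interval sampling (the interval of 5 follows the example above):

```python
def frames_for_detection(frames, interval: int = 5):
    # Yield one frame every `interval` frames; scene detection runs only
    # on these frames, and the others reuse the latest detection result.
    for index, frame in enumerate(frames):
        if index % interval == 0:
            yield index, frame
```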
Step S430: and inputting the second image to be processed into the trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the second image to be processed output by the trained scene detection model as a target scene type corresponding to the acquisition scene of the video to be processed.
Step S440: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For detailed description of steps S430 to S440, please refer to steps S120 to S130, which are not described herein again.
Compared with the video processing method shown in fig. 1, in the video processing method provided in another embodiment of the present application, in this embodiment, one frame of image is sequentially obtained from multiple frames of images included in the video to be processed according to a preset frame number interval, and the image to be processed is input to the trained scene detection model as an input parameter to perform scene type detection, so that the probability of a display flicker problem caused by frequent scene switching is reduced, and the use experience of a user is improved.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a video processing method according to yet another embodiment of the present application. The method is applied to the electronic device including the image sensor, and as will be described in detail with respect to the flow shown in fig. 6, the video processing method may specifically include the following steps:
step S510: and acquiring the video to be processed acquired by the image sensor.
Step S520: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Step S530: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For the detailed description of steps S510 to S530, please refer to steps S110 to S130, which are not described herein again.
Step S540: when the scene type corresponding to the acquisition scene of the video to be processed changes, acquiring the change frequency of the scene type corresponding to the acquisition scene of the video to be processed.
In this embodiment, in the process of processing the video to be processed through the target algorithm corresponding to the target scene type, the scene type corresponding to the acquisition scene of the video to be processed may be continuously detected to determine whether the scene type corresponding to the acquisition scene of the video to be processed changes, where when it is detected that the scene type corresponding to the acquisition scene of the video to be processed changes (it is detected that the scene type corresponding to the acquisition scene of the video to be processed is not the target scene type), the change frequency of the scene type corresponding to the acquisition scene of the video to be processed may be obtained.
Step S550: and when the change frequency is greater than a frequency threshold value, continuing to perform video enhancement processing on the video to be processed based on the target algorithm.
In some embodiments, the electronic device may preset and store a frequency threshold, where the frequency threshold is used as a criterion for determining a change frequency of a scene type corresponding to a capture scene of the video to be processed. Therefore, in the present embodiment, when obtaining the variation frequency, the variation frequency may be compared with a frequency threshold to determine whether the variation frequency is greater than the frequency threshold, and a determination result is obtained.
When the judgment result indicates that the change frequency is greater than the frequency threshold, it can be determined that the acquisition scene corresponding to the video to be processed is being switched frequently; frequent switching may cause flicker and stuttering in the display, so video enhancement processing continues to be performed on the video to be processed based on the current target algorithm.
When the judgment result indicates that the change frequency is not greater than the frequency threshold, it can be determined that the acquisition scene corresponding to the video to be processed is being switched normally, without causing flicker or stuttering in the display.
As one approach, a global queue Q_i may be created first, in which the results of the past M scene type detections are saved. The number of distinct scene types in queue Q_i can then be counted and recorded as SceneNum. If SceneNum is larger than a specified threshold Scene_thr, scene switching is considered frequent, that is, the change frequency of the scene is greater than the frequency threshold; the current detection result is then discarded and the last scene type detection result is output (although queue Q_i is still updated with the current detection result), until SceneNum falls below the threshold Scene_thr.
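A sketch of the queue-based smoothing just described; the window size M and the threshold Scene_thr values are illustrative:

```python
from collections import deque

class SceneTypeSmoother:
    def __init__(self, window_m: int = 10, scene_thr: int = 3):
        self.queue = deque(maxlen=window_m)  # the global queue Q_i
        self.scene_thr = scene_thr
        self.last_output = None

    def update(self, detected_scene: str) -> str:
        self.queue.append(detected_scene)    # Q_i is always updated
        scene_num = len(set(self.queue))     # SceneNum over the past M results
        if scene_num > self.scene_thr and self.last_output is not None:
            return self.last_output          # switching too frequent: keep old result
        self.last_output = detected_scene
        return detected_scene
```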
Compared with the video processing method shown in fig. 1, the video processing method provided in another embodiment of the present application further continues video enhancement processing on the video to be processed based on the target algorithm when the scene type corresponding to the acquisition scene of the video to be processed changes but the change frequency is greater than the frequency threshold, thereby reducing the probability of display flicker caused by frequent scene switching and improving the user experience.
Referring to fig. 7, fig. 7 is a flowchart illustrating a video processing method according to yet another embodiment of the present application. The method is applied to the electronic device, which includes an image sensor, and will be described in detail with reference to a flow shown in fig. 7, in this embodiment, the target scene type is a first target scene type, and the target algorithm is a first target algorithm, where the video processing method specifically includes the following steps:
step S610: and acquiring the video to be processed acquired by the image sensor.
Step S620: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Step S630: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For detailed description of steps S610 to S630, refer to steps S110 to S130, which are not described herein again.
Step S640: when the scene type corresponding to the collection scene of the video to be processed is changed from the first target scene type to a second target scene type, acquiring the times of changing the scene type from the first target scene type to the second target scene type continuously.
In this embodiment, in the process of processing the video to be processed through the first target algorithm corresponding to the first target scene type, the scene type corresponding to the acquisition scene of the video to be processed may be continuously detected to determine whether the scene type corresponding to the acquisition scene of the video to be processed changes, where when it is detected that the scene type corresponding to the acquisition scene of the video to be processed changes from the first target scene type to the second target scene type, the number of times that the scene type continuously changes from the first target scene type to the second target scene type may be obtained.
In some embodiments, when the current frame first detects that the scene type corresponding to the acquisition scene of the video to be processed has changed from the first target scene type to the second target scene type, the change may be recorded once and video enhancement processing may continue based on the first target algorithm. When the next frame again detects the change from the first target scene type to the second target scene type, the recorded count is accumulated. When a subsequent frame detects no change, or detects a change from the first target scene type to some other target scene type different from the second target scene type, the recorded count is cleared.
Step S650: and when the times reach preset times, determining an algorithm corresponding to the second target scene type from the preset multiple algorithms as a second target algorithm, and performing video enhancement processing on the video to be processed based on the second target algorithm.
In some embodiments, the electronic device may preset and store a preset number of times, used as the basis for judging the number of times the scene type has continuously changed from the first target scene type to the second target scene type. Therefore, in this embodiment, after the number of times the scene type continuously changes from the first target scene type to the second target scene type is obtained, it may be compared with the preset number of times. When the number of times reaches the preset number, the algorithm corresponding to the second target scene type may be determined from the plurality of preset algorithms as the second target algorithm, and video enhancement processing performed on the video to be processed based on the second target algorithm; when it does not, video enhancement processing continues to be performed based on the first target algorithm.
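A sketch of this consecutive-change rule; the preset count of 3 is an illustrative value:

```python
class SceneChangeDebouncer:
    def __init__(self, first_target_scene: str, preset_count: int = 3):
        self.current = first_target_scene
        self.preset_count = preset_count
        self.candidate, self.count = None, 0

    def update(self, detected_scene: str) -> str:
        if detected_scene == self.current:
            self.candidate, self.count = None, 0  # no change: clear the count
        elif detected_scene == self.candidate:
            self.count += 1                       # consecutive repeat of the new type
        else:
            self.candidate, self.count = detected_scene, 1  # a different type resets
        if self.count >= self.preset_count:
            self.current = detected_scene         # adopt the second target type
            self.candidate, self.count = None, 0
        return self.current
```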
Compared with the video processing method shown in fig. 1, in the video processing method provided in another embodiment of the present application, when the scene type corresponding to the capture scene of the video to be processed changes from the first target scene type to the second target scene type, the frequency of continuously changing the scene type from the first target scene type to the second target scene type is obtained, and when the frequency reaches the preset frequency, the video enhancement processing is performed on the video to be processed by obtaining the second target algorithm corresponding to the second target scene type, so that the probability of the display flicker problem caused by frequent scene switching is reduced, and the use experience of a user is improved.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a video processing method according to yet another embodiment of the present application. The method is applied to the electronic device, which includes an image sensor, and the following will describe in detail with respect to a flow shown in fig. 8, where in this embodiment, the target scene type is a first target scene type, and the target algorithm is a first target algorithm, and the video processing method may specifically include the following steps:
step S710: and acquiring the video to be processed acquired by the image sensor.
Step S720: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Step S730: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For the detailed description of steps S710 to S730, refer to steps S110 to S130, which are not described herein again.
Step S740: when the scene type corresponding to the collection scene of the video to be processed is changed from the first target scene type to a second target scene type, determining an algorithm corresponding to the second target scene type from the preset multiple algorithms as a second target algorithm.
In this embodiment, while the video to be processed is being processed by the first target algorithm corresponding to the first target scene type, the scene type corresponding to the acquisition scene of the video to be processed may be continuously detected to determine whether it changes. When it is detected that the scene type corresponding to the acquisition scene of the video to be processed changes from the first target scene type to the second target scene type, an algorithm corresponding to the second target scene type may be determined from the plurality of preset algorithms as the second target algorithm. The first target scene type is different from the second target scene type, and the first target algorithm is different from the second target algorithm.
Step S750: and performing video enhancement processing on the video to be processed based on the first target algorithm, and gradually reducing the weight corresponding to the output parameter of the first target algorithm.
In order to prevent the flicker problem introduced by hard-switching the algorithm (switching the first target algorithm directly to the second target algorithm), a smooth transition is required. In this embodiment, a strategy of fusing an algorithm's input and output may be adopted for the inter-algorithm transition. Specifically, video enhancement processing may continue to be performed on the video to be processed based on the first target algorithm while the weight corresponding to the output parameter of the first target algorithm is gradually reduced. As one approach, video enhancement processing may continue to be performed on the video to be processed based on the first target algorithm while the weight corresponding to the output parameter of the first target algorithm is gradually reduced and the weight corresponding to the input parameter of the first target algorithm is gradually increased, where the sum of the weight corresponding to the output parameter of the first target algorithm and the weight corresponding to the input parameter of the first target algorithm is always equal to 1.
Step S760: and when the weight corresponding to the output parameter of the first target algorithm is reduced to a preset weight, performing video enhancement processing on the video to be processed based on the second target algorithm, and gradually increasing the weight corresponding to the output parameter of the second target algorithm.
In some embodiments, the electronic device may preset and store a preset weight, which serves as the reference value to which the weight corresponding to the output parameter of the first target algorithm is reduced. Therefore, in this embodiment, while the weight corresponding to the output parameter of the first target algorithm is being reduced, it may be detected whether that weight has been reduced to the preset weight. When the weight corresponding to the output parameter of the first target algorithm is reduced to the preset weight, video enhancement processing of the video to be processed based on the first target algorithm may be stopped, video enhancement processing of the video to be processed based on the second target algorithm may be started, and the weight corresponding to the output parameter of the second target algorithm may be gradually increased. As one approach, video enhancement processing may be performed on the video to be processed based on the second target algorithm while the weight corresponding to the output parameter of the second target algorithm is gradually increased and the weight corresponding to the input parameter of the second target algorithm is gradually decreased, where the sum of the weight corresponding to the output parameter of the second target algorithm and the weight corresponding to the input parameter of the second target algorithm is 1.
In some embodiments, the predetermined weight is 0. When the weight corresponding to the output parameter of the first target algorithm is reduced to 0, the video enhancement processing of the video to be processed based on the first target algorithm may be stopped, the video enhancement processing of the video to be processed based on the second target algorithm may be started, and the weight corresponding to the output parameter of the second target algorithm may be gradually increased until the weight corresponding to the output parameter of the second target algorithm is increased to 1.
In some embodiments, a global state variable S (taking the values 0, 1, and 2) and a global frame-count variable F (ranging from 0 to N × M, here 15: scene type detection is performed once every 5 frames, and switching requires 3 consecutive detections reporting another scene type) may be maintained. For the state variable S and the frame-count variable F: when S = 0, no scene type switch has occurred; the current state is kept, the frame-count variable F is incremented by 1 frame by frame, and F is then clipped to between 0 and 15 (that is, F = min(F, 15) and F = max(F, 0)). When S = 1, the scene type detection result has changed; because of the delay mechanism, the algorithm is not switched immediately, and the frame-count variable F is instead decremented by 1 frame by frame and then clipped to between 0 and 15. When S = 2, the last M consecutive scene detection results have all reported the other scene; the algorithm needs to be switched to the algorithm corresponding to that other scene, the frame-count variable F is again incremented by 1 frame by frame and clipped to between 0 and 15, and S is reset to zero.
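The S/F bookkeeping above may be sketched as follows; this is an illustrative Python rendering under the stated assumptions (detection every N = 5 frames, M = 3 consecutive differing detections, F clamped to [0, 15]), and the function name and constant names are assumptions.

```python
N = 5                  # frames between scene-type detections
M = 3                  # consecutive differing detections required to switch
F_MAX = N * M          # 15

S = 0  # 0: no switch pending, 1: change detected (delayed), 2: switch now
F = 0  # per-frame counter, clamped to [0, F_MAX]

def step_frame() -> None:
    """Per-frame update of the counter F according to the state S."""
    global S, F
    if S == 0:
        # No scene-type switch: keep the current algorithm, F ramps up.
        F = max(0, min(F + 1, F_MAX))
    elif S == 1:
        # Detection result changed, but the delay mechanism defers the
        # switch: F ramps down instead.
        F = max(0, min(F - 1, F_MAX))
    elif S == 2:
        # M consecutive detections all reported the other scene: switch the
        # enhancement algorithm here, let F ramp up again, and reset S.
        F = max(0, min(F + 1, F_MAX))
        S = 0
```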
The fusion-based algorithm transition logic mainly records an algorithm's input parameters and output parameters for weighted fusion. When one algorithm is switched to another, the output ratio of the previous algorithm is gradually reduced; once that ratio reaches zero, the algorithm is switched to the other mode, and the output ratio of the new algorithm mode is gradually increased until its weight is 1. Table 1 shows an example taking N × M = 6.
TABLE 1 (rendered as an image in the original publication and not reproduced here; per the description above, it illustrates the fusion-based weight transition taking N × M = 6 as an example)
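As an illustration of this fusion-based transition logic, the following Python sketch blends each algorithm's output with its own input so that the two weights always sum to 1, ramping over six frames to match the N × M = 6 example; the function names, the generator structure, and the use of NumPy arrays for frames are assumptions, not part of the patent.

```python
import numpy as np

def fused_frame(algorithm, frame: np.ndarray, weight: float) -> np.ndarray:
    """Weighted fusion of the algorithm's output with its own input;
    the two weights sum to 1."""
    return weight * algorithm(frame) + (1.0 - weight) * frame

def transition(frames, old_algorithm, new_algorithm, ramp_frames: int = 6):
    """Ramp the old algorithm's output weight down to 0, switch, then
    ramp the new algorithm's output weight up to 1."""
    down = ramp_frames   # frames left in the ramp-down phase
    up = 0               # frames completed in the ramp-up phase
    for frame in frames:
        if down > 0:
            # Old algorithm's output ratio falls: 5/6, 4/6, ..., 0.
            down -= 1
            yield fused_frame(old_algorithm, frame, down / ramp_frames)
        else:
            # After the switch, the new algorithm's ratio rises to 1.
            up = min(up + 1, ramp_frames)
            yield fused_frame(new_algorithm, frame, up / ramp_frames)
```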
Compared with the video processing method shown in fig. 1, in the video processing method provided in another embodiment of the present application, when the scene type corresponding to the acquisition scene of the video to be processed changes from the first target scene type to the second target scene type, video enhancement processing is performed on the video to be processed based on the first target algorithm while the weight corresponding to the output parameter of the first target algorithm is gradually reduced, and when that weight is reduced to the preset weight, video enhancement processing is performed on the video to be processed based on the second target algorithm corresponding to the second target scene type while the weight corresponding to the output parameter of the second target algorithm is gradually increased, thereby making the displayed switch between algorithms smooth.
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating a video processing method according to yet another embodiment of the present application. The method is applied to the electronic device including the image sensor, and as will be described in detail with respect to the flow shown in fig. 9, the video processing method may specifically include the following steps:
step S810: the method comprises the steps of obtaining a training data set, wherein the training data set comprises a plurality of training videos and a plurality of training scene types, and the training videos correspond to the training scene types one by one.
In the present embodiment, a training data set is acquired first. The training data set may include a plurality of training videos and a plurality of training scene types, where the training videos and the training scene types correspond one to one. As one approach, the plurality of training videos may be captured by an image sensor, and the plurality of training scene types may be manually annotated by a user.
In some embodiments, the training data set may be stored locally in the electronic device, transmitted to the electronic device by another device, transmitted to the electronic device from a server, collected in real time by the electronic device, and the like, which is not limited herein.
Referring to fig. 10, fig. 10 is a flowchart illustrating step S810 of the video processing method illustrated in fig. 9 according to the present application. In this embodiment, the plurality of training videos include a target training video, the plurality of training scene types include a target training scene type, and the target training video corresponds to the target training scene type, which will be described in detail with reference to the flow shown in fig. 10, where the method specifically includes the following steps:
step S811: and acquiring the target training video and the scene type to be confirmed corresponding to the target training video.
In this embodiment, a target training video and the scene type to be confirmed corresponding to the target training video may be obtained, where the scene type to be confirmed corresponding to the target training video may be manually identified and annotated.
Step S812: and acquiring target detection parameters when the image sensor collects the target training video.
In this embodiment, a target detection parameter when the image sensor collects a target training video may be obtained, where the target detection parameter may include at least one of an exposure parameter AE, a white balance parameter AWB, and a face detection parameter FD.
In some embodiments, the EXIF function of the electronic device may be turned on, and the target training video may be collected by the image sensor in a liveshot manner. The collected target training video includes multiple frames of images; each frame may be recorded as img1, and the target detection parameters of the image sensor may then be obtained based on img1.
Step S813: and verifying the scene type to be confirmed based on the target detection parameters to obtain a target verification result.
In this embodiment, after the target detection parameters of the image sensor are obtained, the manually annotated scene type to be confirmed may be verified based on the target detection parameters to obtain a verification result. For example, when the target detection parameter is an exposure parameter, the scene type to be confirmed may be verified based on the exposure parameter to obtain a verification result; when the target detection parameter is a white balance parameter, the scene type to be confirmed may be verified based on the white balance parameter to obtain a verification result; and when the target detection parameters are the exposure parameter and the white balance parameter, the scene type to be confirmed may be verified based on both to obtain a verification result.
Step S814: and when the target verification result meets the scene condition corresponding to the scene type to be confirmed, determining the scene type to be confirmed as the target training scene type.
In some embodiments, the electronic device may preset and store scene conditions corresponding to various scene types. Therefore, in this embodiment, the scene condition corresponding to the scene type to be confirmed may be determined from the scene conditions preset by the electronic device, and the verification result may be compared with that scene condition to determine whether it is satisfied. When the verification result satisfies the scene condition corresponding to the scene type to be confirmed, the scene type to be confirmed is determined as the target training scene type, which improves the accuracy of the training data set and thus the training effect of the scene detection model.
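A hedged sketch of this verification step follows; the patent specifies only that preset per-scene conditions are checked against the target detection parameters (such as AE, AWB, and FD), so the concrete parameter fields, thresholds, and scene names below are illustrative assumptions.

```python
# Preset scene conditions: scene type -> predicate over detection parameters.
# All fields and thresholds here are assumptions for illustration.
SCENE_CONDITIONS = {
    "night": lambda p: p["exposure_time_ms"] > 30 and p["iso"] > 800,
    "portrait": lambda p: p["faces_detected"] >= 1,
    "daylight": lambda p: p["exposure_time_ms"] < 10,
}

def verify_label(labeled_scene: str, detection_params: dict) -> bool:
    """Return True when the manually annotated scene type satisfies its
    preset scene condition and may be kept as a training label."""
    condition = SCENE_CONDITIONS.get(labeled_scene)
    return condition is not None and condition(detection_params)

# Example: a frame captured with a 40 ms exposure at ISO 1600 supports
# the manual "night" annotation, so the pair enters the training set.
assert verify_label("night", {"exposure_time_ms": 40, "iso": 1600})
```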
Step S820: and training the scene detection network by taking the training videos as input parameters and the training scene types as output parameters to obtain a trained scene detection model.
As one approach, after the plurality of training videos and the plurality of training scene types are obtained, they may be used as a training data set to train the scene detection network to obtain a trained scene detection model. In some embodiments, the plurality of training videos may be used as input parameters and the plurality of training scene types as output parameters to train the scene detection network, yielding the trained scene detection model. In addition, after the trained scene detection model is obtained, its accuracy may be verified by checking whether the scene type output by the model for an input video meets a preset requirement. When the output does not meet the preset requirement, the training data set may be collected again to retrain the scene detection network, or further training data sets may be obtained to correct the trained model, which is not limited herein.
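As a concrete illustration of this training step, the following sketch uses PyTorch with a generic classifier over (down-sampled) frames and a dataset yielding (frame tensor, scene-type index) pairs; the patent prescribes neither a framework nor a network architecture, so every detail here is an assumption.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

def train_scene_detector(model: nn.Module, dataset: Dataset,
                         epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train with frames as input parameters and scene types as labels."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    criterion = nn.CrossEntropyLoss()  # scene types as class indices
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for frames, scene_labels in loader:
            optimizer.zero_grad()
            logits = model(frames)         # shape: (batch, num_scene_types)
            loss = criterion(logits, scene_labels)
            loss.backward()
            optimizer.step()
    return model
```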
Step S830: and acquiring the video to be processed acquired by the image sensor.
Step S840: and inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Step S850: and determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
For detailed description of steps S830 to S850, refer to steps S110 to S130, which are not described herein again.
Compared with the video processing method shown in fig. 1, the video processing method provided in yet another embodiment of the present application further obtains a training data set and trains the scene detection network with it to obtain the trained scene detection model, thereby improving the accuracy of scene type detection performed through the trained scene detection model.
Referring to fig. 11, fig. 11 is a block diagram illustrating a video processing apparatus according to an embodiment of the present disclosure. The video processing apparatus 200 is applied to the above-mentioned electronic device including an image sensor, and will be explained with reference to the block diagram shown in fig. 11, the video processing apparatus 200 includes: a to-be-processed video obtaining module 210, a target scene type obtaining module 220, and a video enhancement processing module 230, wherein:
a to-be-processed video acquiring module 210, configured to acquire a to-be-processed video acquired by the image sensor.
Further, the video processing apparatus 200 further includes: a training data set acquisition module and a model training module, wherein:
the training data set acquisition module is used for acquiring a training data set, wherein the training data set comprises a plurality of training videos and a plurality of training scene types, and the training videos correspond to the training scene types one by one.
Further, the training data set acquisition module includes: a scene type to be confirmed obtaining submodule, a target detection parameter obtaining submodule, a target verification result obtaining submodule and a target training scene type determining submodule, wherein:
and the scene type to be confirmed obtaining submodule is used for obtaining the target training video and the scene type to be confirmed corresponding to the target training video.
And the target detection parameter acquisition submodule is used for acquiring target detection parameters when the image sensor acquires the target training video.
And the target verification result obtaining submodule is used for verifying the scene type to be confirmed based on the target detection parameters to obtain a target verification result.
And the target training scene type determining submodule is used for determining the scene type to be confirmed as the target training scene type when the target verification result meets the scene condition corresponding to the scene type to be confirmed.
And the model training module is used for training the scene detection network by taking the training videos as input parameters and the training scene types as output parameters to obtain a trained scene detection model.
A target scene type obtaining module 220, configured to input the video to be processed into the trained scene detection model, and obtain a target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model.
Further, the target scene type obtaining module 220 includes: a first to-be-processed image acquisition submodule, a target to-be-processed image obtaining submodule, and a first target scene type obtaining submodule, wherein:
and the first to-be-processed image acquisition submodule is used for acquiring a first to-be-processed image from a plurality of frames of images included in the to-be-processed video.
And the target image to be processed obtaining submodule is used for carrying out downsampling processing on the first image to be processed to obtain a target image to be processed.
And the first target scene type obtaining sub-module is used for inputting the target image to be processed into the trained scene detection model, obtaining a target scene type corresponding to the acquisition scene of the first image to be processed output by the trained scene detection model, and using the target scene type as the target scene type corresponding to the acquisition scene of the video to be processed.
Further, the target scene type obtaining module 220 includes: a second to-be-processed image acquisition sub-module and a second target scene type acquisition sub-module, wherein:
and the second image acquisition submodule to be processed is used for sequentially acquiring a frame of image as a second image to be processed according to a preset frame number interval from a plurality of frames of images included in the video to be processed.
And the second target scene type obtaining sub-module is used for inputting the second image to be processed into the trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the second image to be processed output by the trained scene detection model as a target scene type corresponding to the acquisition scene of the video to be processed.
The video enhancement processing module 230 is configured to determine an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and perform video enhancement processing on the video to be processed based on the target algorithm, where the video enhancement processing improves video quality of the video to be processed by processing an image in the video to be processed through the target algorithm.
Further, the video enhancement processing module 230 includes: a detection parameter obtaining submodule, a verification result obtaining submodule, and a first video enhancement processing submodule, wherein:
and the detection parameter acquisition submodule is used for acquiring the detection parameters of the image sensor.
And the verification result obtaining submodule is used for verifying the target scene type based on the detection parameters to obtain a verification result.
Further, the verification result obtaining sub-module includes: a score value obtaining unit and a comparison result obtaining unit, wherein:
and the score value obtaining unit is used for calculating the detection parameters based on a preset formula corresponding to the target scene type to obtain a score value.
And the comparison result obtaining unit is used for comparing the score value with a score threshold value corresponding to the target scene type to obtain a comparison result.
And the first video enhancement processing sub-module is used for determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm when the verification result meets the scene condition corresponding to the target scene type, and performing video enhancement processing on the video to be processed based on the target algorithm.
Further, the first video enhancement processing sub-module includes: a video enhancement processing unit, wherein:
and the video enhancement processing unit is used for determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm when the comparison result represents that the score value is greater than the score threshold value, and performing video enhancement processing on the video to be processed based on the target algorithm.
Further, the video enhancement processing module 230 includes: a target algorithm determining submodule, a target algorithm adjusting submodule, and a second video enhancement processing submodule, wherein:
and the target algorithm determining submodule is used for determining an algorithm corresponding to the target scene type from the preset plurality of algorithms as the target algorithm.
And the target algorithm adjusting submodule is used for adjusting the parameters of the target algorithm based on the detection parameters and the target scene type to obtain the adjusted target algorithm.
And the second video enhancement processing submodule is used for carrying out video enhancement processing on the video to be processed based on the adjusted target algorithm.
Further, the video processing apparatus 200 further includes: a change frequency acquisition module and a video enhancement hold module, wherein:
and the change frequency acquisition module is used for acquiring the change frequency of the scene type corresponding to the acquisition scene of the video to be processed when the scene type corresponding to the acquisition scene of the video to be processed changes.
And the video enhancement maintaining module is used for continuously carrying out video enhancement processing on the video to be processed based on the target algorithm when the change frequency is greater than a frequency threshold value.
Further, the target scene type is a first target scene type, the target algorithm is a first target algorithm, and the video processing apparatus 200 further includes: a number-of-times obtaining module and a video enhancement switching processing module, wherein:
The number-of-times obtaining module is configured to obtain the number of times that the scene type continuously changes from the first target scene type to the second target scene type when the scene type corresponding to the acquisition scene of the video to be processed changes from the first target scene type to the second target scene type.
And the video enhancement switching processing module is used for determining an algorithm corresponding to the second target scene type from the preset plurality of algorithms as a second target algorithm when the times reach preset times, and performing video enhancement processing on the video to be processed based on the second target algorithm.
Further, the target scene type is a first target scene type, the target algorithm is a first target algorithm, and the video processing apparatus 200 further includes: a second target algorithm determination module, a weight reduction module, and a weight increasing module, wherein:
and the second target algorithm determining module is used for determining an algorithm corresponding to the second target scene type from the preset multiple algorithms as a second target algorithm when the scene type corresponding to the acquisition scene of the video to be processed is changed from the first target scene type to the second target scene type.
And the weight reduction module is used for performing video enhancement processing on the video to be processed based on the first target algorithm, and gradually reducing the weight corresponding to the output parameter of the first target algorithm.
And the weight increasing module is used for performing video enhancement processing on the video to be processed based on the second target algorithm and gradually increasing the weight corresponding to the output parameter of the second target algorithm when the weight corresponding to the output parameter of the first target algorithm is reduced to a preset weight.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 12, a block diagram of an electronic device 100 according to an embodiment of the present application is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book reader, or another electronic device capable of running applications. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, an image sensor 130, and one or more applications, where the one or more applications may be stored in the memory 120, configured to be executed by the one or more processors 110, and configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat log data), and the like.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 300 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 310 may, for example, be compressed in a suitable form.
To sum up, the video processing method and apparatus, electronic device, and storage medium provided in the embodiments of the present application acquire a video to be processed collected by an image sensor, input the video to be processed into a trained scene detection model, obtain the target scene type corresponding to the acquisition scene of the video to be processed output by the trained scene detection model, determine an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and perform video enhancement processing on the video to be processed based on the target algorithm, where the video enhancement processing improves the image quality of the video to be processed by processing the images in the video to be processed through the target algorithm. By identifying the scene type corresponding to the acquisition scene of the video to be processed and selecting the corresponding algorithm for video enhancement processing, the video enhancement effect can be improved.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A video processing method applied to an electronic device including an image sensor, the method comprising:
acquiring a video to be processed acquired by the image sensor;
inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to an acquisition scene of the video to be processed output by the trained scene detection model;
determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm, wherein the video enhancement processing improves the video quality of the video to be processed by processing images in the video to be processed through the target algorithm.
2. The method according to claim 1, wherein the determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm comprises:
acquiring detection parameters of the image sensor;
verifying the target scene type based on the detection parameters to obtain a verification result;
and when the verification result meets the scene condition corresponding to the target scene type, determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
3. The method according to claim 2, wherein the verifying the target scene type based on the detection parameter to obtain a verification result comprises:
calculating the detection parameters based on a preset formula corresponding to the target scene type to obtain a score value;
comparing the score value with a score threshold corresponding to the target scene type to obtain a comparison result;
when the verification result meets the scene condition corresponding to the target scene type, determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm, including:
and when the comparison result represents that the score value is larger than the score threshold value, determining an algorithm corresponding to the target scene type from the preset multiple algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm.
4. The method according to claim 2, wherein the determining an algorithm corresponding to the target scene type from the preset plurality of algorithms as a target algorithm, and performing video enhancement processing on the video to be processed based on the target algorithm comprises:
determining an algorithm corresponding to the target scene type from the preset plurality of algorithms as the target algorithm;
adjusting parameters of the target algorithm based on the detection parameters and the target scene type to obtain an adjusted target algorithm;
and performing video enhancement processing on the video to be processed based on the adjusted target algorithm.
5. The method of claim 2, wherein the detection parameters comprise at least one of exposure parameters, white balance parameters, and face detection parameters.
6. The method according to claim 1, wherein the inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to a capture scene of the video to be processed output by the trained scene detection model, comprises:
acquiring a first image to be processed from a plurality of frame images included in the video to be processed;
carrying out downsampling processing on the first image to be processed to obtain a target image to be processed;
and inputting the target image to be processed into the trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the first image to be processed output by the trained scene detection model as the target scene type corresponding to the acquisition scene of the video to be processed.
7. The method according to claim 1, wherein the inputting the video to be processed into a trained scene detection model, and obtaining a target scene type corresponding to a capture scene of the video to be processed output by the trained scene detection model, comprises:
sequentially acquiring a frame of image from a plurality of frames of images included in the video to be processed as a second image to be processed according to a preset frame number interval;
and inputting the second image to be processed into the trained scene detection model, and obtaining a target scene type corresponding to the acquisition scene of the second image to be processed output by the trained scene detection model as a target scene type corresponding to the acquisition scene of the video to be processed.
8. The method according to claim 1, wherein after determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm and performing video enhancement processing on the video to be processed based on the target algorithm, the method further comprises:
when the scene type corresponding to the acquisition scene of the video to be processed changes, acquiring the change frequency of the scene type corresponding to the acquisition scene of the video to be processed;
and when the change frequency is greater than a frequency threshold value, continuing to perform video enhancement processing on the video to be processed based on the target algorithm.
9. The method according to claim 1, wherein the target scene type is a first target scene type, the target algorithm is a first target algorithm, and after determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as the target algorithm and performing video enhancement processing on the video to be processed based on the target algorithm, the method further comprises:
when the scene type corresponding to the acquisition scene of the video to be processed is changed from the first target scene type to a second target scene type, acquiring the times of changing the scene type from the first target scene type to the second target scene type continuously;
and when the times reach preset times, determining an algorithm corresponding to the second target scene type from the preset multiple algorithms as a second target algorithm, and performing video enhancement processing on the video to be processed based on the second target algorithm.
10. The method according to claim 1, wherein the target scene type is a first target scene type, the target algorithm is a first target algorithm, and after determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as the target algorithm and performing video enhancement processing on the video to be processed based on the target algorithm, the method further comprises:
when the scene type corresponding to the collection scene of the video to be processed is changed from the first target scene type to a second target scene type, determining an algorithm corresponding to the second target scene type from the preset multiple algorithms as a second target algorithm;
performing video enhancement processing on the video to be processed based on the first target algorithm, and gradually reducing the weight corresponding to the output parameter of the first target algorithm;
and when the weight corresponding to the output parameter of the first target algorithm is reduced to a preset weight, performing video enhancement processing on the video to be processed based on the second target algorithm, and gradually increasing the weight corresponding to the output parameter of the second target algorithm.
11. The method according to any one of claims 1 to 10, wherein before inputting the to-be-processed video into the trained scene detection model and obtaining the target scene type corresponding to the capture scene of the to-be-processed video output by the trained scene detection model, the method further comprises:
acquiring a training data set, wherein the training data set comprises a plurality of training videos and a plurality of training scene types, and the plurality of training videos and the plurality of training scene types are in one-to-one correspondence;
and training the scene detection network by taking the training videos as input parameters and the training scene types as output parameters to obtain a trained scene detection model.
12. The method of claim 11, wherein the plurality of training videos includes a target training video, wherein the plurality of training scene types includes a target training scene type, wherein the target training video corresponds to the target training scene type, and wherein obtaining the training data set includes:
acquiring the target training video and a scene type to be confirmed corresponding to the target training video;
acquiring target detection parameters when the image sensor collects the target training video;
verifying the scene type to be confirmed based on the target detection parameters to obtain a target verification result;
and when the target verification result meets the scene condition corresponding to the scene type to be confirmed, determining the scene type to be confirmed as the target training scene type.
13. A video processing apparatus applied to an electronic device including an image sensor, the apparatus comprising:
the to-be-processed video acquisition module is used for acquiring the to-be-processed video acquired by the image sensor;
a target scene type obtaining module, configured to input the video to be processed into a trained scene detection model, and obtain a target scene type corresponding to an acquisition scene of the video to be processed output by the trained scene detection model;
and the video enhancement processing module is used for determining an algorithm corresponding to the target scene type from a plurality of preset algorithms as a target algorithm and carrying out video enhancement processing on the video to be processed based on the target algorithm, wherein the video enhancement processing improves the video quality of the video to be processed by processing images in the video to be processed through the target algorithm.
14. An electronic device comprising a memory and a processor, the memory coupled to the processor, the memory storing instructions that, when executed by the processor, the processor performs the method of any of claims 1-12.
15. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 12.
CN202110220084.XA 2021-02-26 2021-02-26 Video processing method, video processing device, electronic equipment and storage medium Pending CN113034384A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110220084.XA CN113034384A (en) 2021-02-26 2021-02-26 Video processing method, video processing device, electronic equipment and storage medium
PCT/CN2022/072089 WO2022179335A1 (en) 2021-02-26 2022-01-14 Video processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110220084.XA CN113034384A (en) 2021-02-26 2021-02-26 Video processing method, video processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113034384A true CN113034384A (en) 2021-06-25

Family

ID=76462040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110220084.XA Pending CN113034384A (en) 2021-02-26 2021-02-26 Video processing method, video processing device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113034384A (en)
WO (1) WO2022179335A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894768B (en) * 2023-09-11 2023-11-21 成都航空职业技术学院 Target detection optimization method and system based on artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034384A (en) * 2021-02-26 2021-06-25 Oppo广东移动通信有限公司 Video processing method, video processing device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200151858A1 (en) * 2017-07-27 2020-05-14 SZ DJI Technology Co., Ltd. Image contrast enhancement method and device, and storage medium
CN109525901A (en) * 2018-11-27 2019-03-26 Oppo广东移动通信有限公司 Method for processing video frequency, device, electronic equipment and computer-readable medium
CN109685726A (en) * 2018-11-27 2019-04-26 Oppo广东移动通信有限公司 Scene of game processing method, device, electronic equipment and storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111581433A (en) * 2020-05-18 2020-08-25 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and computer readable medium

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022179335A1 (en) * 2021-02-26 2022-09-01 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and storage medium
CN113780252A (en) * 2021-11-11 2021-12-10 深圳思谋信息科技有限公司 Training method of video processing model, video processing method and device
CN114266749A (en) * 2021-12-23 2022-04-01 上海卓繁信息技术股份有限公司 Trident Net based image processing method
CN114363579A (en) * 2022-01-21 2022-04-15 中国铁塔股份有限公司 Monitoring video sharing method and device and electronic equipment
CN114363579B (en) * 2022-01-21 2024-03-19 中国铁塔股份有限公司 Method and device for sharing monitoring video and electronic equipment
CN116246209A (en) * 2023-03-09 2023-06-09 彩虹鱼科技(广东)有限公司 Wide-angle lens biological target detection method based on offset convolution kernel
CN116246209B (en) * 2023-03-09 2024-02-13 彩虹鱼科技(广东)有限公司 Wide-angle lens biological target detection method based on offset convolution kernel
CN116866638A (en) * 2023-07-31 2023-10-10 联通沃音乐文化有限公司 Intelligent video processing method and system based on images
CN116866638B (en) * 2023-07-31 2023-12-15 联通沃音乐文化有限公司 Intelligent video processing method and system based on images
CN116958331A (en) * 2023-09-20 2023-10-27 四川蜀天信息技术有限公司 Sound and picture synchronization adjusting method and device and electronic equipment
CN116958331B (en) * 2023-09-20 2024-01-19 四川蜀天信息技术有限公司 Sound and picture synchronization adjusting method and device and electronic equipment

Also Published As

Publication number Publication date
WO2022179335A1 (en) 2022-09-01

Similar Documents

Publication Publication Date Title
WO2022179335A1 (en) Video processing method and apparatus, electronic device, and storage medium
EP3611915B1 (en) Method and apparatus for image processing
CN109685726B (en) Game scene processing method and device, electronic equipment and storage medium
Rao et al. A Survey of Video Enhancement Techniques.
US11127117B2 (en) Information processing method, information processing apparatus, and recording medium
JP7266672B2 (en) Image processing method, image processing apparatus, and device
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
US10257449B2 (en) Pre-processing for video noise reduction
US20230080693A1 (en) Image processing method, electronic device and readable storage medium
WO2018136373A1 (en) Image fusion and hdr imaging
CN109640169B (en) Video enhancement control method and device and electronic equipment
CN112889069B (en) Methods, systems, and computer readable media for improving low light image quality
CN109587558B (en) Video processing method, video processing device, electronic equipment and storage medium
CN109618228B (en) Video enhancement control method and device and electronic equipment
WO2020108010A1 (en) Video processing method and apparatus, electronic device and storage medium
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
WO2023137956A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN115984570A (en) Video denoising method and device, storage medium and electronic device
CN111192213A (en) Image defogging adaptive parameter calculation method, image defogging method and system
CN113011433A (en) Filtering parameter adjusting method and device
CN114584831B (en) Video optimization processing method, device, equipment and storage medium for improving video definition
CN114302226B (en) Intelligent cutting method for video picture
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN115719314A (en) Smear removing method, smear removing device and electronic equipment
WO2021179764A1 (en) Image processing model generating method, processing method, storage medium, and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination