CN117793513A - Video processing method and device

Publication number: CN117793513A
Application number: CN202311811769.7A
Applicant: Vivo Mobile Communication Co., Ltd.
Inventor: 刘杰 (Liu Jie)
Legal status: Pending
Abstract

The application discloses a video processing method and apparatus, belonging to the technical field of data processing. The method comprises the following steps: acquiring pixel point information in a first image frame in a first video, wherein the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1; performing frame stabilization processing on the pixel point information in the first image frame based on pixel point information in a second image frame in the first video to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames; determining a light spot area in the first image frame based on the second pixel point information; and performing light spot enhancement processing on the light spot area in each of the N image frames to obtain a second video.

Description

Video processing method and device
Technical Field
The application belongs to the technical field of data processing, and particularly relates to a video processing method and apparatus.
Background
With the continuous popularization of electronic devices, more and more users record video through electronic devices. However, due to the limitations of the lens hardware in electronic devices, it is difficult for an electronic device to directly capture, through video recording, a video with a strong blurring effect and a light spot effect.
In the related art, most video recording modes of camera applications in electronic devices support a blurring function, so the electronic device can perform blurring processing on a recorded video through the blurring function of the camera application during recording, to obtain a blurred video. If the user then wants the blurred video to have a light spot enhancement effect, the blurred video can be processed with a conventional light spot enhancement algorithm to obtain a blurred video with a light spot effect.
However, when the electronic device processes the blurred video with a conventional light spot enhancement algorithm, the light spot area that needs light spot enhancement is generally determined from the highlight pixel points of each image frame in the blurred video. Because there may be large image differences between the image frames of the blurred video, the positions of the light spot areas differ from frame to frame, so the light spots in the final processed video flicker severely. As a result, the light spot effect of the finally obtained blurred video is poor.
Disclosure of Invention
The embodiments of the present application aim to provide a video processing method and apparatus, which can improve the light spot effect of a blurred video.
In a first aspect, an embodiment of the present application provides a video processing method, the method comprising: acquiring pixel point information in a first image frame in a first video, wherein the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1; performing frame stabilization processing on the pixel point information in the first image frame based on pixel point information in a second image frame in the first video to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames; determining a light spot area in the first image frame based on the second pixel point information; and performing light spot enhancement processing on the light spot area in each of the N image frames to obtain a second video.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: an acquisition module, a processing module and a determination module. The acquisition module is configured to acquire pixel point information in a first image frame in a first video, wherein the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1. The processing module is configured to perform frame stabilization processing on the pixel point information in the first image frame acquired by the acquisition module based on pixel point information in a second image frame in the first video, to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames. The determination module is configured to determine the light spot area in the first image frame based on the second pixel point information obtained by the processing module. The processing module is further configured to perform light spot enhancement processing on the light spot area in each of the N image frames to obtain a second video.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, pixel point information in a first image frame in a first video is acquired, the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1; frame stabilization processing is performed on the pixel point information in the first image frame based on pixel point information in a second image frame in the first video to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames; a light spot area in the first image frame is determined based on the second pixel point information; and light spot enhancement processing is performed on the light spot area in each of the N image frames to obtain a second video. In this solution, the pixel point information of the image frame preceding the current image frame is used to perform frame stabilization processing on the current image frame, so that the pixel point information in the preceding image frame can be mapped into the current image frame. This reduces the image difference between the pixel point information of adjacent image frames, which in turn reduces the positional difference of the light spot areas across the image frames of the first video and alleviates the flickering of light spots in the first video, thereby improving the light spot effect of the blurred video.
Drawings
Fig. 1 is a first schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 2 is a second schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 3 is a third schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 4 is a fourth schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 5 is a fifth schematic diagram of a video processing method according to an embodiment of the present application;
Fig. 6A is a first example diagram of a video editing interface according to an embodiment of the present application;
Fig. 6B is a second example diagram of a video editing interface according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 8 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
Fig. 9 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The terms "at least one," "at least one," and the like in the description and in the claims of the present application mean that they encompass any one, any two, or a combination of two or more of the objects. For example, at least one of a, b, c (item) may represent: "a", "b", "c", "a and b", "a and c", "b and c" and "a, b and c", wherein a, b, c may be single or plural. Similarly, the term "at least two" means two or more, and the meaning of the expression is similar to the term "at least one".
The video processing method and the device provided by the embodiment of the application are described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
The video processing method and apparatus provided by the embodiments of the present application can be applied to scenarios in which the light spot effect of a blurred video is enhanced.
With the development of electronic devices, more and more users record video through electronic devices. However, due to the limitations of the lens hardware in electronic devices, it is difficult for an electronic device to directly capture, through video recording, images with a strong blurring effect and light spots.
At present, most electronic devices only support video blurring and lack a light spot effect. A conventional light spot enhancement algorithm detects light spots in an image, that is, detects highlight spots in the image, then screens out areas that do not meet the light spot formation requirement by means of connected-domain detection, and finally performs light spot enhancement on the remaining light spot areas, so as to obtain a light spot effect with clear boundaries and prominent brightness.
However, this light spot enhancement method is not suitable for video blurring. When the electronic device processes a blurred video with a conventional light spot enhancement algorithm, the light spot area that needs light spot enhancement is generally determined from the highlight pixel points of each image frame in the blurred video, and because there may be large image differences between the image frames of the blurred video, the positions of the light spot areas differ from frame to frame, so the light spots in the final processed video flicker severely. In addition, the performance overhead of the connected-domain detection algorithm is relatively large. These performance and flicker problems limit the feasibility of directly transplanting a still-photo blurring spot enhancement algorithm to video.
In the video processing method and apparatus provided by the embodiments of the present application, the pixel point information of the image frame preceding the current image frame is used to perform frame stabilization processing on the current image frame, so that the pixel point information in the preceding image frame can be mapped into the current image frame. This reduces the image difference between the pixel point information of adjacent image frames, which in turn reduces the positional difference of the light spot areas across the image frames of the first video and alleviates the flickering of light spots in the first video, thereby improving the light spot effect of the blurred video.
The main execution body of the video processing method provided in the embodiment of the present application may be a video processing apparatus, and the video processing apparatus may be an electronic device or a functional module in the electronic device. The technical solution provided in the embodiments of the present application will be described below by taking an electronic device as an example.
An embodiment of the present application provides a video processing method, and fig. 1 shows a flowchart of the video processing method provided in the embodiment of the present application. As shown in fig. 1, the video processing method provided in the embodiment of the present application may include the following steps 201 to 204.
Step 201, the electronic device obtains pixel point information in a first image frame in a first video.
In this embodiment of the present application, the first video is a blurred video, the first video includes N image frames, and N is an integer greater than 1.
In this embodiment of the present application, the first image frame is one of N image frames in the first video.
Optionally, in the embodiment of the present application, the first video may be a video shot by the electronic device; or a video downloaded by the electronic device from the network through a third-party application, such as a browser application; or a video transmitted by another electronic device.
In one example, the electronic device may record video through a blurring function in the camera application to obtain a first video after blurring.
In another example, the electronic device may perform blurring processing on the video acquired by the electronic device through video editing software to obtain a first video after blurring.
In this embodiment of the present application, the pixel point information in the first image frame is pixel point information of all pixel points in the first image frame.
Optionally, in an embodiment of the present application, the pixel point information includes at least one of the following: pixel luminance information, pixel position information, pixel gray level information, pixel hue information, pixel saturation information, and the like.
In the embodiment of the application, the electronic device may obtain the pixel point information in the first image frame in the first video through the first algorithm.
The first algorithm may be an artificial intelligence (Artificial Intelligence, AI) algorithm or a neural network algorithm, for example.
It should be noted that the electronic device may obtain the pixel point information of each pixel point in the first image frame through the first algorithm; to avoid repetition, this is not described here again.
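For illustration only, the snippet below sketches how per-pixel brightness information could be extracted from an image frame. The first algorithm itself is not specified in the embodiments, so a plain gray-scale conversion stands in for it here; the use of OpenCV (cv2) and the function name are assumptions:

```python
# A minimal sketch of extracting per-pixel brightness information from an
# image frame; the gray-scale conversion is a stand-in for the unspecified
# first algorithm, and all names here are illustrative.
import cv2
import numpy as np

def pixel_brightness_info(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # pixel gray-scale information
    return gray.astype(np.float32) / 255.0  # pixel brightness normalized to [0, 1]
```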
Alternatively, in the embodiment of the present application, after obtaining the pixel point information in each image frame in the first video, the electronic device may store the pixel point information in each image frame in the first video in the electronic device.
Optionally, in the embodiment of the present application, the electronic device may split the first video to obtain N image frames.
For example, the electronic device may split the first video through an image frame splitting algorithm to obtain N image frames.
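For illustration only, a minimal sketch of such frame splitting is shown below; the use of OpenCV and the function name are assumptions rather than the splitting algorithm actually used:

```python
# A minimal sketch of splitting a video into its image frames, assuming
# OpenCV is available; names are illustrative.
import cv2

def split_video_into_frames(video_path):
    frames = []
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()  # read the next image frame
        if not ok:
            break  # no frames left: the whole video has been split
        frames.append(frame)
    capture.release()
    return frames  # the N image frames of the first video
```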
Step 202, the electronic device performs frame stabilization processing on the pixel point information in the first image frame based on the pixel point information in the second image frame in the first video, so as to obtain the second pixel point information.
In this embodiment of the present application, the second image frame is the image frame preceding the first image frame in the first video.
In this embodiment of the present application, the pixel point information in the second image frame is pixel point information of all pixel points in the second image frame.
In this embodiment of the present application, the electronic device may obtain the pixel point information in the second image frame through the first algorithm.
In this embodiment of the present application, the electronic device may perform frame stabilization processing on the pixel point information in the first image frame according to the difference in pixel brightness between the pixel points in the first image frame and the pixel points in the second image frame, to obtain the second pixel point information.
That is, the second pixel information is pixel information after the frame stabilization process is performed on the pixel information in the first image frame.
It should be noted that, for each of the N image frames in the first video, the electronic device may determine the second pixel point information corresponding to each of the N image frames in the above manner.
Optionally, in the embodiment of the present application, when the electronic device splits the first video into N image frames, the N image frames may be numbered according to their order in the first video, so that the electronic device knows that the second image frame is the image frame preceding the first image frame.
Step 203, the electronic device determines a spot area in the first image frame based on the second pixel information.
In this embodiment of the present application, the electronic device may determine the spot area in the first image frame by using the pixel brightness in the second pixel information.
It should be noted that, for each of the N image frames in the first video, the electronic device may determine the spot area in each image frame in the above manner.
Optionally, in the embodiment of the present application, in a case where the first image frame is the first of the N image frames in the first video, there is no preceding image frame, and the light spot area in the first image frame may be determined directly from the brightness of the pixel points in the first image frame.
Step 204, the electronic device performs spot enhancement processing on the spot area in each of the N image frames to obtain a second video.
In this embodiment of the present application, the electronic device may perform a spot enhancement process on each of the N image frames to obtain N image frames after the spot enhancement process, and then perform a merging process on the N image frames after the spot enhancement process to obtain the second video.
In the embodiment of the application, the electronic device may perform the spot enhancement processing on the spot area in each image frame based on the brightness of the pixel point of the spot area in each image frame, so as to obtain each image frame after the spot enhancement processing.
For example, the electronic device may perform weighted average processing on the pixel points in one image frame to obtain a weighted average pixel brightness, and replace the pixel brightness of the spot area with the weighted average pixel brightness, thereby performing spot enhancement processing on the pixel points corresponding to the spot area in that image frame and obtaining a spot-enhanced image frame.
It should be noted that, for each of the N image frames, the electronic device may obtain the N spot-enhanced image frames through the above method; to avoid repetition, this is not described here again.
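For illustration only, the sketch below shows the overall shape of step 204: each frame is spot-enhanced and the processed frames are merged into the second video. The use of OpenCV, the output settings, and the helper enhance_spot_area (standing in for the spot enhancement described above) are assumptions:

```python
# A minimal sketch of step 204, assuming OpenCV; enhance_spot_area is a
# hypothetical helper for the per-frame spot enhancement described above.
import cv2

def build_second_video(frames, out_path, fps=30.0):
    height, width = frames[0].shape[:2]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # assumed output codec
    writer = cv2.VideoWriter(out_path, fourcc, fps, (width, height))
    for frame in frames:
        writer.write(enhance_spot_area(frame))  # spot enhancement, then merge
    writer.release()  # the merged result is the second video
```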
In the video processing method provided by the embodiment of the application, pixel point information in a first image frame in a first video is acquired, the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1; frame stabilization processing is performed on the pixel point information in the first image frame based on pixel point information in a second image frame in the first video to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames; a light spot area in the first image frame is determined based on the second pixel point information; and light spot enhancement processing is performed on the light spot area in each of the N image frames to obtain a second video. In this solution, the pixel point information of the image frame preceding the current image frame is used to perform frame stabilization processing on the current image frame, so that the pixel point information in the preceding image frame can be mapped into the current image frame. This reduces the image difference between the pixel point information of adjacent image frames, which in turn reduces the positional difference of the light spot areas across the image frames of the first video and alleviates the flickering of light spots in the first video, thereby improving the light spot effect of the blurred video.
Alternatively, in the embodiment of the present application, as shown in fig. 2 in conjunction with fig. 1, the above step 202 may be specifically implemented by the following step 202 a.
Step 202a, the electronic device performs frame stabilization processing on the pixel point information of a first pixel point based on the pixel point information of a second pixel point in the second image frame and luminance difference information between the second pixel point and the first pixel point in the first image frame, to obtain the second pixel point information.
In this embodiment of the present application, the first pixel point is the pixel point in the first image frame whose position matches that of the second pixel point, and the second pixel point is one of the pixel points in the second image frame.
In this embodiment of the present application, the electronic device may perform frame stabilization processing on pixel information of the first pixel through luminance difference information between pixel luminance information of the second pixel in the second image frame and pixel luminance information of the first pixel in the first image frame, to obtain the second pixel information.
It should be noted that, for each pixel point in the first image frame, the electronic device may obtain the second pixel point information corresponding to each pixel point in the first image frame through the above method.
For example, the electronic device may obtain the luminance difference weight between the second pixel point and the first pixel point through the following formula 1, which may specifically be:
q(px,py) = exp(-|s_m(px,py) - s_{m-1→m}(px,py)|)    (1)
where q(px,py) is the luminance difference weight between the second pixel point and the first pixel point, s_m(px,py) is the pixel brightness value of the first pixel point in the first image frame (frame m), s_{m-1→m}(px,py) is the pixel brightness value of the second pixel point in the second image frame (frame m-1) mapped onto the first pixel point in the first image frame by optical flow, and exp denotes the exponential function with base e.
In the embodiment of the application, the electronic device maps the pixel brightness value of the second pixel point onto the first pixel point in the first image frame through optical flow, which keeps the highlight area in the first image frame consistent with the highlight area in the second image frame, and avoids the severe flickering of light spots in the final processed video that would result from the light spot areas being in different positions in each image frame of the blurred video.
Optionally, in the embodiment of the present application, the luminance difference information is a first luminance difference weight between the second pixel point and a first pixel point in the first image frame.
Illustratively, in conjunction with fig. 2, as shown in fig. 3, the above step 202a may be implemented specifically by steps 202a1 to 202a3 described below.
Step 202a1, the electronic device weights the pixel information of the second pixel based on the first brightness difference weight, so as to obtain the weighted pixel information of the second pixel.
In this embodiment of the present application, the electronic device may multiply the pixel brightness value of the second pixel point by the first luminance difference weight to obtain the weighted pixel brightness value of the second pixel point.
For example, the electronic device may obtain the weighted pixel brightness value of the second pixel by using the following formula 2, which may specifically be:
d_{m-1}(px,py) * q(px,py)    (2)
where d_{m-1}(px,py) is the pixel brightness value of the second pixel point in the second image frame, and q(px,py) is the first luminance difference weight corresponding to the second pixel point.
Step 202a2, the electronic device weights the pixel information of the first pixel based on the second brightness difference weight, so as to obtain the weighted pixel information of the first pixel.
In this embodiment of the present application, the second luminance difference weight is a luminance difference weight obtained by subtracting the first luminance difference weight from 1.
In this embodiment of the present application, the electronic device may multiply the pixel brightness value of the first pixel point by the second luminance difference weight to obtain the weighted pixel brightness value of the first pixel point.
For example, the electronic device may obtain the weighted pixel brightness value of the first pixel through the following formula 3, which may specifically be:
d_m(px,py) * (1 - q(px,py))    (3)
where d_m(px,py) is the pixel brightness value of the first pixel point in the first image frame, and (1 - q(px,py)) is the second luminance difference weight corresponding to the first pixel point.
Step 202a3, the electronic device adds the weighted pixel point information of the second pixel point to the weighted pixel point information of the first pixel point to obtain second pixel point information.
For example, the electronic device may obtain the second pixel information through the following formula 4, which may specifically be:
d_m(px,py) = d_{m-1}(px,py) * q(px,py) + d_m(px,py) * (1 - q(px,py))    (4)
where the d_m(px,py) obtained on the left-hand side of formula 4 is the second pixel point information.
It should be noted that, for each pixel point in the first image frame, the electronic device may obtain the second pixel point information corresponding to each pixel point through the above method.
In the embodiment of the application, the electronic device weights the pixel point information of the first pixel point to keep the highlight area in the first image frame consistent with the highlight area in the second image frame, thereby avoiding the severe flickering of light spots in the final processed video caused by the light spot areas being in different positions in each image frame of the blurred video.
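For illustration only, the sketch below applies formulas 1 to 4 to whole frames at once. It assumes NumPy arrays of brightness values in [0, 1] and assumes the optical-flow mapping of the second image frame onto the first image frame has already been computed; names are illustrative:

```python
# A minimal sketch of the frame stabilization in formulas (1)-(4); curr is
# the first (current) image frame's brightness and prev_warped is the second
# (previous) image frame's brightness already mapped onto the current frame
# by optical flow, both as float arrays in [0, 1].
import numpy as np

def stabilize_frame(curr, prev_warped):
    # Formula (1): luminance difference weight q in (0, 1]; the closer the
    # two brightness values, the more weight the previous frame receives.
    q = np.exp(-np.abs(curr - prev_warped))
    # Formulas (2)-(4): blend the previous and current frames pixel by pixel
    # to obtain the second pixel point information.
    return prev_warped * q + curr * (1.0 - q)
```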
Alternatively, in the embodiment of the present application, as shown in fig. 4 in conjunction with fig. 1, the above step 203 may be specifically implemented by the following step 203 a.
Step 203a, the electronic device samples pixel points in the first image frame, starting from the center point of the first light spot template, based on the template parameters corresponding to the first light spot template and the second pixel point information, and determines the light spot area in the first image frame.
In an embodiment of the present application, the template parameters include at least one of the following: the sampling point positions, the number of sampling point positions, and the sampling weights of the sampling points; different light spot templates correspond to different light spot shapes.
In this embodiment of the present application, the electronic device performs a weighted average over the pixel points in the first image frame, starting from the center point of the first light spot template, based on the template parameters corresponding to the first light spot template and the second pixel point information, to obtain the highlight area in the first image frame and thereby determine the light spot area in the first image frame.
For example, the electronic device may perform the weighted average over the pixel points in the first image frame, starting from the center point of the first light spot template, through the following formula 5, which may specifically be:
d(px,py) = ( Σ_{kx=-M}^{M} Σ_{ky=-N}^{N} s(px+kx, py+ky) * w(px+kx, py+ky) ) / ( Σ_{kx=-M}^{M} Σ_{ky=-N}^{N} w(px+kx, py+ky) )    (5)
where d(px,py) represents the pixel brightness value at coordinate point (px,py) in the weighted-average image, s(px+kx, py+ky) represents the pixel value at coordinate point (px+kx, py+ky) in the original image, kx represents the offset on the abscissa, ky represents the offset on the ordinate, M represents the maximum offset of the filter kernel on the abscissa, N represents the maximum offset of the filter kernel on the ordinate, and w(px+kx, py+ky) represents the weighting weight at coordinate point (px+kx, py+ky).
Here, w(px+kx, py+ky) = (max(s(px+kx, py+ky) - a, 0))^2 + b, where a and b are constant parameters for controlling the degree of light spot enhancement.
In the embodiment of the application, the electronic device determines the light spot area in the first image frame through a weighted average, which can improve the processing efficiency of the electronic device.
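For illustration only, the sketch below implements the weighted average of formula 5 over a full rectangular neighborhood. The use of NumPy, the wrap-around border handling of np.roll, and the default values of a and b are assumptions; a real implementation would sample only the positions and weights given by the first light spot template:

```python
# A minimal sketch of the weighted average in formula (5); s is the
# brightness image as a float array in [0, 1], and a, b control the degree
# of spot enhancement. Borders wrap around here purely for brevity.
import numpy as np

def spot_weighted_average(s, M, N, a=0.8, b=1e-4):
    # Weighting weight of formula (5): only pixels brighter than the
    # threshold a contribute strongly, so highlights dominate the average.
    w = np.maximum(s - a, 0.0) ** 2 + b
    weighted_sum = np.zeros_like(s)
    weight_sum = np.zeros_like(s)
    for kx in range(-M, M + 1):      # offsets on the abscissa
        for ky in range(-N, N + 1):  # offsets on the ordinate
            s_shift = np.roll(s, shift=(ky, kx), axis=(0, 1))
            w_shift = np.roll(w, shift=(ky, kx), axis=(0, 1))
            weighted_sum += s_shift * w_shift
            weight_sum += w_shift
    return weighted_sum / weight_sum  # b > 0 keeps the denominator non-zero
```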
Optionally, in an embodiment of the present application, as shown in fig. 5 in conjunction with fig. 4, before step 203a, the video processing method provided in the embodiment of the present application further includes the following steps 301 to 304.
Step 301, an electronic device receives a first input.
Optionally, in the embodiment of the present application, the first input may be a click input, a slide input, a preset-track input, or the like performed by the user on the light spot mode identifier. Specifically, this may be determined according to actual usage requirements, which is not limited in the embodiments of the present application.
Optionally, in the embodiment of the present application, the first input may be a user input received by the electronic device while the camera application is running.
Step 302, the electronic device responds to the first input to display at least one spot template identifier.
In this embodiment of the present application, one spot template identifier in the at least one spot template identifier corresponds to one spot template.
Optionally, in an embodiment of the present application, in response to the first input, the electronic device may display at least one spot template identifier in a video preview interface.
Optionally, in an embodiment of the present application, in response to the first input, the electronic device may display at least one spot template identifier in the video editing interface.
Step 303, the electronic device receives a second input of a first spot template identifier of the at least one spot template identifier.
In this embodiment of the present application, the second input is used to select a first spot template identifier from the at least one spot template identifier.
Optionally, in this embodiment of the present application, the second input may be a click input, a slide input, a preset-track input, or the like performed by the user on the first spot template identifier. Specifically, this may be determined according to actual usage requirements, which is not limited in the embodiments of the present application.
Optionally, in an embodiment of the present application, the spot templates corresponding to the identifiers in the at least one spot template identifier may have different shapes.
Illustratively, the shape of a spot template may be any of the following: triangle, circle, hexagon, etc. Specifically, this may be determined according to actual usage requirements, which is not limited in the embodiments of the present application.
For example, as illustrated in fig. 6A, the electronic device may, through user input, display 3 spot template identifiers in the video editing interface 10: a hexagonal spot template identifier, a circular spot template identifier, and a pentagonal spot template identifier. The user may then input on the hexagonal spot template identifier; as illustrated in fig. 6B, the electronic device displays the spot template corresponding to the hexagonal spot template identifier on the highlight area of each image frame in the first video. It should be noted that fig. 6B takes only one image frame as an example.
Step 304, the electronic device responds to the second input, and takes the light spot template indicated by the first light spot template identifier as the first light spot template.
In this embodiment of the present application, the electronic device may map a shape corresponding to the first light spot template to each image frame in the first video.
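For illustration only, the sketch below builds a binary mask for one possible light spot template shape (a regular hexagon), which could then be mapped onto the highlight areas of each image frame; the use of OpenCV and all names are assumptions:

```python
# A minimal sketch of building a light spot template mask as a filled
# regular polygon (sides=6 gives a hexagon); names are illustrative.
import cv2
import numpy as np

def spot_template_mask(size, sides=6):
    mask = np.zeros((size, size), dtype=np.uint8)
    center = size / 2.0
    radius = size / 2.0 - 1.0
    angles = 2.0 * np.pi * np.arange(sides) / sides  # polygon vertex angles
    pts = np.stack([center + radius * np.cos(angles),
                    center + radius * np.sin(angles)], axis=1)
    cv2.fillPoly(mask, [pts.astype(np.int32)], 255)  # filled template shape
    return mask  # 255 inside the spot shape, 0 outside
```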
In the embodiment of the application, the electronic equipment can determine the first light spot template through the input of the user, so that the flexibility of the electronic equipment in determining the first light spot template is improved.
It should be noted that, in the video processing method provided in the embodiments of the present application, the execution body may be a video processing apparatus, an electronic device, or a functional module or entity in an electronic device. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided in the embodiments of the present application.
Fig. 7 shows a schematic diagram of one possible configuration of a video processing apparatus involved in an embodiment of the present application. As shown in fig. 7, the video processing apparatus 70 may include: an acquisition module 71, a processing module 72 and a determination module 73.
The acquiring module 71 is configured to acquire pixel point information in a first image frame in a first video, where the first video is a blurred video, the first video includes N image frames, and N is an integer greater than 1. The processing module 72 is configured to perform frame stabilization processing on the pixel point information in the first image frame acquired by the acquiring module 71 based on pixel point information in a second image frame in the first video, to obtain second pixel point information, where the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames. The determining module 73 is configured to determine a light spot area in the first image frame based on the second pixel point information obtained by the processing module 72. The processing module 72 is further configured to perform spot enhancement processing on the spot area in each of the N image frames, to obtain a second video.
In one possible implementation manner, the processing module 72 is specifically configured to perform frame stabilization processing on the pixel point information of a first pixel point based on the pixel point information of a second pixel point in the second image frame and luminance difference information between the second pixel point and the first pixel point in the first image frame, to obtain the second pixel point information; the first pixel point is the pixel point in the first image frame whose position matches that of the second pixel point, and the second pixel point is one of the pixel points in the second image frame.
In one possible implementation manner, the luminance difference information is a first luminance difference weight between the second pixel point and a first pixel point in the first image frame; the processing module 72 is specifically configured to weight the pixel information of the second pixel based on the first luminance difference weight, to obtain weighted pixel information of the second pixel; weighting the pixel point information of the first pixel point based on a second brightness difference weight, so as to obtain the weighted pixel point information of the first pixel point, wherein the second brightness difference weight is a brightness difference weight obtained by subtracting the first brightness difference weight from 1; and adding the weighted pixel point information of the second pixel point and the weighted pixel point information of the first pixel point to obtain second pixel point information.
In one possible implementation manner, the determining module 73 is specifically configured to sample a pixel point in the first image frame from a center point of the first light spot template based on the template parameter corresponding to the first light spot template and the second pixel point information, and determine a light spot area in the first image frame; wherein the template parameters include at least one of: sampling point positions, the number of sampling point positions and sampling weights of the sampling points; different spot templates correspond to different spot shapes.
In one possible implementation manner, the video processing apparatus 70 provided in the embodiment of the present application further includes: a receiving module and a display module. The receiving module is configured to receive a first input before the determining module 73 samples the second pixel information based on the template parameter corresponding to the first spot template and determines the spot area in the first image frame. And the display module is used for responding to the first input received by the receiving module and displaying at least one spot template identifier, wherein one spot template identifier corresponds to one spot template. The receiving module is further configured to receive a second input of a first spot template identifier of the at least one spot template identifier. The determining module 73 is further configured to, in response to the second input received by the receiving module, use the spot template indicated by the first spot template identifier as the first spot template.
The embodiment of the application provides a video processing apparatus. The apparatus performs frame stabilization processing on the current image frame using the pixel point information of the image frame preceding it, so that the pixel point information in the preceding image frame can be mapped into the current image frame. This reduces the image difference between the pixel point information of adjacent image frames, which in turn reduces the positional difference of the light spot areas across the image frames of the first video and alleviates the flickering of light spots in the first video, thereby improving the light spot effect of the blurred video.
The video processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, which is not specifically limited in the embodiments of the present application.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The video processing device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 8, the embodiment of the present application further provides an electronic device 90, including a processor 91 and a memory 92, where a program or an instruction capable of being executed on the processor 91 is stored in the memory 92, and the program or the instruction when executed by the processor 91 implements each step of the embodiment of the video processing method, and the steps can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and the power source may be logically coupled to the processor 110 via a power management system, so that functions such as charging management, discharging management, and power consumption management are performed via the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components, which is not described in detail here.
The processor 110 is configured to acquire pixel point information in a first image frame in a first video, where the first video is a blurred video, the first video includes N image frames, and N is an integer greater than 1; perform frame stabilization processing on the pixel point information in the first image frame based on pixel point information in a second image frame in the first video, to obtain second pixel point information, where the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames; determine a light spot area in the first image frame based on the second pixel point information; and perform spot enhancement processing on the spot area in each of the N image frames to obtain a second video.
The embodiment of the application provides an electronic device. The electronic device performs frame stabilization processing on the current image frame using the pixel point information of the image frame preceding it, so that the pixel point information in the preceding image frame can be mapped into the current image frame. This reduces the image difference between the pixel point information of adjacent image frames, which in turn reduces the positional difference of the light spot areas across the image frames of the first video and alleviates the flickering of light spots in the first video, thereby improving the light spot effect of the blurred video.
Optionally, in the embodiment of the present application, the processor 110 is specifically configured to perform frame stabilization processing on the pixel point information of a first pixel point based on the pixel point information of a second pixel point in the second image frame and luminance difference information between the second pixel point and the first pixel point in the first image frame, to obtain the second pixel point information; the first pixel point is the pixel point in the first image frame whose position matches that of the second pixel point, and the second pixel point is one of the pixel points in the second image frame.
Optionally, in the embodiment of the present application, the luminance difference information is a first luminance difference weight between the second pixel point and a first pixel point in the first image frame; the processor 110 is specifically configured to weight the pixel information of the second pixel based on the first luminance difference weight, to obtain weighted pixel information of the second pixel; weighting the pixel point information of the first pixel point based on a second brightness difference weight, so as to obtain the weighted pixel point information of the first pixel point, wherein the second brightness difference weight is a brightness difference weight obtained by subtracting the first brightness difference weight from 1; and adding the weighted pixel point information of the second pixel point and the weighted pixel point information of the first pixel point to obtain second pixel point information.
Optionally, in this embodiment of the present application, the processor 110 is specifically configured to sample a pixel point in the first image frame from a center point of the first light spot template based on a template parameter corresponding to the first light spot template and the second pixel point information, and determine a light spot area in the first image frame; wherein the template parameters include at least one of: sampling point positions, the number of sampling point positions and sampling weights of the sampling points; different spot templates correspond to different spot shapes.
Optionally, in this embodiment of the present application, the user input unit 107 is configured to receive the first input before sampling the second pixel information based on the template parameter corresponding to the first light spot template to determine the light spot area in the first image frame. And a display unit 106, configured to display at least one spot template identifier in response to the first input, where one spot template identifier corresponds to one spot template. The user input unit 107 is further configured to receive a second input of a first spot template identifier of the at least one spot template identifiers. The processor 110 is further configured to use, in response to the second input, the spot template indicated by the first spot template identifier as the first spot template.
The electronic device provided in the embodiment of the present application can implement each process implemented by the above method embodiment, and can achieve the same technical effects, so that repetition is avoided, and details are not repeated here.
The beneficial effects of the various implementation manners in this embodiment may be specifically referred to the beneficial effects of the corresponding implementation manners in the foregoing method embodiment, and in order to avoid repetition, the description is omitted here.
It should be appreciated that, in the embodiments of the present application, the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 109 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor and a modem processor, where the application processor primarily handles operations involving the operating system, user interface, application programs, and the like, and the modem processor, such as a baseband processor, primarily handles wireless communication signals. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implement each process of the embodiment of the method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, implementing each process of the above method embodiment, and achieving the same technical effect, so as to avoid repetition, and not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video processing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. A video processing method, the method comprising:
acquiring pixel point information in a first image frame in a first video, wherein the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1;
performing frame stabilization processing on pixel point information in the first image frame based on the pixel point information in a second image frame in the first video to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one frame of the N image frames;
determining a spot area in the first image frame based on the second pixel point information;
and performing spot enhancement processing on the spot area in each of the N image frames to obtain a second video.
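For orientation before the dependent claims, the following is a minimal Python/NumPy sketch of the claim 1 pipeline. It is an illustrative reading, not the patented implementation: all function names are hypothetical, and the fixed blend weight and brightness threshold merely stand in for the luminance-difference weighting and template sampling elaborated in claims 2 to 5.

```python
import numpy as np

def stabilize(curr, prev, w1=0.5):
    # Claim 3 blend with a fixed weight as a stand-in for the
    # luminance-difference weight derived in claims 2-3.
    return w1 * prev + (1.0 - w1) * curr

def find_spot_area(frame, threshold=220.0):
    # Stand-in for claim 4's template sampling: mark bright pixels.
    return frame.mean(axis=2) > threshold

def enhance_spots(frame, mask, gain=1.3):
    # Brighten pixels inside the spot area; clamp to the valid range.
    out = frame.copy()
    out[mask] *= gain
    return np.clip(out, 0.0, 255.0)

def process_video(frames):
    # Claim 1 pipeline: stabilize each frame against its predecessor,
    # locate the spot area, enhance it, and collect the second video.
    prev, output = None, []
    for frame in frames:                 # N HxWx3 uint8 frames
        f = frame.astype(np.float32)
        stabilized = f if prev is None else stabilize(f, prev)
        output.append(enhance_spots(stabilized, find_spot_area(stabilized)))
        prev = stabilized
    return output
```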
2. The method according to claim 1, wherein the performing frame stabilization processing on the pixel point information in the first image frame based on the pixel point information in the second image frame in the first video to obtain the second pixel point information comprises:
performing frame stabilization processing on pixel point information of a first pixel point based on pixel point information of a second pixel point in the second image frame and luminance difference information between the second pixel point and the first pixel point in the first image frame, to obtain the second pixel point information;
wherein the first pixel point is a pixel point in the first image frame that matches the second pixel point, and the second pixel point is one of the pixel points in the second image frame.
3. The method of claim 2, wherein the luminance difference information is a first luminance difference weight between the second pixel point and the first pixel point in the first image frame;
the performing frame stabilization processing on the pixel point information of the first pixel point based on the pixel point information of the second pixel point in the second image frame and the luminance difference information between the second pixel point and the first pixel point in the first image frame to obtain the second pixel point information comprises:
weighting the pixel point information of the second pixel point based on the first luminance difference weight to obtain weighted pixel point information of the second pixel point;
weighting the pixel point information of the first pixel point based on a second luminance difference weight to obtain weighted pixel point information of the first pixel point, wherein the second luminance difference weight is obtained by subtracting the first luminance difference weight from 1;
and adding the weighted pixel point information of the second pixel point to the weighted pixel point information of the first pixel point to obtain the second pixel point information.
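In equation form, claim 3 computes second_info = w1 * prev + (1 - w1) * curr, where w1 is the first luminance difference weight. A minimal NumPy sketch follows; since the claims do not specify how w1 is derived from the luminance difference, the exponential falloff below is only an assumed example, and the function name is hypothetical.

```python
import numpy as np

def stabilize_pixels(curr, prev, sigma=25.0):
    """Claim 3 blend: second_info = w1 * prev + (1 - w1) * curr, where
    w1 is the first luminance difference weight and curr/prev hold the
    matched pixel point information of the first and second frames
    (HxWx3 float arrays)."""
    diff = np.abs(curr.mean(axis=2) - prev.mean(axis=2))
    # Assumed mapping (the claims leave it unspecified): the larger the
    # luminance difference, the smaller the previous frame's weight,
    # which suppresses ghosting on moving light spots.
    w1 = np.exp(-diff / sigma)[..., None]  # per-pixel weight in (0, 1]
    return w1 * prev + (1.0 - w1) * curr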
4. The method of claim 1, wherein the determining the spot area in the first image frame based on the second pixel point information comprises:
sampling pixel points in the first image frame, starting from the center point of a first spot template, based on template parameters corresponding to the first spot template and the second pixel point information, and determining the spot area in the first image frame;
wherein the template parameters include at least one of: sampling point positions, a number of sampling points, and sampling weights of the sampling points; different spot templates correspond to different spot shapes.
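A sketch of one way the claim 4 sampling could work, assuming the template parameters are given as sampling point offsets from the center point plus per-point weights; the cross-shaped template, the threshold, and all names are illustrative assumptions, not taken from the specification.

```python
import numpy as np

# Hypothetical spot template: sampling point positions as offsets from
# the template center, plus one sampling weight per point (a small
# cross here purely for illustration; real templates encode a shape).
OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
WEIGHTS = [2.0, 1.0, 1.0, 1.0, 1.0]

def spot_area(luma, threshold=200.0):
    """Score every pixel by the weighted sum of the template's sampling
    points centered on it; pixels whose score exceeds an assumed
    threshold form the spot area. `luma` is an HxW luminance plane
    taken from the stabilized (second) pixel point information."""
    score = np.zeros_like(luma, dtype=np.float64)
    for (dy, dx), wt in zip(OFFSETS, WEIGHTS):
        # np.roll wraps at the borders; good enough for a sketch.
        score += wt * np.roll(np.roll(luma, dy, axis=0), dx, axis=1)
    return score / sum(WEIGHTS) > threshold
```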
5. The method of claim 4, wherein, before the sampling of pixel points in the first image frame based on the template parameters corresponding to the first spot template and the second pixel point information to determine the spot area in the first image frame, the method further comprises:
receiving a first input;
displaying, in response to the first input, at least one spot template identifier, wherein each spot template identifier corresponds to one spot template;
receiving a second input on a first spot template identifier of the at least one spot template identifier;
and in response to the second input, taking the spot template indicated by the first spot template identifier as the first spot template.
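Stripped of UI details, claim 5's selection flow reduces to a lookup from a user-chosen identifier to its template. A minimal sketch, with every identifier and record invented for illustration:

```python
# Hypothetical registry mapping each spot template identifier to its
# template parameters; identifiers and values are placeholders only.
SPOT_TEMPLATES = {
    "circle": {"offsets": [(0, 0)], "weights": [1.0]},
    "heart":  {"offsets": [(0, 0)], "weights": [1.0]},
    "star":   {"offsets": [(0, 0)], "weights": [1.0]},
}

def on_first_input():
    # First input: surface the available spot template identifiers.
    return list(SPOT_TEMPLATES)

def on_second_input(identifier):
    # Second input: the template indicated by the chosen identifier
    # becomes the first spot template used for sampling.
    return SPOT_TEMPLATES[identifier]
```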
6. A video processing apparatus, comprising: an acquisition module, a processing module, and a determining module;
the acquisition module is configured to acquire pixel point information in a first image frame in a first video, wherein the first video is a blurred video, the first video comprises N image frames, and N is an integer greater than 1;
the processing module is configured to perform frame stabilization processing on the pixel point information in the first image frame acquired by the acquisition module based on pixel point information in a second image frame in the first video, to obtain second pixel point information, wherein the second image frame is the image frame preceding the first image frame in the first video, and the first image frame is one of the N image frames;
the determining module is configured to determine the spot area in the first image frame based on the second pixel point information obtained by the processing module;
and the processing module is further configured to perform spot enhancement processing on the spot area in each of the N image frames to obtain a second video.
7. The apparatus of claim 6, wherein the processing module is specifically configured to perform frame stabilization processing on pixel point information of a first pixel point based on pixel point information of a second pixel point in the second image frame and luminance difference information between the second pixel point and the first pixel point in the first image frame, to obtain the second pixel point information; the first pixel point is a pixel point in the first image frame that matches the second pixel point, and the second pixel point is one of the pixel points in the second image frame.
8. The apparatus of claim 7, wherein the luminance difference information is a first luminance difference weight between the second pixel point and the first pixel point in the first image frame;
the processing module is specifically configured to weight the pixel point information of the second pixel point based on the first luminance difference weight to obtain weighted pixel point information of the second pixel point;
weight the pixel point information of the first pixel point based on a second luminance difference weight to obtain weighted pixel point information of the first pixel point, wherein the second luminance difference weight is obtained by subtracting the first luminance difference weight from 1;
and add the weighted pixel point information of the second pixel point to the weighted pixel point information of the first pixel point to obtain the second pixel point information.
9. The apparatus according to claim 6, wherein the determining module is specifically configured to determine the spot area in the first image frame by sampling pixel points in the first image frame, starting from the center point of a first spot template, based on template parameters corresponding to the first spot template and the second pixel point information;
wherein the template parameters include at least one of: sampling point positions, a number of sampling points, and sampling weights of the sampling points; different spot templates correspond to different spot shapes.
10. The apparatus of claim 9, wherein the video processing apparatus further comprises: a receiving module and a display module;
the receiving module is configured to receive a first input before the determining module samples pixel points in the first image frame based on the template parameters corresponding to the first spot template and the second pixel point information to determine the spot area in the first image frame;
the display module is configured to display, in response to the first input received by the receiving module, at least one spot template identifier, wherein each spot template identifier corresponds to one spot template;
the receiving module is further configured to receive a second input on a first spot template identifier of the at least one spot template identifier;
and the determining module is further configured to take, in response to the second input received by the receiving module, the spot template indicated by the first spot template identifier as the first spot template.
CN202311811769.7A 2023-12-26 2023-12-26 Video processing method and device Pending CN117793513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311811769.7A CN117793513A (en) 2023-12-26 2023-12-26 Video processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311811769.7A CN117793513A (en) 2023-12-26 2023-12-26 Video processing method and device

Publications (1)

Publication Number Publication Date
CN117793513A 2024-03-29

Family

ID=90401388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311811769.7A Pending CN117793513A (en) 2023-12-26 2023-12-26 Video processing method and device

Country Status (1)

Country Link
CN (1) CN117793513A (en)

Similar Documents

Publication Title
CN112532882B (en) Image display method and device
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN112508820A (en) Image processing method and device and electronic equipment
CN112734661A (en) Image processing method and device
WO2023001110A1 (en) Neural network training method and apparatus, and electronic device
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN112532904B (en) Video processing method and device and electronic equipment
CN117793513A (en) Video processing method and device
CN112446848A (en) Image processing method and device and electronic equipment
CN112511890A (en) Video image processing method and device and electronic equipment
CN113489901B (en) Shooting method and device thereof
CN114143448B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112367470B (en) Image processing method and device and electronic equipment
CN113923367B (en) Shooting method and shooting device
CN115797160A (en) Image generation method and device
CN116017146A (en) Image processing method and device
CN117156285A (en) Image processing method and device
CN117011124A (en) Gain map generation method and device, electronic equipment and medium
CN114979479A (en) Shooting method and device thereof
CN114125302A (en) Image adjusting method and device
CN117274097A (en) Image processing method, device, electronic equipment and medium
CN116128844A (en) Image quality detection method, device, electronic equipment and medium
CN117278842A (en) Shooting control method, shooting control device, electronic equipment and readable storage medium
CN113850739A (en) Image processing method and device
CN116320729A (en) Image processing method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination