CN117651199A - Video processing method, apparatus, device, computer readable storage medium and product - Google Patents
- Publication number: CN117651199A
- Application number: CN202311651245.6A
- Authority
- CN
- China
- Prior art keywords
- video frame
- preset
- target
- pixels
- pixel
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion)
Classifications
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- H04N21/23418—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/2343—Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/4402—Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/854—Content authoring
Abstract
Embodiments of the present disclosure provide a video processing method, apparatus, device, computer readable storage medium, and product. The method includes: acquiring a video to be processed, and obtaining motion information corresponding to a current video frame based on the current video frame and its previous video frame in the video to be processed; determining at least one motion particle to be added based on the motion information; drawing the at least one motion particle on a preset canvas according to preset drawing parameters to obtain a displacement map, where the displacement map includes pixel offsets corresponding to a plurality of pixels in the current video frame; and performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame, and obtaining a target video from the plurality of target video frames corresponding to the video to be processed. In this way, the current video frame can present a ripple effect matched with the movement of the picture main body in the video to be processed, improving the degree of fit between the ripple effect and the video to be processed.
Description
Technical Field
The embodiments of the disclosure relate to the technical field of image processing, and in particular to a video processing method, apparatus, device, computer readable storage medium, and product.
Background
The user can edit the video according to the actual demand, so that the video presents richer visual effects, and the video quality is improved. For example, a user may specify a location in the video, process the video through a preset image processing algorithm, and add special effect content at the location specified by the user.
However, current special effect processing methods generally generate the special effect at a position designated by the user. The generated effect is monotonous, depends on the user's triggering operation, and is weakly correlated with the video content, resulting in poor video quality.
Disclosure of Invention
The embodiments of the disclosure provide a video processing method, apparatus, device, computer readable storage medium, and product, which are used to solve the technical problem of weak relevance between existing ripple special effects and the video.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring a video to be processed, and acquiring motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed;
determining at least one moving particle to be added based on the motion information;
drawing the at least one motion particle on a preset canvas according to preset drawing parameters to obtain a displacement diagram, wherein the displacement diagram comprises pixel offset corresponding to a plurality of pixels in a current video frame;
and performing offset operation on pixels in the current video frame according to the displacement diagram to obtain a target video frame, and obtaining a target video according to a plurality of target video frames corresponding to the video to be processed.
In a second aspect, embodiments of the present disclosure provide a video processing apparatus, including:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a video to be processed and acquiring motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed;
a determining module for determining at least one moving particle to be added based on the motion information;
the drawing module is used for drawing the at least one motion particle on a preset canvas according to preset drawing parameters to obtain a displacement diagram, wherein the displacement diagram comprises pixel offset corresponding to a plurality of pixels in a current video frame;
and the offset module is used for performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame, and obtaining a target video according to a plurality of target video frames corresponding to the video to be processed.

In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
The memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the video processing method according to the first aspect and the various possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the video processing method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the video processing method according to the first aspect and the various possible designs of the first aspect.
According to the video processing method, apparatus, device, computer readable storage medium, and product provided by the embodiments, the motion information of the picture main body in the video to be processed can be clearly determined by calculating the optical flow map corresponding to each video frame of the video to be processed. At least one motion particle is then determined based on the optical flow map, and a drawing operation is performed on the at least one motion particle to generate a displacement map, where the displacement map includes the pixel offsets corresponding to a plurality of pixels. A pixel offset operation can thus be performed on the current video frame based on the displacement map, so that the current video frame presents a ripple effect matched with the movement of the picture main body in the video to be processed, improving the degree of fit between the ripple effect and the video to be processed.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, a brief description will be given below of the drawings that are needed in the embodiments or the description of the prior art, it being obvious that the drawings in the following description are some embodiments of the present disclosure, and that other drawings may be obtained from these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
fig. 2 is a schematic view of an application scenario provided in an embodiment of the present disclosure;
fig. 3 is a flowchart of a video processing method according to another embodiment of the disclosure;
fig. 4 is a schematic diagram of a displacement map provided by an embodiment of the present disclosure;
fig. 5 is a flowchart of a video processing method according to another embodiment of the disclosure;
fig. 6 is a flowchart of a video processing method according to another embodiment of the disclosure;
fig. 7 is a flowchart of a video processing method according to another embodiment of the disclosure;
fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, usage scope, usage scenario, etc. of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, a prompt is sent to the user to explicitly prompt the user that the operation it is requesting to perform will require personal information to be obtained and used with the user. Thus, the user can autonomously select whether to provide personal information to software or hardware such as an electronic device, an application program, a server or a storage medium for executing the operation of the technical scheme of the present disclosure according to the prompt information.
As an alternative but non-limiting implementation, in response to receiving an active request from a user, the manner in which the prompt information is sent to the user may be, for example, a popup, in which the prompt information may be presented in a text manner. In addition, a selection control for the user to select to provide personal information to the electronic device in a 'consent' or 'disagreement' manner can be carried in the popup window.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
In order to solve the technical problem of low relevance between the existing ripple special effect and the video, the present disclosure provides a video processing method, device, equipment, computer readable storage medium and product.
It should be noted that the video processing method, apparatus, device, computer readable storage medium and product provided in the present disclosure may be applied to any application scenario for performing special effect processing on video.
In the current special effect processing method, the trigger position of the user is generally obtained, and the special effect is added at the trigger position of the user. However, since the addition of the special effect content often depends on the triggering operation of the user, the correlation between the generated special effect and the video is insufficient, so that the special effect processing effect is not true enough, and the video processing effect is not good.
In solving the above technical problems, the inventors found through research that, in order to improve the relevance between the added special effect content and the video to be processed, motion information can be determined based on two adjacent video frames of the video to be processed, where the motion information includes, but is not limited to, information such as an optical flow map that can describe the motion trend of the content body. The motion trend of the content body in the video frame can thus be determined based on the motion information. Then, at least one motion particle to be added can be determined based on the motion information, and a rendering operation is performed on the at least one motion particle to obtain a displacement map including pixel offsets. Pixel offset processing is performed on the video frames of the video to be processed using the pixel offsets in the displacement map, so that the processed target video can show a ripple effect matched with the motion trend of the content body.
The system architecture on which the present disclosure is based at least comprises a terminal device and a server, wherein the terminal device is in communication connection with the server. The user can trigger a ripple special effect processing request on the terminal device, and accordingly, the server can acquire the video to be processed based on the ripple special effect processing request. And calculating an optical flow diagram corresponding to the video to be processed, and adding ripple effects associated with the motion trend of the content main body on the video to be processed based on the optical flow diagram.
Fig. 1 is a flow chart of a video processing method according to an embodiment of the disclosure, as shown in fig. 1, the method includes:
step 101, acquiring a video to be processed, and obtaining motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed.
The execution subject of the present embodiment is a video processing apparatus. The video processing device can be coupled to a server which can be in communication connection with the terminal device, so that an optical flow diagram corresponding to the video to be processed can be calculated based on a ripple special effect processing request triggered by a user on the terminal device, and ripple effects associated with the motion trend of the content main body can be added on the video to be processed based on the optical flow diagram.
Or, the video processing device may be coupled to the terminal device, so as to obtain the video to be processed in response to a triggering operation of the user on the terminal device, calculate an optical flow diagram corresponding to the video to be processed, and add a ripple effect associated with a motion trend of the content body on the video to be processed based on the optical flow diagram.
In this embodiment, the user may trigger the ripple special effect processing request on the terminal device. For example, the user may select a special ripple effect from video processing software preset in the terminal device, and trigger the special ripple effect to generate a special ripple effect processing request. Wherein the moire effect includes, but is not limited to, a water moire effect. For example, the special ripple effect may be based on the movement of the content body, adding the effect of water ripple at the location where the movement is generated.
Accordingly, the video processing apparatus can acquire the moire special effect processing request and acquire the video to be processed. The video to be processed may be acquired by the user in real time, may be uploaded by the user in a preset storage path, or may be selected by the user from a plurality of preset videos, which is not limited in the disclosure.
Further, in order to be able to present a ripple effect on the video to be processed, for each video frame of the video to be processed, the video frame may be determined as a current video frame, and motion information corresponding to the current video frame may be calculated based on the current video frame and its previous video frame.
Wherein the motion information includes, but is not limited to, information such as an optical flow map that can describe the motion trend of the content body. Any optical flow calculation method may be used to compute the optical flow map, which is not limited in this disclosure.
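Since the patent leaves the optical flow algorithm open (a production pipeline would typically call an off-the-shelf routine such as OpenCV's Farneback method), the following NumPy stand-in is only a toy sketch based on frame differencing, not the patented method; it exists purely to fix the (H, W, 2) flow field interface assumed by the later steps.

```python
import numpy as np

def motion_info(prev_frame, curr_frame):
    """Toy stand-in for an optical-flow algorithm (illustrative only).

    Returns an (H, W, 2) array of per-pixel (horizontal, vertical)
    "flow" values derived from the gradients of the frame difference.
    """
    diff = curr_frame.astype(np.float32) - prev_frame.astype(np.float32)
    if diff.ndim == 3:
        # Collapse colour channels into a single intensity difference.
        diff = diff.mean(axis=2)
    dy, dx = np.gradient(diff)  # np.gradient returns d/dy first for 2-D input
    return np.stack([dx, dy], axis=-1)
```

Any real flow estimator with the same output shape could be swapped in without changing the downstream steps.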
Step 102, determining at least one moving particle to be added based on the motion information.
In this embodiment, the video to be processed may include a moving content body. For example, the video to be processed may be a person video, in which the content body may be a person, and the content body may have sports activities such as limb movements, actions, and the like. The motion information can accurately represent the motion trend of the content body, wherein the motion trend includes, but is not limited to, a moving direction, a motion amplitude, and the like.
After obtaining the motion information corresponding to the current video frame, at least one motion particle to be added may be determined based on the motion information, so that the motion area of the content body can be described by the motion particles. The motion particles may move according to motion parameters, which may be determined based on the motion information.
And 103, drawing the at least one motion particle on a preset canvas according to preset drawing parameters to obtain a displacement diagram, wherein the displacement diagram comprises pixel offset corresponding to a plurality of pixels in the current video frame.
In this embodiment, after determining at least one moving particle to be added, the at least one moving particle may be drawn on a preset canvas. The preset canvas may be a blank canvas, or may be a canvas set by a user according to actual requirements, which is not limited in the present disclosure.
Further, drawing parameters may be preset, including, but not limited to, a first radius and a second radius for drawing the moving particle associated offset region, and at least one assignment parameter in a preset assignment algorithm.
Therefore, the at least one motion particle can be drawn on the preset canvas according to the preset drawing parameters to obtain a displacement map, where the displacement map includes the pixel offsets corresponding to a plurality of pixels in the current video frame.
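As an illustrative sketch of this drawing step (the annulus between the first and second radius, and the attenuation of the offset by the particle's remaining life, are assumptions rather than values taken from the patent), each motion particle could be rasterized onto a blank two-channel canvas as a radial offset region:

```python
import numpy as np

def draw_displacement_map(particles, height, width,
                          inner_radius=2.0, outer_radius=8.0):
    """Rasterize motion particles into an (H, W, 2) displacement map.

    Each particle is a dict with "pos" (x, y), "speed" and "life";
    the field names are illustrative, not from the patent.
    """
    # Blank canvas: channel 0 holds dx offsets, channel 1 holds dy.
    disp = np.zeros((height, width, 2), dtype=np.float32)
    yy, xx = np.mgrid[0:height, 0:width]
    for p in particles:
        px, py = p["pos"]
        dist = np.hypot(xx - px, yy - py)
        # The annulus between the two radii receives a radial offset
        # whose strength fades with the particle's remaining life.
        ring = (dist >= inner_radius) & (dist <= outer_radius) & (dist > 0)
        strength = p["life"] * p["speed"]
        disp[..., 0][ring] += strength * (xx[ring] - px) / dist[ring]
        disp[..., 1][ring] += strength * (yy[ring] - py) / dist[ring]
    return disp
```

Overlapping rings simply accumulate, which is one plausible reading of drawing several particles on the same canvas.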
And 104, performing offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame, and obtaining a target video according to a plurality of target video frames corresponding to the video to be processed.
In this embodiment, after constructing the displacement map based on the optical flow map, the pixels in the current video frame may be subjected to an offset operation based on the displacement map to obtain the target video frame.
Optionally, for each pixel in the displacement map, a target pixel in the current video frame that matches the pixel may be determined, and the pixel offset value may be superimposed on the target pixel, thereby implementing the offset operation on the pixels in the current video frame.
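One common convention for such an offset operation is backward warping: each output pixel samples the source frame at its own position plus the offset stored in the displacement map. The patent does not pin the convention down, so the NumPy sketch below (nearest-neighbour sampling, border clamping) is one assumed reading:

```python
import numpy as np

def apply_displacement(frame, disp):
    """Warp a frame by an (H, W, 2) displacement map.

    frame: (H, W) or (H, W, C) image array.
    disp:  per-pixel (dx, dy) offsets; each output pixel samples the
           source at (x + dx, y + dy), clamped to the frame border.
    """
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xx + disp[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(yy + disp[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

With a zero displacement map this returns the frame unchanged, which matches the intuition that no motion produces no ripple.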
Further, after the target video frame corresponding to each video frame of the video to be processed is obtained, the plurality of target video frames may be combined in time order to obtain the target video. In the target video, a ripple effect may be presented around the content body as the content body moves. The ripples diffuse and disappear over time, better fitting the behaviour of real water ripples and improving the authenticity of the ripple effect in the target video.
Fig. 2 is a schematic view of an application scenario provided in an embodiment of the present disclosure, as shown in fig. 2, in a previous video frame 21, a content body 22 may be in a standing posture. In the current video frame 23, the content body 22 may switch the posture to the hand-up posture. In response to the posture change of the content body 22, a moire effect 24 may be added to the arm region where the change occurs, and in addition, a refraction effect of the water surface may be simulated so that the arm region of the content body 22 exhibits a wavy effect. Therefore, the adding of the water ripple effect can be matched with the motion area of the content main body, and the relevance between the ripple effect and the video to be processed can be improved.
According to the video processing method provided by this embodiment, the optical flow map corresponding to each video frame of the video to be processed is calculated, at least one motion particle is determined based on the optical flow map, and a drawing operation is performed on the at least one motion particle to generate a displacement map including the pixel offsets corresponding to a plurality of pixels, so that a pixel offset operation can be performed on the current video frame based on the displacement map.
Further, based on any of the above embodiments, step 102 includes:
and determining at least one target pixel point of which the motion amplitude meets a preset condition in the current video frame based on the motion information.
And adding moving particles at the position of the target pixel point, wherein the moving direction of the moving particles is the same as the moving direction of the target pixel point, the moving speed of the moving particles is in direct proportion to the moving amplitude of the pixel point, and a preset life cycle is set for the moving particles.
In this embodiment, after the motion information is obtained, since the motion information can represent the motion trend of the pixel point in the current video frame, at least one target pixel point in the current video frame whose motion amplitude meets the preset condition can be determined based on the motion information.
For example, when any object moves in water, the larger the movement amplitude, the larger the water ripple generated. Therefore, after determining at least one target pixel point where the motion amplitude satisfies the preset condition, the motion particles may be added at the position of the target pixel point to add the moire effect at the pixel point based on the motion particles. The preset condition may be that the motion amplitude is greater than a preset amplitude threshold. Alternatively, the preset condition may be that the moving distance is greater than a preset distance threshold, or the like, which is not limited by the present disclosure.
Further, a motion parameter may be set for the moving particle according to the motion information, so that the moving particle moves according to the motion parameter. The motion direction of the motion particles is the same as the motion direction of the target pixel point, the motion speed of the motion particles is in direct proportion to the motion amplitude of the pixel point, and a preset life cycle is set for the motion particles.
Taking the motion information being an optical flow map as an example, after the optical flow map corresponding to the current video frame is obtained, the optical flow map may include the optical flow direction and the optical flow magnitude corresponding to each pixel, which can represent the motion direction and motion amplitude of the content body.
Alternatively, each pixel point in the optical flow map may be traversed, the sum of the pixel point's optical flow value in the vertical direction and its optical flow value in the horizontal direction may be calculated, and it may be detected whether the sum of the optical flow values is greater than a preset optical flow threshold. The optical flow threshold may be 10, or a value set by the user according to actual requirements, which is not limited in the disclosure.
Further, if the sum of the optical flow values corresponding to any pixel point is detected to be greater than the preset optical flow threshold, it can be determined that a motion particle is added at the position of the pixel point. The motion direction of the motion particle is the same as the optical flow direction of the pixel point, and the motion speed of the motion particle is in direct proportion to the optical flow value of the pixel point: the greater the optical flow value, the greater the motion speed. A preset life cycle is set for the motion particle; the longer the life cycle, the longer the motion particle survives.
As one implementation, the same lifecycle may be set for all moving particles, or the corresponding lifecycle may be set for moving particles based on the sum of the corresponding light values of the pixels, which is not limited by the present disclosure.
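The traversal described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the flow-array layout, the function name `spawn_particles`, and the dictionary representation of a particle are assumptions, while the threshold value of 10 and the flow-sum test follow the example in the text.

```python
def spawn_particles(flow, threshold=10.0, base_life=10):
    """flow: H x W list of (fx, fy) optical-flow vectors per pixel.
    Returns particle dicts for pixels whose flow sum exceeds the threshold."""
    particles = []
    for y, row in enumerate(flow):
        for x, (fx, fy) in enumerate(row):
            flow_sum = abs(fx) + abs(fy)      # horizontal + vertical flow values
            if flow_sum > threshold:          # preset optical flow value threshold
                particles.append({
                    "pos": (x, y),
                    "vel": (fx, fy),          # direction and speed follow the flow
                    "life": base_life,        # preset life cycle
                })
    return particles
```

The lifecycle here is the same for every particle; per the text, it could instead be scaled by `flow_sum`.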
According to the video processing method provided by this embodiment, by presetting the preset condition, moving particles can be accurately added at positions where the motion amplitude of the content body is larger, and generation of the ripple effect can then be realized through the rendering operation on the moving particles. The ripple effect is thereby added in regions where the content body has a larger motion amplitude, so that the ripple effect better fits the motion trend of the content body, and the video quality is improved.
Further, on the basis of any one of the foregoing embodiments, after step 102, the method further includes:
and updating the display position and the life cycle of the moving particles according to a preset time period.
In this embodiment, in order to simulate the effects of the expansion of the water wave and the gradual disappearance of the water wave, the display position and the life cycle of the moving particles may be updated according to a preset time period.
The preset time period may be such that an update operation is performed for each video frame. Alternatively, the user may set the time period according to actual requirements, which is not limited in this disclosure.
According to the video processing method provided by this embodiment, the display positions and life cycles of the moving particles are updated, so that the display effect of the water wave gradually dissipating can be simulated, improving the realism of the ripple effect.
Further, on the basis of any one of the above embodiments, the updating the display position and the life cycle of the moving particle according to the preset time period includes:
and updating the display position of the moving particles according to a preset time period based on the moving speed and the moving direction of the moving particles.
And carrying out attenuation operation on the life cycle of the moving particles according to a preset attenuation speed until the life cycle of the moving particles is attenuated to a preset life cycle threshold.
In this embodiment, in order to simulate the effects of the expansion of the water wave and the gradual disappearance of the water wave, the display position and the life cycle of the moving particles may be updated according to a preset time period.
Optionally, since the motion direction and motion speed of each moving particle are determined when the moving particles to be added are determined based on the optical flow map, the display position of a moving particle in the current frame can be determined from its display position, motion direction and motion speed in the previous frame together with the time period, so that the display position of the moving particle can be updated.
Further, the life cycle of the moving particle may be attenuated according to a preset time period and a preset attenuation speed until the life cycle of the moving particle is attenuated to a preset life cycle threshold.
Alternatively, the preset time interval may be such that the position and life cycle of the moving particles are updated for each video frame. The preset attenuation speed may then be such that the life cycle of each moving particle is decremented by 1 at each update. Alternatively, the user may adjust the preset time interval and the attenuation speed according to actual requirements, which is not limited in the disclosure.
For example, if the life cycle corresponding to the previous video frame is 10, the life cycle of the moving particle may be decremented by 1, so that the life cycle of the moving particle in the current video frame is 9.
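The per-period update can be sketched as follows. The function name and particle representation are assumptions for illustration; the decrement-by-1 attenuation and the position update from motion speed and direction follow the description above.

```python
def update_particles(particles, life_threshold=0):
    """Advance each particle by its velocity, decrement its life cycle by 1
    (the example attenuation speed), and keep only surviving particles."""
    alive = []
    for p in particles:
        x, y = p["pos"]
        vx, vy = p["vel"]
        p["pos"] = (x + vx, y + vy)   # previous position + velocity * period
        p["life"] -= 1                # attenuation of 1 per video frame
        if p["life"] > life_threshold:
            alive.append(p)           # only these are drawn (see step 104)
    return alive
```

For example, a particle with life cycle 10 in the previous frame has life cycle 9 after one update, matching the worked example in the text.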
According to the video processing method provided by this embodiment, the display position of each moving particle is updated based on its motion speed and motion direction, so that the particle display positions better fit the motion trend of the content body. The life cycle of the moving particles is attenuated according to the preset attenuation speed, so that the display effect of the water wave gradually dissipating can be simulated, improving the realism of the ripple effect.
Further, based on any of the above embodiments, step 104 includes:
and determining a plurality of target motion particles with the life cycle larger than a life cycle threshold value in a plurality of video frames corresponding to the video to be processed.
And drawing the plurality of target moving particles on a preset canvas according to preset drawing parameters.
In this embodiment, in the process of drawing moving particles, each video frame in the video to be processed corresponds to a plurality of moving particles, and the life cycles of the moving particles corresponding to different video frames differ. Therefore, in order to simulate the effect of the ripple gradually dissipating, only the target moving particles whose life cycle is greater than the preset life cycle threshold may be drawn.
Optionally, a plurality of target motion particles whose lifecycle is greater than a lifecycle threshold can be determined in a plurality of video frames corresponding to the video to be processed. The life cycle threshold may be 0, or a value set by the user according to an actual requirement, which is not limited in the disclosure. And drawing a plurality of target moving particles on a preset canvas according to preset drawing parameters.
According to the video processing method provided by this embodiment, the target moving particles whose life cycle is greater than the life cycle threshold are rendered, so that the ripple effect can follow the moving content body, and the visual effect of the ripple disappearing can be accurately simulated.
Fig. 3 is a flow chart of a video processing method according to another embodiment of the disclosure, where, based on any of the foregoing embodiments, as shown in fig. 3, step 103 includes:
step 301, determining, for each moving particle, a position where the moving particle is located as a center position.
And 302, drawing two concentric circles according to the circle center position, the preset first radius and the preset second radius.
And 303, performing an assignment operation on the pixels in the region of the hollow ring formed by the two concentric circles according to a preset assignment algorithm, to obtain the pixel offsets corresponding to the pixels in the region of the hollow ring, thereby obtaining the displacement map.
In this embodiment, the preset drawing parameters include, but are not limited to, a first radius, a second radius, and parameters in an assignment algorithm.
After determining a plurality of moving particles to be added, for each moving particle, the position of the moving particle is determined as the circle center, a first area is drawn according to the circle center and a preset first radius, and a second area is drawn according to the circle center and a preset second radius, where the second radius is greater than the first radius. The first area and the second area are concentric circles, and subtracting the first area from the second area yields a hollow ring. An assignment operation is performed on the pixels in the hollow ring to obtain the pixel offsets corresponding to the pixels in the region of the hollow ring, thereby obtaining the displacement map.
An offset algorithm may be preset, and the assignment operation may be performed on the pixels in the hollow ring based on the offset algorithm, so as to obtain the displacement map.
Fig. 4 is a schematic diagram of displacement provided in an embodiment of the present disclosure. As shown in fig. 4, a first area 42 may be drawn according to a preset first radius, and a second area 43 may be drawn according to a preset second radius; the annular area 44 formed between the first area 42 and the second area 43 is determined as the area to be assigned, and an assignment operation may be performed on the pixels 45 in that area. For example, the pixel 45 may be shifted to the position of the sampling point 46 by the assignment operation.
Further, based on any of the above embodiments, step 303 includes:
and determining the polar coordinates corresponding to the pixels in the area of the hollow circular ring according to the Cartesian coordinates corresponding to the pixels in the area of the hollow circular ring.
And carrying out assignment operation on the polar coordinates of the pixels in the region where the hollow circular ring is located based on a preset assignment algorithm.
In this embodiment, the assignment operation to the pixel may be implemented in polar coordinates.
Alternatively, the polar coordinates corresponding to the pixels in the area where the hollow ring is located may be determined according to the cartesian coordinates corresponding to the pixels in the area where the hollow ring is located. After determining a plurality of pixels within the hollow circular ring, for each pixel, a value assignment operation may be performed on the pixel based on the polar coordinates of the pixel and a preset value assignment algorithm. After assignment is completed, the polar coordinates may also be converted to Cartesian coordinates.
The assignment algorithm may include parameters for adjusting the range of the hollow ring, parameters for adjusting the size of the hollow ring, and parameters for adjusting the position of the sampling point, and the user may adjust the parameters in the assignment algorithm according to actual requirements so as to present different ripple effects.
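As a hedged sketch of steps 301 to 303, the following assigns a radial offset to every pixel of the hollow ring in polar coordinates. The specific assignment formula (a fixed radial push of `strength` pixels) and the function name are assumptions for illustration; the patent only requires that the assignment algorithm write pixel offsets for the pixels in the ring region, with its parameters adjustable.

```python
import math

def draw_ring(disp, cx, cy, r1, r2, strength=3.0):
    """Write (dx, dy) offsets into disp (H x W list of [dx, dy]) for pixels
    in the hollow ring r1 < r <= r2 around centre (cx, cy)."""
    h, w = len(disp), len(disp[0])
    for y in range(h):
        for x in range(w):
            # Cartesian -> polar relative to the circle centre
            r = math.hypot(x - cx, y - cy)
            if r1 < r <= r2:                     # inside the hollow ring
                theta = math.atan2(y - cy, x - cx)
                # polar -> Cartesian offset, pushing radially outward
                disp[y][x][0] += strength * math.cos(theta)
                disp[y][x][1] += strength * math.sin(theta)
    return disp
```

Pixels inside the first radius and outside the second radius keep a zero offset, so only the ring region contributes to the displacement map.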
According to the video processing method provided by the embodiment, the position of the moving particle is determined to be the circle center position, two concentric circles are drawn according to the circle center position, the preset first radius and the preset second radius, and the assignment operation is carried out on the pixels in the region where the hollow ring formed by the two concentric circles is located according to the preset assignment algorithm, so that the pixels in the hollow ring can be subjected to the offset operation, and the current video frame can be subjected to the pixel offset processing based on the pixel offset to display the ripple effect.
Further, based on any of the above embodiments, step 104 includes:
For a pixel in the displacement map, a target pixel in the current video frame that matches the pixel is determined.
And superposing the pixel offset corresponding to the pixel in the displacement map on the target pixel to obtain the target video frame.
In this embodiment, after obtaining the displacement map, since the displacement map includes the pixel offsets corresponding to a plurality of pixels, the pixel offset operation can be performed on the current video frame based on the displacement map.
Alternatively, for each pixel in the displacement map, a target pixel in the current video frame that matches the pixel may be determined, and the pixel offset corresponding to the pixel in the displacement map may be superposed on the target pixel to obtain the target video frame.
It should be noted that an image frame is generally an RGB three-channel image or an RGBW four-channel image, while a pixel offset is generally two-dimensional data. Thus, when generating the displacement map, values may be assigned to only the R and G channels. Further, the target pixel can be offset based on the two-dimensional pixel offset in the displacement map.
For example, the pixel offset may be an offset vector (1, 2), which may represent an offset of one pixel to the right in X and two pixels downward in Y, so that the pixel offset operation can be accurately performed on the current video frame based on the pixel offset.
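A minimal sketch of the offset operation follows, assuming the displacement map is applied by sampling: each target pixel reads the current frame at its own position plus the stored offset, so an offset of (1, 2) samples one pixel right and two pixels down. The out-of-range fallback (keeping the original pixel) and the function name are assumptions.

```python
def apply_displacement(frame, disp):
    """frame: H x W list of pixel values; disp: H x W list of (dx, dy).
    Returns a new frame with each pixel offset by the displacement map."""
    h, w = len(frame), len(frame[0])
    out = [[frame[y][x] for x in range(w)] for y in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = disp[y][x]
            sx, sy = x + int(dx), y + int(dy)   # offset sample position
            if 0 <= sx < w and 0 <= sy < h:      # clamp: keep original otherwise
                out[y][x] = frame[sy][sx]
    return out
```

Pixels with a zero offset are copied unchanged, so regions outside the ripple rings are unaffected.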
According to the video processing method provided by the embodiment, the pixel offset is superposed on the pixel of the current video frame, so that the pixel in the current video frame can be subjected to position offset, and the display effect of water wave diffusion is presented.
Fig. 5 is a schematic flow chart of a video processing method according to another embodiment of the disclosure, where, on the basis of any of the foregoing embodiments, as shown in fig. 5, after step 103, the method further includes:
Step 501, performing a blurring operation on the displacement map through a preset blur algorithm to obtain a blurred displacement map.
Step 104 comprises:
And 502, performing an offset operation on pixels in the current video frame according to the blurred displacement map to obtain the target video frame.
In this embodiment, after obtaining the displacement map, if the pixel offset operation were performed on the current video frame directly based on the displacement map, the generated ripple effect might look hard and unrealistic. Therefore, after obtaining the displacement map, a blurring operation can be performed on it through a preset blur algorithm to obtain a blurred displacement map. Any blur algorithm may be adopted to implement the blurring operation, which is not limited by the present disclosure.
Optionally, the blur algorithm may include preset blur parameters, and the degree of blurring may be adjusted by adjusting the blur parameters. An adjustment control for the blur parameters may be displayed at the front end, so that the user may adjust the blur effect based on the adjustment control, which is not limited in this disclosure.
Further, after obtaining the blurred displacement map, the pixels in the current video frame may be subjected to an offset operation based on the blurred displacement map, so as to obtain the target video frame.
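The blurring of the displacement map can be sketched with a simple 3x3 box blur over both offset channels. A real implementation would more likely use a Gaussian blur whose radius is the adjustable blur parameter mentioned above, so the kernel choice and the function name here are assumptions.

```python
def blur_displacement(disp):
    """disp: H x W list of [dx, dy]. Returns a 3x3 box-blurred copy."""
    h, w = len(disp), len(disp[0])
    out = [[[0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = [0.0, 0.0], 0
            # average the 3x3 neighbourhood, truncated at the borders
            for yy in range(max(0, y - 1), min(h, y + 2)):
                for xx in range(max(0, x - 1), min(w, x + 2)):
                    acc[0] += disp[yy][xx][0]
                    acc[1] += disp[yy][xx][1]
                    n += 1
            out[y][x] = [acc[0] / n, acc[1] / n]
    return out
```

Sharp edges in the displacement map are spread over neighbouring pixels, which is what softens the otherwise hard ripple boundary.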
According to the video processing method provided by this embodiment, a blurring operation is performed on the displacement map, and the pixel offset operation is performed on the current video frame based on the blurred displacement map, so that a hard pixel offset effect can be avoided, the transition between the offset pixels and other positions in the current video frame is more natural, and the video quality is improved.
Fig. 6 is a flow chart of a video processing method according to another embodiment of the disclosure, where, based on any of the foregoing embodiments, as shown in fig. 6, step 104 includes:
and 601, performing identification operation on a target area in the current video frame through a preset identification algorithm.
And 602, performing offset operation on pixels in the areas except the target area in the current video frame according to the displacement map to obtain the target video frame.
In this embodiment, when the ripple effect is generated based on the optical flow map, the ripple effect can be generated over the entire image. However, when the video to be processed includes a person, an animal, or the like, generating the ripple effect over the entire image may degrade the video effect. For example, a person's face may be occluded.
Therefore, in order to avoid occluding important areas while generating the ripple effect, the target area in the current video frame can be identified by a preset recognition algorithm. The recognition algorithm includes, but is not limited to, a face recognition algorithm, a preset object recognition algorithm, a gesture recognition algorithm, a limb recognition algorithm, and the like. Alternatively, the target area may be specified by the user according to actual requirements, which is not limited by the disclosure.
Further, after the target area is determined, the pixels in the area of the current video frame other than the target area may be subjected to an offset operation according to the displacement map, so as to obtain the target video frame.
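One hedged way to realize steps 601 and 602 is to clear the displacement map inside the recognized target area before the offset operation, so that pixels there receive a zero offset and remain undeformed. The rectangular bounding-box mask and the function name are assumptions; the recognition algorithm that produces the box is outside this sketch.

```python
def mask_target_region(disp, box):
    """disp: H x W list of (dx, dy); box = (x0, y0, x1, y1), half-open.
    Clears all offsets inside the box so the region is not deformed."""
    x0, y0, x1, y1 = box
    for y in range(y0, y1):
        for x in range(x0, x1):
            disp[y][x] = (0.0, 0.0)   # zero offset -> pixel keeps its position
    return disp
```

Applying the offset operation with this masked displacement map then leaves the target area (e.g. a detected face) unchanged while the surrounding region ripples.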
According to the video processing method provided by this embodiment, the pixels in the areas of the current video frame other than the target area are subjected to the offset operation according to the displacement map, so that the display content in the target area is not subjected to pixel offset and is therefore not deformed.
Further, on the basis of any one of the above embodiments, after step 601, the method further includes:
And performing a blurring operation on the target area through a preset blur algorithm to obtain a blurred target area.
Step 104 comprises:
And performing an offset operation on the pixels in the area of the current video frame other than the blurred target area according to the displacement map, to obtain the target video frame.
In this embodiment, after the target area is obtained, the pixel shift operation may be performed on the area other than the target area so that the other area exhibits the moire effect without the target area being changed.
However, performing the pixel offset operation only on the region other than the target region while leaving the target region unprocessed may result in a stiff boundary between the target region and the other regions, making the ripple effect insufficiently natural.
Therefore, a blurring operation can be performed on the target area through a preset blur algorithm to obtain a blurred target area. An offset operation is then performed, according to the displacement map, on the pixels in the area of the current video frame other than the blurred target area, so as to obtain the target video frame.
Any blur algorithm may be adopted to implement the blurring operation on the target area, which is not limited by the present disclosure.
Optionally, the blur algorithm may include preset blur parameters, and the degree of blurring may be adjusted by adjusting the blur parameters. An adjustment control for the blur parameters may be displayed at the front end, so that the user may adjust the blur effect based on the adjustment control, which is not limited in this disclosure.
According to the video processing method provided by this embodiment, the blurring operation is performed on the target area through the preset blur algorithm to obtain the blurred target area, so that the transition between the target area and the other display areas is smoother, avoiding a hard display effect.
Fig. 7 is a schematic flow chart of a video processing method according to another embodiment of the disclosure, where, on the basis of any of the foregoing embodiments, as shown in fig. 7, after step 103, the method further includes:
Step 701, determining, for a pixel in the displacement map, a target pixel in the current video frame that matches the pixel.
Step 702, performing an offset operation on the color channels of the target pixel according to the pixel offset corresponding to the pixel in the displacement map.
In this embodiment, in order to improve the realism of the video processing, while the ripple effect is presented, the ripple region is color-adjusted to achieve a visual effect in which the color changes with the ripple flow.
Alternatively, a chromatic aberration effect may be generated by offsetting different channels of an RGB image to different extents. Thus, after the displacement map is obtained, for each pixel in the displacement map, the target pixel in the current video frame corresponding to that pixel may be determined, and an offset operation may be performed on the color channels of the target pixel according to the pixel offset corresponding to the pixel.
For example, after the pixel offset P is obtained, the R channel may be offset by P, the G channel may be offset by 1.5 times P, and the B channel may be offset by 2 times P, so as to implement color adjustment based on the pixel offset, making the color transformation better fit the motion trend of the content body.
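The per-channel offsets in this example can be sketched as follows; nearest-pixel sampling, edge clamping, and the function name are assumptions, while the 1x/1.5x/2x channel scales follow the example above.

```python
def chromatic_offset(frame, disp, scales=(1.0, 1.5, 2.0)):
    """frame: H x W list of [R, G, B]; disp: H x W list of (dx, dy).
    Samples each colour channel at a differently scaled offset."""
    h, w = len(frame), len(frame[0])
    out = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = disp[y][x]
            for c, s in enumerate(scales):   # R at P, G at 1.5*P, B at 2*P
                sx = min(w - 1, max(0, x + int(round(dx * s))))
                sy = min(h - 1, max(0, y + int(round(dy * s))))
                out[y][x][c] = frame[sy][sx][c]
    return out
```

Because the three channels sample slightly different positions, colours fringe along the ripple, giving the colour-with-flow effect described in the text.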
According to the video processing method provided by this embodiment, the offset operation is performed on the color channels of the target pixel based on the pixel offset, so that the ripple region can be color-adjusted while the ripple effect is presented, improving the realism of the processed target video.
Fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure, as shown in fig. 8, where the apparatus includes: an acquisition module 81, a determination module 82, a rendering module 83, and an offset module 84. The obtaining module 81 is configured to obtain a video to be processed, and obtain motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed. A determining module 82 for determining at least one moving particle to be added based on the motion information. And the drawing module 83 is configured to draw the at least one motion particle on a preset canvas according to a preset drawing parameter, and obtain a displacement map, where the displacement map includes pixel offsets corresponding to a plurality of pixels in the current video frame. And the offset module 84 is configured to perform an offset operation on the pixels in the current video frame according to the displacement map, obtain a target video frame, and obtain a target video according to a plurality of target video frames corresponding to the video to be processed.
Further, on the basis of any one of the foregoing embodiments, the determining module is configured to: and determining at least one target pixel point of which the motion amplitude meets a preset condition in the current video frame based on the motion information. And adding moving particles at the position of the target pixel point, wherein the moving direction of the moving particles is the same as the moving direction of the target pixel point, the moving speed of the moving particles is in direct proportion to the moving amplitude of the pixel point, and a preset life cycle is set for the moving particles.
Further, on the basis of any one of the foregoing embodiments, the apparatus further includes: and the updating module is used for updating the display position and the life cycle of the moving particles according to a preset time period.
Further, on the basis of any one of the above embodiments, the update module is configured to: and updating the display position of the moving particles according to a preset time period based on the moving speed and the moving direction of the moving particles. And carrying out attenuation operation on the life cycle of the moving particles according to a preset attenuation speed until the life cycle of the moving particles is attenuated to a preset life cycle threshold.
Further, on the basis of any one of the above embodiments, the drawing module is configured to: and determining a plurality of target motion particles with the life cycle larger than a life cycle threshold value in a plurality of video frames corresponding to the video to be processed. And drawing the plurality of target moving particles on a preset canvas according to preset drawing parameters.
Further, on the basis of any one of the above embodiments, the drawing module is configured to: and determining the position of each moving particle as the center position of the circle. And drawing two concentric circles according to the circle center position, the preset first radius and the preset second radius. And carrying out assignment operation on pixels in the region where the hollow ring formed by the two concentric circles is located according to a preset assignment algorithm to obtain pixel offset corresponding to the pixels in the region where the hollow ring is located, and obtaining the displacement map.
Further, on the basis of any one of the above embodiments, the drawing module is configured to: and determining the polar coordinates corresponding to the pixels in the area of the hollow circular ring according to the Cartesian coordinates corresponding to the pixels in the area of the hollow circular ring. And carrying out assignment operation on the polar coordinates of the pixels in the region where the hollow circular ring is located based on a preset assignment algorithm.
Further, on the basis of any one of the foregoing embodiments, the offset module is configured to: determine, for a pixel in the displacement map, a target pixel in the current video frame that matches the pixel; and superpose the pixel offset corresponding to the pixel in the displacement map on the target pixel to obtain the target video frame.
Further, on the basis of any one of the foregoing embodiments, the apparatus further includes: a blurring module, configured to perform a blurring operation on the displacement map through a preset blur algorithm to obtain a blurred displacement map. The offset module is configured to: perform an offset operation on pixels in the current video frame according to the blurred displacement map to obtain the target video frame.
Further, on the basis of any one of the foregoing embodiments, the offset module is configured to: perform a recognition operation on the target area in the current video frame through a preset recognition algorithm; and perform an offset operation on the pixels in the areas of the current video frame other than the target area according to the displacement map to obtain the target video frame.
Further, on the basis of any one of the foregoing embodiments, the apparatus further includes: a blurring module, configured to perform a blurring operation on the target area through a preset blur algorithm to obtain a blurred target area. The offset module is configured to: perform an offset operation on the pixels in the area of the current video frame other than the blurred target area according to the displacement map to obtain the target video frame.
Further, on the basis of any one of the foregoing embodiments, the apparatus further includes: a determining module, configured to determine, for a pixel in the displacement map, a target pixel in the current video frame that matches the pixel; and the offset module is configured to perform an offset operation on the color channels of the target pixel according to the pixel offset corresponding to the pixel in the displacement map.
The device provided in this embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
In order to implement the above embodiments, the embodiments of the present disclosure further provide a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the video processing method according to any of the above embodiments.
To achieve the above embodiments, the embodiments of the present disclosure further provide a computer program product, including a computer program, which when executed by a processor implements the video processing method according to any of the above embodiments.
In order to achieve the above embodiments, the embodiments of the present disclosure further provide an electronic device, including: a processor and a memory;
The memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory, causing the processor to perform the video processing method as described in any of the embodiments above.
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the disclosure, where the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 9, the electronic apparatus 900 may include a processing device (e.g., a central processor, a graphics processor, or the like) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage device 908 into a random access Memory (Random Access Memory, RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 907 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication means 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 shows an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902. When executed by the processing device 901, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (e.g., via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the features described above, or of those features and technical features with similar functions disclosed in the present disclosure (but not limited thereto), without departing from the spirit of the disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (14)
1. A video processing method, comprising:
acquiring a video to be processed, and acquiring motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed;
determining at least one moving particle to be added based on the motion information;
drawing the at least one moving particle on a preset canvas according to preset drawing parameters to obtain a displacement map, wherein the displacement map comprises pixel offsets corresponding to a plurality of pixels in the current video frame;
and performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame, and obtaining a target video according to a plurality of target video frames corresponding to the video to be processed.
2. The method of claim 1, wherein the determining at least one moving particle to be added based on the motion information comprises:
determining, based on the motion information, at least one target pixel point in the current video frame whose motion amplitude meets a preset condition;
and adding moving particles at the position of the target pixel point, wherein the moving direction of the moving particles is the same as the moving direction of the target pixel point, the moving speed of the moving particles is in direct proportion to the motion amplitude of the target pixel point, and a preset life cycle is set for the moving particles.
3. The method of claim 1, wherein after determining at least one moving particle to be added based on the motion information, further comprising:
and updating the display position and the life cycle of the moving particles according to a preset time period.
4. The method according to claim 3, wherein the updating the display position and the life cycle of the moving particles according to a preset time period comprises:
updating the display position of the moving particles according to a preset time period based on the moving speed and the moving direction of the moving particles;
and carrying out attenuation operation on the life cycle of the moving particles according to a preset attenuation speed until the life cycle of the moving particles is attenuated to a preset life cycle threshold.
5. The method of claim 1, wherein drawing the at least one moving particle on a preset canvas according to preset drawing parameters to obtain a displacement map comprises:
determining the position of the moving particle as a circle center position;
drawing two concentric circles according to the circle center position, a preset first radius and a preset second radius;
and carrying out assignment operation on pixels in the region where the hollow ring formed by the two concentric circles is located according to a preset assignment algorithm to obtain pixel offset corresponding to the pixels in the region where the hollow ring is located, and obtaining the displacement map.
6. The method according to claim 5, wherein the performing the assignment operation on the pixels in the region where the hollow ring formed by the two concentric circles is located according to a preset assignment algorithm comprises:
determining polar coordinates corresponding to the pixels in the region where the hollow ring is located according to the Cartesian coordinates corresponding to those pixels;
and performing the assignment operation on the polar coordinates of the pixels in the region where the hollow ring is located based on a preset assignment algorithm.
7. The method according to claim 1, wherein said performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame comprises:
for a pixel in the displacement map, determining a target pixel in the current video frame that matches the pixel;
and superposing the pixel offset corresponding to the pixel in the displacement map on the target pixel to obtain the target video frame.
8. The method according to any one of claims 1-7, wherein after the drawing the at least one moving particle on a preset canvas according to preset drawing parameters to obtain the displacement map, the method further comprises:
performing a blurring operation on the displacement map through a preset blurring algorithm to obtain a blurred displacement map;
wherein the performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame comprises:
performing an offset operation on pixels in the current video frame according to the blurred displacement map to obtain the target video frame.
9. The method according to any one of claims 1-7, wherein the performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame comprises:
performing an identification operation on a target area in the current video frame through a preset identification algorithm;
and performing an offset operation on pixels in the area other than the target area in the current video frame according to the displacement map to obtain the target video frame.
10. The method according to claim 9, wherein after the identifying of the target area in the current video frame through a preset identification algorithm, the method further comprises:
performing a blurring operation on the target area through a preset blurring algorithm to obtain a blurred target area;
wherein the performing an offset operation on pixels in the area other than the target area in the current video frame according to the displacement map to obtain a target video frame comprises:
performing an offset operation on pixels in the area other than the blurred target area in the current video frame according to the displacement map to obtain the target video frame.
11. The method according to any one of claims 1-7, wherein after the drawing the at least one moving particle on a preset canvas according to preset drawing parameters to obtain the displacement map, the method further comprises:
for a pixel in the displacement map, determining a target pixel in the current video frame that matches the pixel;
and performing an offset operation on the color channel of the target pixel according to the pixel offset corresponding to the pixel in the displacement map.
12. A video processing apparatus, comprising:
an acquisition module, used for acquiring a video to be processed and acquiring motion information corresponding to a current video frame based on the current video frame and a previous video frame of the video to be processed;
a determining module for determining at least one moving particle to be added based on the motion information;
a drawing module, used for drawing the at least one moving particle on a preset canvas according to preset drawing parameters to obtain a displacement map, wherein the displacement map comprises pixel offsets corresponding to a plurality of pixels in the current video frame;
and an offset module, used for performing an offset operation on pixels in the current video frame according to the displacement map to obtain a target video frame, and obtaining a target video according to a plurality of target video frames corresponding to the video to be processed.
13. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executing computer-executable instructions stored in the memory, causing the processor to perform the video processing method of any one of claims 1 to 11.
14. A computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the video processing method of any of claims 1 to 11.
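For illustration only, the particle-spawning step of claims 1 and 2 can be sketched in Python as follows. The per-pixel magnitude and direction arrays, the threshold value, and the dictionary layout of a particle are assumptions made for this sketch; the claims do not prescribe a concrete representation.

```python
import numpy as np

def spawn_particles(motion_mag, motion_dir, threshold=0.5, life=1.0):
    """Add a moving particle at every pixel whose motion amplitude meets
    the preset condition (here: exceeds `threshold`).

    The particle moves in the pixel's motion direction, its speed is
    proportional to the motion amplitude, and it starts with a preset
    life cycle.
    """
    ys, xs = np.nonzero(motion_mag > threshold)
    particles = []
    for y, x in zip(ys, xs):
        speed = float(motion_mag[y, x])   # speed proportional to amplitude
        angle = float(motion_dir[y, x])   # direction equals pixel's motion direction
        particles.append({
            "x": float(x), "y": float(y),
            "vx": speed * np.cos(angle),
            "vy": speed * np.sin(angle),
            "life": life,
        })
    return particles
```

In practice the magnitude and direction arrays would typically come from a dense optical flow field computed between the previous and current video frames.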
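The display-position update and life-cycle attenuation of claims 3 and 4 can be sketched as a plain update loop. The time step, decay speed, and life-cycle threshold below are illustrative defaults, not values fixed by the claims.

```python
def update_particles(particles, dt=1.0, decay=0.1, life_threshold=0.0):
    """Advance each particle's display position by its velocity and
    attenuate its life cycle at the preset decay speed, dropping the
    particle once the life cycle reaches the preset threshold."""
    alive = []
    for p in particles:
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        p["life"] -= decay * dt
        if p["life"] > life_threshold:
            alive.append(p)
    return alive
```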
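Claims 5 and 6 draw each particle as a hollow ring between two concentric circles and assign offsets via polar coordinates. The sketch below assumes the assignment algorithm writes a radial, outward-pointing offset scaled by a strength factor; the claims only require some preset assignment rule expressed on polar coordinates.

```python
import numpy as np

def draw_annulus_offsets(canvas, cx, cy, r_inner, r_outer, strength=1.0):
    """Write pixel offsets into the annulus between two concentric circles.

    Pixels whose polar radius falls inside the hollow ring receive an
    offset pointing outward from the circle center, scaled by `strength`.
    Channel 0 holds the dy component, channel 1 the dx component.
    """
    h, w = canvas.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)          # polar radius of each pixel
    theta = np.arctan2(dy, dx)    # polar angle of each pixel
    ring = (r >= r_inner) & (r <= r_outer)
    canvas[ring, 0] = strength * np.sin(theta[ring])  # dy component
    canvas[ring, 1] = strength * np.cos(theta[ring])  # dx component
    return canvas
```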
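The offset operation of claim 7 can be sketched as a gather: for each pixel of the displacement map, the matching pixel of the current frame is looked up and the offset superposed. Reading the offset as a sampling vector (rather than a push vector) and clamping at the frame borders are implementation choices of this sketch.

```python
import numpy as np

def apply_displacement(frame, disp):
    """Offset each pixel of `frame` by the (dy, dx) vector stored at the
    matching position of the displacement map `disp`, clamping source
    coordinates to the frame borders."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + disp[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs + disp[..., 1].round().astype(int), 0, w - 1)
    return frame[src_y, src_x]
```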
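For the blurring operation of claim 8, a Gaussian blur is the more common choice of preset blurring algorithm; the simple box blur below keeps the sketch dependency-free while still smoothing the offsets in the displacement map.

```python
import numpy as np

def box_blur(disp, k=3):
    """Box-blur each channel of a displacement map with a k x k kernel,
    using edge padding so the output keeps the input's shape."""
    pad = k // 2
    padded = np.pad(disp, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(disp, dtype=np.float32)
    h, w = disp.shape[:2]
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```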
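Claim 11 offsets only a color channel of the matched target pixel, which yields a chromatic-fringe style of effect. A sketch, assuming the channel index is a preset parameter and reusing the sampling interpretation of the offset:

```python
import numpy as np

def offset_color_channel(frame, disp, channel=0):
    """Shift a single color channel of `frame` by the displacement map,
    leaving the other channels untouched. Which channel is shifted is an
    assumed choice, not fixed by the claims."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + disp[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs + disp[..., 1].round().astype(int), 0, w - 1)
    out = frame.copy()
    out[..., channel] = frame[src_y, src_x, channel]
    return out
```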
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311651245.6A CN117651199A (en) | 2023-12-04 | 2023-12-04 | Video processing method, apparatus, device, computer readable storage medium and product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117651199A true CN117651199A (en) | 2024-03-05 |
Family
ID=90044798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311651245.6A Pending CN117651199A (en) | 2023-12-04 | 2023-12-04 | Video processing method, apparatus, device, computer readable storage medium and product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117651199A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||