CN115550563A - Video processing method, video processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN115550563A
CN115550563A
Authority
CN
China
Prior art keywords
target
time
video frame
video
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211108463.0A
Other languages
Chinese (zh)
Inventor
杜孟林
那强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202211108463.0A priority Critical patent/CN115550563A/en
Publication of CN115550563A publication Critical patent/CN115550563A/en
Priority to PCT/CN2023/118362 priority patent/WO2024055967A1/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 — Details of television systems
    • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to a video processing method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring an original video and camera motion data collected by a camera during its motion; determining, from the camera motion data, the target time at which the camera reaches a target point during the motion; extracting the target video frame corresponding to the target time from the original video; and rotating the target video frame according to the target time and a preset duration to obtain a rotated video. The method improves the visual effect of the picture.

Description

Video processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, panoramic cameras have become increasingly widespread. A panoramic camera can shoot panoramic video, whose advantage is a very wide viewing angle; for example, a panoramic video can cover a 360-degree field of view.
However, a display presents a planar image, so how to fully and effectively present a 360-degree panoramic video, and thereby improve the user experience, has become an active area of research.
Disclosure of Invention
Based on this, it is necessary to provide a video processing method, an apparatus, a computer device, and a computer-readable storage medium for solving the technical problem of how to sufficiently and effectively present a panoramic video.
In a first aspect, the present application provides a video processing method. The method comprises the following steps:
acquiring an original video and camera motion data acquired by a camera in a motion process;
determining target time when the camera reaches a target point in the motion process according to the camera motion data;
extracting a target video frame corresponding to the target moment from the original video;
and rotating the target video frame according to the target time and a preset duration to obtain a rotated video.
In a second aspect, the present application further provides a video processing apparatus. The device comprises:
the acquisition module is used for acquiring an original video and camera motion data acquired by a camera in a motion process;
the determining module is used for determining, according to the camera motion data, the target moment when the camera reaches a target point in the motion process;
the extraction module is used for extracting a target video frame corresponding to the target moment from the original video;
and the rotation processing module is used for performing rotation processing on the target video frame according to the target time and the preset duration to obtain a rotation video.
In one embodiment, the apparatus further comprises:
the calculation module is used for calculating a yaw angle and a pitch angle corresponding to each time point of the target video frame in a preset time length according to the target time and the preset time length;
and the rotation processing module is further configured to perform rotation processing on the target video frame based on the yaw angle and the pitch angle to obtain a rotation video.
In one embodiment, the determining module is further configured to:
determining a first acceleration and a second acceleration of the camera in the motion process according to the camera motion data;
and determining the target time when the camera reaches a target point in the motion process according to the first time corresponding to the first acceleration and the second time corresponding to the second acceleration.
In one embodiment, the rotation processing module is further configured to:
taking the target time as a rotation starting time, and determining a rotation ending time according to the rotation starting time and a preset time length;
determining a rotation period according to the rotation starting time and the rotation ending time;
calculating the ratio of each time point in the preset time length to the rotation period;
calculating a yaw angle corresponding to each time point of the target video frame in the preset duration based on the ratio and the initial yaw angle;
and calculating the pitch angle corresponding to each time point of the target video frame in the preset time length based on the ratio and the initial pitch angle.
In one embodiment, the time points within the preset duration form a time point set comprising a first time point set and a second time point set. The first time point set consists of the first time points before the target video frame is rotated to a first target position; the second time point set consists of the second time points while the target video frame is rotated from the first target position to a second target position. The rotation processing module is further configured to:
based on the yaw angle and the pitch angle corresponding to the first time point, enabling the target video frame to rotate towards the upper left direction or the upper right direction;
copying the target video frames rotated at each first time point to obtain a first video frame sequence;
when the target video frame rotates to a first target position, enabling the target video frame to rotate to the right lower direction or the left lower direction until the target video frame rotates to a second target position on the basis of the yaw angle and the pitch angle corresponding to the second time point;
copying the target video frame rotated at each second time point to obtain a second video frame sequence;
generating a rotated video based on the first sequence of video frames and the second sequence of video frames.
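As an illustrative sketch only (the function name and the `rotate` callback are hypothetical, not the patent's implementation), the two-phase generation above — copying the rotated target frame at each first time point, then at each second time point, and concatenating the results — could be organized as:

```python
def build_rotated_video(frame, angles_phase1, angles_phase2, rotate):
    """Build a rotated video from a single target video frame.

    frame          -- the target video frame
    angles_phase1  -- (yaw, pitch) pairs for the first time points
    angles_phase2  -- (yaw, pitch) pairs for the second time points
    rotate         -- callback (frame, yaw, pitch) -> rotated frame copy
    """
    # Phase 1: rotate toward the first target position, copying each result.
    first_sequence = [rotate(frame, yaw, pitch) for yaw, pitch in angles_phase1]
    # Phase 2: continue rotating from the first to the second target position.
    second_sequence = [rotate(frame, yaw, pitch) for yaw, pitch in angles_phase2]
    # The rotated video is the concatenation of the two frame sequences.
    return first_sequence + second_sequence
```

In practice `rotate` would re-render the panoramic frame through the spherical model at the given yaw and pitch; here it is left abstract.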
In one embodiment, the time points within the preset duration constitute a set of time points; the set of time points comprises a third set of time points; the third time point set consists of all third time points in the process of horizontally rotating the target video frame at the highest point of the motion process; the rotation processing module is further configured to:
when the target video frame rotates to the highest point, enabling the target video frame to rotate in the horizontal direction based on the yaw angle and the pitch angle corresponding to the third time point; when the target video frame rotates in the horizontal direction, the rotation speed component in the vertical direction is zero.
In one embodiment, the apparatus further comprises:
the identification module is used for identifying a tracking target in a pre-acquired video;
the acquisition module is further configured to acquire tracking data corresponding to the tracking target;
the adjusting module is used for adjusting the visual angle of the original video according to the tracking data to obtain the adjusted original video;
the extracting module is further configured to extract a target video frame corresponding to the target time from the adjusted original video.
In one embodiment, the apparatus further comprises:
the inserting module is used for inserting the rotary video into the original video according to the target moment to obtain a special effect video;
and the playing module is used for playing the special effect video in response to a playing instruction for the special effect video.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring an original video and camera motion data acquired by a camera in a motion process;
determining target time when the camera reaches a target point in the motion process according to the camera motion data;
extracting a target video frame corresponding to the target moment from the original video;
and rotating the target video frame according to the target time and a preset duration to obtain a rotated video.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an original video and camera motion data acquired by a camera in a motion process;
determining target time when the camera reaches a target point in the motion process according to the camera motion data;
extracting a target video frame corresponding to the target moment from the original video;
and rotating the target video frame according to the target time and the preset duration to obtain a rotated video.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring an original video and camera motion data acquired by a camera in a motion process;
determining a target moment when the camera reaches a target point in the motion process according to the camera motion data;
extracting a target video frame corresponding to the target moment from the original video;
and rotating the target video frame according to the target time and a preset duration to obtain a rotated video.
With the video processing method, apparatus, computer device, storage medium and computer program product, the original video and the camera motion data collected by the camera during its motion are acquired, and the target time at which the camera reaches a target point is determined from the camera motion data. The target video frame corresponding to the target time is then extracted from the original video. The target video frame can therefore be determined accurately from the camera motion data, ensuring the accuracy of the determined target video frame. The target video frame is then rotated according to the target time and the preset duration to obtain a rotated video. Because the computer device can accurately determine, from the camera motion data, the target point reached by the camera during its motion, the efficiency of determining the target point is improved and the visual effect of the rotated video is improved.
Drawings
FIG. 1 is a diagram of an exemplary video processing application;
FIG. 2 is a flow diagram of a video processing method in one embodiment;
FIG. 3 is a diagram illustrating rotation of a target video frame rendered in a spherical model, according to one embodiment;
FIG. 4 is a flow diagram illustrating a method for determining a target time in one embodiment;
FIG. 5 is a schematic flow chart diagram of a method for calculating yaw and pitch angles in one embodiment;
FIG. 6 is a flowchart illustrating a method for adjusting a viewing angle of an original video according to an embodiment;
FIG. 7 is a flow diagram illustrating a video processing method according to another embodiment;
FIG. 8 is a block diagram showing the structure of a video processing apparatus according to one embodiment;
FIG. 9 is a block diagram showing the structure of a video processing apparatus according to one embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The video processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The terminal 102 acquires an original video and camera motion data acquired by a camera in a motion process; determining target time when the camera reaches a target point in the motion process according to the camera motion data; extracting a target video frame corresponding to a target moment from an original video; calculating a yaw angle and a pitch angle corresponding to each time point of a target video frame in a preset time length according to the target time and the preset time length; and rotating the target video frame based on the yaw angle and the pitch angle to obtain a rotating video. The terminal 102 may be, but is not limited to, a camera, a video camera, a smart phone, or a portable wearable device. The camera may be a general camera, a panoramic camera, a motion camera, or the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like.
It should be noted that fully and effectively presenting a 360-degree panoramic video is not straightforward.
One possible approach is to generate the rotated video from a single frame of the video shot by the panoramic camera. For example, the video shot by the panoramic camera is inspected manually, the video frame shot at the target point is found, and that manually identified frame is rotated so that the picture information at the target point (for example, the highest point) is fully displayed, producing a freeze-frame visual effect.
In this manual workflow, the user must learn a large amount of panoramic-video editing knowledge and operations, such as keyframes, rotation or marking points, which incurs substantial learning and labor cost. Moreover, the manually identified target point is often inaccurate, resulting in a poor visual effect in the resulting rotated video (or in the original video with the rotated video inserted). For example, when a user edits a throw shot (throwing the camera up while it records a piece of video), the video frame at the highest point is needed.
If the time point of the selected video frame is not the highest point, the highest and widest field of view offered by the highest point is lost. It also means the camera is still ascending in the next frame of the original video while the end of the rotated video descends; when the rotated video is inserted into the original video, this produces a very noticeable visual jolt, and the resulting composite video looks poor.
In order to solve the technical problem of how to obtain a better rotated video (or an original video inserted with the rotated video), in an embodiment, as shown in fig. 2, a video processing method is provided, which is exemplified by the application of the method to the terminal in fig. 1, and includes the following steps:
s202, acquiring an original video and camera motion data acquired by a camera in a motion process.
Here, the motion means that the camera leaves its starting position at some initial speed and then moves under gravity or other applied forces, for example a throwing motion or a horizontal motion. The throwing motion may be, for example, a vertical upward throw, a vertical downward throw, a horizontal throw or an oblique throw. For example, when a user throws the camera vertically upward, the camera's motion comprises: first moving upward until its vertical velocity is zero, then moving downward until it lands at the target position. As another example, when the camera is ejected horizontally by an elastic member (e.g., a spring), its motion comprises: first moving away from the starting point to the farthest point, then returning to the starting point under the elastic force. During this motion the camera collects raw data, which comprises the original video and the camera motion data. The original video is the video collected by the camera during the motion and may be a panoramic video. A panoramic video is a video whose viewing angle exceeds the normal effective viewing angle of the human eye, and includes black-and-white video, color video and the like. For example, a panoramic video is a video whose horizontal viewing angle exceeds 90 degrees and whose vertical viewing angle exceeds 70 degrees.
And S204, determining the target moment when the camera reaches the target point in the motion process according to the camera motion data.
The camera motion data records the motion of the camera. It may be data collected by a gyroscope in the camera, comprising the camera's accelerations along the X, Y and Z axes, or an acceleration synthesized from the accelerations along those three axes. The target point may be any point the camera reaches during the motion; for example, the highest point reached during an upward throw, a point where the camera's vertical velocity is zero, or a point where its horizontal velocity is zero. The target time is the time at which the camera reaches the target point. For example, the target time is 3 minutes 4 seconds into the original video, or 11 hours, 3 minutes and 15 seconds.
S206, extracting the target video frame corresponding to the target time from the original video.
The target video frame is a video frame collected when the camera moves to a target point. For example, the target video frame is a video frame captured when the camera moves to the highest point. For another example, the target video frame is a video frame captured when the camera moves to a vertical direction at a speed of zero. For another example, the target video frame is a video frame captured when the camera moves to a horizontal direction speed of zero.
In one embodiment, S206 specifically includes: rendering the original video according to the three-dimensional sphere model, and extracting a target video frame corresponding to a target moment from the rendered spherical panoramic video.
And S208, rotating the target video frame according to the target time and the preset duration to obtain a rotated video.
The preset duration is the duration of the generated video special effect, and can be any numerical value. For example, the preset time period is 8 seconds, 6 seconds, or 0.1 minute, etc. The rotation processing is a processing method of rotating the view angle corresponding to the target video frame. For example, as shown in fig. 3, the rotation process may be performed in such a manner that the virtual camera at the center of the spherical model is rotated at an angle at which the virtual camera looks at the target video frame. When the terminal performs the rotation processing on the target video frame, the target video frame may have a rotation speed component in the horizontal direction or the vertical direction, or the target video frame may have a rotation speed component in both the horizontal direction and the vertical direction.
In the embodiment, the original video acquired by the camera in the motion process is acquired, and the target time when the camera reaches the target point in the motion process is determined according to the camera motion data extracted from the original video, so that the target point can be accurately determined according to the camera motion data, and the accuracy of determining the target point is improved. And extracting a target video frame corresponding to the target moment from the original video. And then, according to the target time and the preset duration, rotating the target video frame to obtain a rotating video. The computer equipment can accurately determine the target point reached by the camera in the motion process according to the camera motion data, so that the efficiency and the accuracy of determining the target point are improved, a target video frame is accurately acquired, and the visual effect of the rotating video is improved.
In an embodiment, S208 specifically includes: calculating a yaw angle and a pitch angle corresponding to each time point of the target video frame in the preset duration according to the target time and the preset duration; and rotating the target video frame based on the yaw angle and the pitch angle to obtain a rotating video.
The yaw angle is the angle of rotation of the visual angle corresponding to the target video frame around the Z axis. For example, the yaw angle is 30 degrees, 45 degrees, 60 degrees, or the like. The pitch angle is the angle at which the view angle corresponding to the target video frame rotates around the X-axis. For example, the pitch angle is 15 degrees, 20 degrees, 25 degrees, or the like. For example, as shown in fig. 3, the target video frame is a three-dimensional panoramic video frame rendered in a spherical model, and a corresponding view angle of the target video frame is a view angle of the target video frame observed by a virtual camera in the spherical model. When the view angle corresponding to the target video frame moves from point a to point B, the yaw angle changes corresponding to the pitch angle.
The terminal calculates, from the target time and the preset duration, the yaw angle and pitch angle corresponding to each time point of the target video frame within the preset duration. For example, at the target time t0 the viewing angle corresponding to the target video frame looks at point A; the terminal then rotates that viewing angle over the preset duration (denoted L), and computes the rotation end time point as t3 = t0 + L from the target time and the preset duration. The terminal makes the viewing angle look at point B at time t1, at point C at time t2, and at point D at time t3. From t0 and t3, the terminal can calculate the yaw angle and pitch angle of the target video frame at time points t1, t2 and t3 respectively.
In this embodiment, the terminal calculates the yaw and pitch angles of the target video frame at each time point within the preset duration from the target time and the preset duration, and rotates the target video frame based on those angles to obtain the rotated video. The yaw and pitch angles can thus be determined accurately from the target time and the preset duration, ensuring their precision and improving the stability of the rotation effect. Moreover, because the target point is determined from the motion data and combined with the subsequent frame-rotation operation, the video is edited automatically: users need not learn extensive panoramic-video editing knowledge, which effectively reduces the learning cost of panoramic video editing.
In one embodiment, as shown in fig. 4, S204 specifically includes the following steps:
s402, determining a first acceleration and a second acceleration of the camera in the motion process according to the camera motion data.
The first acceleration is the maximum acceleration of the camera in the motion process. For example, when the camera moves based on the upward throwing motion of the user, the first acceleration is the acceleration of the camera at the moment of being thrown, and the direction of the first acceleration is the same as the moving direction of the camera, that is, the first acceleration is a positive value. For another example, when the camera is horizontally ejected by an elastic member (e.g., a spring) connected thereto, the first acceleration is an acceleration of the camera at the time of being ejected. The second acceleration is the minimum acceleration of the camera in the motion process, and the direction of the second acceleration is opposite to the motion direction of the camera, namely the second acceleration is a negative value. For example, when the camera moves based on the upward motion of the user, the second acceleration is an acceleration at the end of the moving process of the camera. For another example, when the camera is horizontally ejected by a connected elastic member (e.g., a spring), the second acceleration is an acceleration at which the camera returns to the departure point based on a pulling force of the elastic member.
In one embodiment, the first acceleration and the second acceleration are accelerations obtained by synthesizing accelerations in three directions of an X axis, a Y axis and a Z axis.
S404, determining the target time when the camera reaches the target point in the moving process according to the first time corresponding to the first acceleration and the second time corresponding to the second acceleration.
The first moment is the moment when the camera has the first acceleration. For example, when the camera moves based on the user's upward motion, the first time is the time when the camera is thrown up. For another example, when the camera is horizontally ejected by an elastic member (e.g., a spring), the first timing is a timing at which the camera is ejected. The second time is a time when the camera has a second acceleration. For example, when the camera moves based on the upward motion of the user, the second time is a time when the camera movement process ends. For another example, when the camera is horizontally ejected by the connected elastic member (e.g., a spring), the second timing is a timing when the camera returns to the departure point based on the pulling force of the elastic member.
In one embodiment, the target point is the highest point of the camera's upward motion, or the farthest point reached when the camera is ejected horizontally by the elastic member. The target time is the midpoint between the first time and the second time, and the terminal calculates it according to formula (1), where T0 is the target time, T1 the first time and T2 the second time:
T0 = (T2 + T1) / 2    (1)
In the above embodiment, the first acceleration and the second acceleration of the camera during the motion process are determined according to the camera motion data. And determining the target time when the camera reaches the target point in the motion process according to the first time corresponding to the first acceleration and the second time corresponding to the second acceleration. And the target time when the camera reaches the target point is accurately determined according to the first acceleration and the second acceleration, so that a target video frame can be accurately extracted from the original video according to the target time, and the visual effect of the obtained special effect picture is ensured.
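A minimal sketch of S402–S404 (not part of the patent text; the function name is hypothetical), under the assumption that the first acceleration is the maximum sample and the second the minimum sample of the signed acceleration trace, with the target time taken as the midpoint per formula (1):

```python
def find_target_time(timestamps, accelerations):
    """Target time per formula (1): the midpoint of the first time
    (maximum acceleration, e.g. the instant of the throw) and the
    second time (minimum acceleration, e.g. the end of the motion)."""
    # First acceleration: the maximum sample, directed along the motion.
    i1 = max(range(len(accelerations)), key=lambda i: accelerations[i])
    # Second acceleration: the minimum sample, opposed to the motion.
    i2 = min(range(len(accelerations)), key=lambda i: accelerations[i])
    t1, t2 = timestamps[i1], timestamps[i2]
    return (t2 + t1) / 2  # formula (1)
```

For a throw that starts at t = 0 s and ends at t = 4 s, this returns 2 s, the assumed apex.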
In one embodiment, the target point is the highest point reached by the camera during the upward motion. The camera cannot be shielded when reaching the highest point, so that the camera can shoot in the best view field, and the target video frame corresponding to the highest point has the best picture visual effect, so that the rotating video obtained by rotating the target video frame has good visual effect, and the visual effect of the rotating video is improved. In addition, when the highest point is manually determined, the determined highest point may have an error and is not the actual highest point of the camera motion, if the target video frame corresponding to the manually determined highest point is rotated, the target video frame is rotated upwards and then downwards, and the rotated video obtained through the rotation processing is inserted into the original video, so that the special effect video is obtained. In the special effect video, when the rotation video rotates downwards and returns to the original video, the original video continues to rotate upwards, that is, the rotation video cannot be smoothly combined with the original video, and the visual effect is poor. The highest point of the camera motion process can be accurately determined through the camera motion data, so that the rotating video and the original video can be smoothly combined, and the special visual effect of the video is improved.
In one embodiment, as shown in fig. 5, S208 specifically includes the following steps:
s502, taking the target time as the rotation starting time, and determining the rotation ending time according to the rotation starting time and the preset time length.
The rotation starting time is the time when the rotation processing of the target video frame is started. For example, the terminal may zero the time of the rotation start time to zero the rotation start time. For another example, the terminal may determine the rotation start time according to the time when the target video frame appears in the original video. Specifically, if the target video frame appears in the original video for the 6.05 th second, the rotation start time is determined to be the 6.05 th second.
In an embodiment, S502 specifically includes: and calculating the sum of the rotation starting time and the preset time length, and taking the obtained sum as the rotation ending time.
And S504, determining a rotation period according to the rotation starting time and the rotation ending time.
The rotation starting time and the rotation ending time are defined as a total period, and then the total period may include a plurality of rotation periods, and the durations of the plurality of rotation periods may be the same or different, and are specifically set according to the requirement of the actual rotation effect. For example, when the rotation is performed in two stages, one rotation period may be equal to half the total period, and when the rotation is performed in three stages, the rotation period may be one-third of the total period. It will be appreciated that the duration of the rotation periods may also be unequal.
In one embodiment, S504 specifically includes: the terminal calculates the difference between the rotation ending time and the rotation starting time, and then divides this difference by 2 to obtain the rotation period. Specifically, the terminal calculates the rotation period according to formula (2), where halftime is the rotation period, endtime is the rotation ending time, and starttime is the rotation starting time.
halftime = (endtime - starttime) / 2    (2)
S506, calculating the ratio of each time point in the preset time length to the rotation period.
The terminal calculates the ratio of each time point within the preset time length to the rotation period; the calculated ratio is t/halftime, where t may be each time point within the preset time length.
And S508, calculating the yaw angle corresponding to each time point of the target video frame in the preset duration based on the ratio and the initial yaw angle.
The initial yaw angle is a preset yaw angle; for example, the initial yaw angle may be the yaw angle corresponding to the view angle of the target video frame at the rotation start time. In one embodiment, S508 specifically includes: the terminal calculates the product of the ratio and 2π, and then adds the product to the initial yaw angle to obtain the yaw angle corresponding to each time point of the target video frame within the preset duration. Specifically, the terminal calculates the yaw angle according to formula (3), where Yaw₀ is the initial yaw angle, t may be each time point within the preset duration, and halftime is the rotation period.
Yaw = Yaw₀ + (t/halftime) × 2π    (3)
And S510, calculating the pitch angle corresponding to each time point of the target video frame in the preset time length based on the ratio and the initial pitch angle.
The initial pitch angle is a preset pitch angle; for example, the initial pitch angle is the pitch angle corresponding to the view angle of the target video frame at the rotation start time. In one embodiment, S510 specifically includes: the terminal calculates the product of the ratio and π/2, takes the sine of the product, and then multiplies the resulting sine value by a preset parameter (which may be, for example, 0.058). Finally, the value obtained by multiplying by the preset parameter is added to the initial pitch angle to obtain the pitch angle corresponding to each time point of the target video frame within the preset duration. Specifically, the terminal calculates the pitch angle according to formula (4), where Pitch₀ is the initial pitch angle, t may be each time point within the preset duration, and halftime is the rotation period.

Pitch = Pitch₀ + 0.058 × sin((t/halftime) × π/2)    (4)
In the above embodiment, the target time is taken as the rotation start time, and the rotation end time is determined according to the rotation start time and the preset duration. A rotation period is determined from the rotation start time and the rotation end time, and the ratio of each time point within the preset duration to the rotation period is calculated. The yaw angle corresponding to each time point of the target video frame within the preset duration is then calculated based on the ratio and the initial yaw angle, and the pitch angle corresponding to each time point is calculated based on the ratio and the initial pitch angle. Therefore, the target video frame can be rotated at a uniform speed according to the accurately determined yaw angle and pitch angle, which ensures the stability of the rotation effect and improves the visual effect of the rotating video. In addition, when the target video frame is rotated according to the pitch angle and the yaw angle, their combined effect first rotates the target video frame at a uniform speed in the upper left or upper right direction, then to the target position, and then in the lower left or lower right direction, ensuring that the rotating video has the best visual effect.
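As a concrete illustration, the per-time-point yaw and pitch schedule of formulas (2) to (4) can be sketched as follows (a minimal sketch, not the claimed implementation; the frame rate, the sin((t/halftime) × π/2) form of formula (4), and the 0.058-radian amplitude are assumptions drawn from the surrounding description):

```python
import math

def rotation_schedule(start_time, preset_duration, yaw0=0.0, pitch0=0.0,
                      fps=30, amplitude=0.058):
    """Yaw/pitch for each time point in the preset duration (formulas (2)-(4))."""
    end_time = start_time + preset_duration        # S502: end = start + duration
    halftime = (end_time - start_time) / 2         # formula (2)
    schedule = []
    for i in range(int(preset_duration * fps) + 1):
        t = i / fps                                # time point within the duration
        ratio = t / halftime                       # S506: ratio to rotation period
        yaw = yaw0 + ratio * 2 * math.pi           # formula (3)
        # formula (4): assumed sinusoidal form, peaking at `amplitude` radians
        # at the middle of the total duration and returning to pitch0 at the end
        pitch = pitch0 + amplitude * math.sin(ratio * math.pi / 2)
        schedule.append((t, yaw, pitch))
    return schedule
```

With a 2-second duration, the view yaws through two full turns while the pitch rises by 0.058 radians toward the midpoint and then returns, matching the up-then-down motion described above.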
In one embodiment, the time points within the preset duration form a time point set, and the time point set comprises a first time point set and a second time point set. The first time point set consists of the first time points before the target video frame is rotated to the first target position; the second time point set consists of the second time points between the rotation of the target video frame to the first target position and its rotation to the second target position. S210 specifically includes: rotating the target video frame in the upper left or upper right direction based on the yaw angle and the pitch angle corresponding to each first time point; copying the target video frame rotated at each first time point to obtain a first video frame sequence; when the target video frame rotates to the first target position, rotating the target video frame in the lower right or lower left direction based on the yaw angle and the pitch angle corresponding to each second time point until the target video frame rotates to the second target position; copying the target video frame rotated at each second time point to obtain a second video frame sequence; and generating a rotating video based on the first video frame sequence and the second video frame sequence.
The first target position is a preset rotation position; for example, it may be the plane in which the pitch angle of the view angle corresponding to the target video frame deviates upward by 0.058 radians from the initial pitch angle. The second target position is the position at which the view direction corresponding to the target video frame points at the tracking target.
The terminal first rotates the target video frame in the upper left or upper right direction based on the yaw angle and the pitch angle corresponding to each first time point in the first time point set; that is, the view angle corresponding to the target video frame rotates in the upper left or upper right direction until it reaches the plane of the first target position. At each first time point of the rotation, the terminal copies the target video frame rotated at that time point, and the video frames copied at the first time points are combined to obtain the first video frame sequence. The terminal then rotates the target video frame in the lower right or lower left direction based on the yaw angle and the pitch angle corresponding to each second time point in the second time point set until the target video frame rotates to the second target position. At each second time point of the rotation, the terminal copies the target video frame rotated at that time point, and the video frames copied at the second time points are combined to obtain the second video frame sequence. Finally, the terminal generates the rotating video based on the first video frame sequence and the second video frame sequence.
In the above embodiment, the terminal rotates the target video frame in the upper left or upper right direction according to the yaw angle and the pitch angle, and then rotates it in the lower right or lower left direction, which ensures the stability of the rotation effect and improves the visual effect of the rotating video. Because the target video frame is rotated obliquely upward and then obliquely downward, a look-around effect is produced through oblique rotation. Compared with an alternative look-around scheme (rotating vertically upward, then horizontally, and finally vertically downward, with pauses at the joints between the three stages), the rotation of this embodiment is visually smoother, improving the visual effect of the rotation.
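The copy-while-rotating procedure above can be sketched as follows (a minimal sketch; the `rotate` renderer is a hypothetical placeholder for whatever re-projects the panoramic frame at a given yaw/pitch view angle):

```python
def build_rotated_video(target_frame, first_points, second_points, rotate):
    """Build the rotating video from a single target frame.

    `first_points` / `second_points` are (yaw, pitch) pairs for the first and
    second time point sets; `rotate(frame, yaw, pitch)` is assumed to return a
    copy of the frame rendered at that view angle.
    """
    first_seq = [rotate(target_frame, yaw, pitch) for yaw, pitch in first_points]
    second_seq = [rotate(target_frame, yaw, pitch) for yaw, pitch in second_points]
    # the rotating video is the first sequence followed by the second sequence
    return first_seq + second_seq
```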
In one embodiment, the time point set composed of time points within the preset time length further includes a third time point set; the third time point set consists of all time points which enable the target video frame to horizontally rotate at the highest point of the motion process; when the target video frame rotates to the highest point, the target video frame rotates in the horizontal direction based on the yaw angle corresponding to the third time point; when the target video frame is rotated in the horizontal direction, the rotation speed component in the vertical direction is zero.
The terminal can enable the target video frame to rotate to a preset target position when the rotation phase corresponding to the third time point is finished. For example, the preset target position may be a position where the corresponding view angle of the target video frame is rotated from the first target position by a preset angle. For example, the target video frame rotates in the horizontal direction from the first target position, and reaches a preset target position when the viewing angle is rotated by a preset angle such as 180 degrees, 360 degrees, 90 degrees, or the like.
In one embodiment, the terminal may rotate the target video frame to the highest point vertically or obliquely upward (upper left or upper right), then keep the rotation speed component in the vertical direction to be zero, and horizontally rotate, and at the end of the rotation phase corresponding to the third time point, rotate the target video frame vertically or obliquely downward (lower left or lower right). The terminal enables the target video frame to horizontally rotate when the target video frame is at the highest point, and the visual effect of looking around can be generated.
When the terminal rotates the target video frame to the highest point of the motion process, the rotation speed component of the target video frame in the vertical direction is made zero, and the target video frame rotates in the horizontal direction, for example, by one full turn in the horizontal direction. The target video frame is then rotated in the lower right or lower left direction.
In the above embodiment, when the target video frame rotates to the highest point, it is rotated in the horizontal direction based on the yaw angle corresponding to the third time point; when the horizontal rotation ends, the target video frame is rotated in the lower right or lower left direction based on the yaw angle and the pitch angle corresponding to the second time point until it rotates to the second target position. This enriches the rotation effect of the target video frame and improves the visual effect of the rotating video.
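The three-stage motion (oblique rise, horizontal pan at the apex with zero vertical speed, oblique descent) can be sketched as a piecewise view-angle function (a sketch only; the stage durations, linear pitch ramps, and yaw rate are illustrative assumptions):

```python
def three_stage_view(t, t_up, t_flat, t_down, pitch_max, yaw_rate):
    """Return (yaw, pitch) at time t for the rise/pan/descend rotation."""
    yaw = yaw_rate * t  # yaw advances continuously through all three stages
    if t < t_up:                               # oblique upward stage
        pitch = pitch_max * (t / t_up)
    elif t < t_up + t_flat:                    # horizontal stage: vertical speed is zero
        pitch = pitch_max
    else:                                      # oblique downward stage
        pitch = pitch_max * max(0.0, 1 - (t - t_up - t_flat) / t_down)
    return yaw, pitch
```

Holding the pitch constant during the middle stage is exactly the "rotation speed component in the vertical direction is zero" condition described above.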
In one embodiment, as shown in fig. 6, the method further includes S602 to S606 before S206, and S206 specifically includes S608.
And S602, identifying a tracking target in the acquired video.
The tracking target is a target pointed by a view angle corresponding to the video, and may be a person or an object, for example, the tracking target may be a user who throws a camera upward. In an embodiment, S602 specifically includes: and the terminal starts to acquire the video in response to the video acquisition instruction, and identifies the tracking target in the acquired video through an image identification technology.
And S604, acquiring tracking data corresponding to the tracking target.
The tracking data is data indicating the position of the tracking target in the video picture. For example, the tracking data is the position coordinates of the tracking target in the spherical model. For example, the tracking data is (x, y, z), and x, y, and z are position coordinates of the tracking target on the x axis, the y axis, and the z axis, respectively.
And S606, adjusting the visual angle of the original video according to the tracking data to obtain the adjusted original video.
And the terminal adjusts the visual angle of the original video according to the tracking data, so that the visual angle of each video frame in the original video points to the tracking target. Specifically, the terminal adjusts the view angle of the original video, so that the view angle of each video frame in the original video points to the position indicated by the tracking data.
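A minimal sketch of pointing the view angle at the tracking data (x, y, z) described in S604 and S606 (the axis convention, z up with yaw measured in the x-y plane, is an assumption; the patent does not specify one):

```python
import math

def view_angle_toward(x, y, z):
    """Yaw/pitch (radians) that aim the view at point (x, y, z) on the sphere."""
    yaw = math.atan2(y, x)                    # horizontal bearing of the target
    pitch = math.atan2(z, math.hypot(x, y))   # elevation above the horizontal plane
    return yaw, pitch
```

Applying this per frame keeps the view angle of each video frame pointed at the position indicated by the tracking data.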
And S608, extracting a target video frame corresponding to the target moment from the adjusted original video.
And the terminal adjusts the visual angle of the original video according to the tracking data and extracts a target video frame corresponding to the target moment from the adjusted original video.
In the above embodiment, the tracking target is identified in the acquired video, the tracking data corresponding to the tracking target is acquired, the view angle of the original video is adjusted according to the tracking data, and the target video frame corresponding to the target moment is extracted from the adjusted original video. Therefore, the adjusted view angle of the original video can always point at the tracking target, which provides a better visual effect.
In one embodiment, S210 is followed by: inserting the rotary video into the original video according to the target moment to obtain a special effect video; and responding to a playing instruction for the special effect video, and playing the special effect video.
The special effect video is a video with a special visual effect, for example, a video with a freeze visual effect. The freeze visual effect is a visual effect in which the picture shot when the camera moves to the target point is played repeatedly for a certain period of time, and the view angle of the picture is rotated during the repeated playing. For example, the freeze visual effect may be a high-altitude freeze visual effect: the picture shot when the camera, thrown upward, reaches the highest point is played repeatedly for a period of time, with the view angle of the picture rotated during the repeated playing. The target time may be the time at which the camera reaches the target point, and the terminal inserts the rotating video into the original video after the target video frame corresponding to the target time, thereby producing the visual effect that the picture freezes from the target time onward.
In the above embodiment, the rotating video is inserted into the original video according to the target time to obtain the special effect video. The special effect video is played in response to the playing instruction for the special effect video, which improves the rotation stability of the special effect video and ensures its visual effect.
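Treating both videos as frame lists, the insertion step can be sketched as follows (frame-index bookkeeping is an assumption; real code would splice at the container or encoder level):

```python
def insert_rotated_video(original_frames, rotated_frames, target_index):
    """Insert the rotating video right after the target frame, producing the
    special effect ("freeze") video."""
    return (original_frames[:target_index + 1]
            + rotated_frames
            + original_frames[target_index + 1:])
```

Playback then shows the original motion up to the target frame, the rotating view around the frozen picture, and the remainder of the original video.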
In one embodiment, the terminal is a panoramic camera with a wide-angle lens. When the panoramic camera receives a video recording instruction triggered by a user, it automatically selects a tracking target and obtains the tracking data corresponding to the tracking target. When the user throws the panoramic camera into the air, the panoramic camera shoots an original video over the course of the upward-throw motion. The view angle of the original video is then adjusted according to the tracking data, so that the lens view angle in the original video always points at the tracking target. The panoramic camera extracts gyroscope data from the original video and calculates the maximum and minimum accelerations during the motion according to the gyroscope data: the moment at which the maximum acceleration occurs is the throw start moment, and the moment at which the minimum acceleration occurs is the motion end moment. The middle moment between the throw start moment and the motion end moment is the moment at which the panoramic camera reaches the highest point of the throw. A target video frame corresponding to the highest-point moment is extracted from the adjusted original video and played repeatedly, and during the repeated playing the view angle of the target video frame is adjusted so that it rotates. Specifically, the terminal may calculate the yaw angle and the pitch angle according to the method in fig. 5, and then rotate the view angle of the target video frame first in the upper left or upper right direction according to the yaw angle and the pitch angle, and then in the lower left or lower right direction, until the view angle returns to tracking the target.
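The apex-detection logic of this embodiment (peak acceleration marks the throw, minimum acceleration marks the end of the motion, and their midpoint marks the highest point) can be sketched as follows (a sketch under the embodiment's stated assumptions; the sample data layout is hypothetical):

```python
def apex_time(timestamps, accel_magnitudes):
    """Midpoint between the max-acceleration (throw) and min-acceleration
    (motion-end) moments, per the embodiment above."""
    i_throw = max(range(len(accel_magnitudes)), key=accel_magnitudes.__getitem__)
    i_end = min(range(len(accel_magnitudes)), key=accel_magnitudes.__getitem__)
    return (timestamps[i_throw] + timestamps[i_end]) / 2
```

A production version would restrict the minimum search to samples after the throw moment; the sketch follows the text as written.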
In one embodiment, as shown in fig. 7, the video processing method includes the steps of:
s702, acquiring an original video and camera motion data acquired by a camera in a motion process, and determining a first acceleration and a second acceleration of the camera in the motion process according to the camera motion data.
S704, determining a target time when the camera reaches the target point in the motion process according to a first time corresponding to the first acceleration and a second time corresponding to the second acceleration.
S706, in response to the video acquisition instruction, identifying a tracking target in the acquired video, and acquiring tracking data corresponding to the tracking target.
And S708, adjusting the visual angle of the original video according to the tracking data, and extracting a target video frame corresponding to the target moment from the adjusted original video.
And S710, taking the target time as a rotation starting time, and determining a rotation ending time according to the rotation starting time and a preset time length.
And S712, determining a rotation period according to the rotation starting time and the rotation ending time, and calculating the ratio of each time point in the preset time length to the rotation period.
And S714, calculating the yaw angle corresponding to each time point of the target video frame in the preset time length based on the ratio and the initial yaw angle.
And S716, calculating the pitch angle corresponding to each time point of the target video frame in the preset duration based on the ratio and the initial pitch angle. The time points include a first time point and a second time point.
And S718, rotating the target video frame in the upper left direction or the upper right direction based on the yaw angle and the pitch angle corresponding to the first time point, and copying the target video frame rotated at each first time point to obtain a first video frame sequence.
And S720, when the target video frame rotates to the first target position, the target video frame rotates to the lower right direction or the lower left direction based on the yaw angle and the pitch angle corresponding to the second time point until the target video frame rotates to the second target position.
And S722, copying the target video frame rotated at each second time point to obtain a second video frame sequence, and generating a rotating video based on the first video frame sequence and the second video frame sequence.
And S724, inserting the rotary video into the original video according to the target moment to obtain the special effect video and playing the special effect video.
The specific contents of S702 to S724 may refer to the above specific implementation process.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated herein, the steps are not limited to the exact order illustrated and may be performed in other orders. Moreover, at least a part of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a video processing apparatus for implementing the above-mentioned video processing method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the video processing apparatus provided below can be referred to the limitations of the video processing method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 8, there is provided a video processing apparatus including: an obtaining module 802, a determining module 804, an extracting module 806, a rotation processing module 808, and a calculating module 810, wherein:
an obtaining module 802, configured to obtain an original video and camera motion data acquired by a camera in a motion process;
a determining module 804, configured to determine, according to the camera motion data, a target time at which the camera reaches a target point in a motion process;
an extracting module 806, configured to extract a target video frame corresponding to a target time from an original video;
and a rotation processing module 808, configured to perform rotation processing on the target video frame according to the target time and a preset duration to obtain a rotation video.
In the above embodiment, the original video acquired by the camera in the motion process is acquired, and the target time when the camera reaches the target point in the motion process is determined according to the camera motion data extracted from the original video. And extracting a target video frame corresponding to the target moment from the original video. Therefore, the target video frame can be accurately determined according to the camera motion data, and the accuracy of the determined target video frame is ensured. And then, according to the target time and the preset duration, rotating the target video frame to obtain a rotating video. The computer equipment can accurately determine the target point reached by the camera in the motion process according to the camera motion data, so that the efficiency of determining the target point is improved, and the visual effect of the rotating video is improved.
In one embodiment, the apparatus further comprises:
the calculating module 810 is configured to calculate a yaw angle and a pitch angle of the target video frame corresponding to each time point in a preset duration according to the target time and the preset duration;
the rotation processing module 808 is further configured to perform rotation processing on the target video frame based on the yaw angle and the pitch angle to obtain a rotation video.
In one embodiment, the determining module 804 is further configured to:
determining a first acceleration and a second acceleration of the camera in the motion process according to the camera motion data;
and determining the target time when the camera reaches the target point in the motion process according to the first time corresponding to the first acceleration and the second time corresponding to the second acceleration.
In one embodiment, the rotation processing module 808 is further configured to:
taking the target time as a rotation starting time, and determining a rotation ending time according to the rotation starting time and a preset time length;
determining a rotation period according to the rotation starting time and the rotation ending time;
calculating the ratio of each time point in a preset time length to the rotation period;
calculating a yaw angle corresponding to each time point of the target video frame in a preset time length based on the ratio and the initial yaw angle;
and calculating the pitch angle corresponding to each time point of the target video frame in the preset time length based on the ratio and the initial pitch angle.
In one embodiment, each time point in the preset duration constitutes a time point set, and the time point set comprises a first time point set and a second time point set; the first time point set consists of first time points before the target video frame is rotated to the first target position; the second set of time points consists of a second time point between rotating the target video frame to the first target position and the second target position; a rotation processing module 808, further configured to:
rotating the target video frame in the left upper direction or the right upper direction based on the yaw angle and the pitch angle corresponding to the first time point;
copying the target video frame rotated at each first time point to obtain a first video frame sequence;
when the target video frame rotates to the first target position, the target video frame rotates to the right lower direction or the left lower direction based on the yaw angle and the pitch angle corresponding to the second time point until the target video frame rotates to the second target position;
copying the target video frame rotated at each second time point to obtain a second video frame sequence;
a rotated video is generated based on the first sequence of video frames and the second sequence of video frames.
In one embodiment, the time point set composed of the time points within the preset duration further comprises a third time point set; the third time point set consists of all the third time points during which the target video frame rotates horizontally at the highest point of the motion process;
a rotation processing module 808, further configured to: when the target video frame rotates to the highest point, the target video frame rotates in the horizontal direction based on the yaw angle and the pitch angle corresponding to the third time point; when the target video frame is rotated in the horizontal direction, the rotation speed component in the vertical direction is zero.
In one embodiment, as shown in fig. 9, the apparatus further comprises:
an identification module 812 for identifying a tracking target in the pre-captured video;
the obtaining module 802 is further configured to obtain tracking data corresponding to the tracking target;
an adjusting module 814, configured to perform view angle adjustment on the original video according to the tracking data to obtain an adjusted original video;
the extracting module 806 is further configured to extract a target video frame corresponding to the target time from the adjusted original video.
In one embodiment, the apparatus further comprises:
the inserting module 816 is configured to insert the rotated video into the original video according to the target time to obtain a special-effect video;
the playing module 818 is configured to play the special effect video in response to a playing instruction for the special effect video.
The various modules in the video processing apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with external terminals; the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a video processing method. The display unit of the computer device is used for forming a visually perceptible picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad provided on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, etc., without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method of video processing, the method comprising:
acquiring an original video and camera motion data acquired by a camera in a motion process;
determining a target moment when the camera reaches a target point in the motion process according to the camera motion data;
extracting a target video frame corresponding to the target time from the original video;
and rotating the target video frame according to the target time and the preset duration to obtain a rotated video.
2. The method according to claim 1, wherein the rotating the target video frame according to the target time and a preset duration to obtain a rotated video comprises:
calculating, according to the target time and the preset duration, a yaw angle and a pitch angle of the target video frame at each time point within the preset duration;
and rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video.
3. The method of claim 1, wherein the determining, according to the camera motion data, the target time at which the camera reaches the target point during the motion process comprises:
determining a first acceleration and a second acceleration of the camera during the motion process according to the camera motion data;
and determining, according to a first time corresponding to the first acceleration and a second time corresponding to the second acceleration, the target time at which the camera reaches the target point during the motion process.
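As a non-authoritative illustration of the determination in claim 3, the sketch below estimates the target time from accelerometer magnitudes. The threshold value and the heuristic that the target (highest) point lies midway between the first acceleration (launch) and the second acceleration (catch) are assumptions for illustration only; the claim does not prescribe a specific estimator.

```python
def estimate_target_time(accel_samples, threshold=20.0):
    """Estimate the target (highest-point) time of an up-throw.

    accel_samples: list of (timestamp_seconds, acceleration_magnitude).
    Hypothetical heuristic: the first sample whose magnitude exceeds
    `threshold` marks the first acceleration (launch), the last such
    sample marks the second acceleration (catch), and the target point
    is assumed to lie midway between the two corresponding times.
    """
    peak_times = [t for t, mag in accel_samples if mag > threshold]
    if not peak_times:
        raise ValueError("no acceleration above threshold")
    first_time, second_time = peak_times[0], peak_times[-1]
    # The midpoint of the first and second times approximates the apex.
    return (first_time + second_time) / 2.0
```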
4. The method of claim 2, wherein the calculating, according to the target time and the preset duration, the yaw angle and the pitch angle of the target video frame at each time point within the preset duration comprises:
taking the target time as a rotation start time, and determining a rotation end time according to the rotation start time and the preset duration;
determining a rotation period according to the rotation start time and the rotation end time;
calculating the ratio of each time point within the preset duration to the rotation period;
calculating, based on the ratio and an initial yaw angle, the yaw angle of the target video frame at each time point within the preset duration;
and calculating, based on the ratio and an initial pitch angle, the pitch angle of the target video frame at each time point within the preset duration.
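The angle calculation of claim 4 can be sketched as follows. This is a minimal Python illustration only: the linear scaling of each angle by the ratio, the frame rate, and the `total_yaw`/`total_pitch` amplitudes (standing in for the claim's initial yaw and pitch angles) are assumptions, as are all parameter names.

```python
def rotation_schedule(target_time, preset_duration, fps=30.0,
                      total_yaw=360.0, total_pitch=90.0):
    """Yaw/pitch of the target video frame at each time point
    within the preset duration.

    Following the claim: the rotation start time is the target time,
    the rotation end time is start + preset duration, and each time
    point's angles scale with its ratio to the rotation period.
    """
    rotation_start = target_time
    rotation_end = rotation_start + preset_duration   # rotation end time
    period = rotation_end - rotation_start            # rotation period
    schedule = []
    for i in range(int(period * fps) + 1):
        t = i / fps                  # time point within the preset duration
        ratio = t / period           # ratio of the time point to the period
        schedule.append((rotation_start + t,
                         ratio * total_yaw,     # yaw at this time point
                         ratio * total_pitch))  # pitch at this time point
    return schedule
```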
5. The method of claim 2, wherein the time points within the preset duration constitute a set of time points, the set of time points comprising a first set of time points and a second set of time points; the first set of time points consists of the first time points before the target video frame is rotated to a first target position; the second set of time points consists of the second time points while the target video frame is rotated from the first target position to a second target position; and the rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video comprises:
rotating the target video frame toward the upper-left or upper-right direction based on the yaw angle and the pitch angle corresponding to each first time point;
copying the target video frame as rotated at each first time point to obtain a first video frame sequence;
when the target video frame has rotated to the first target position, rotating the target video frame toward the lower-right or lower-left direction based on the yaw angle and the pitch angle corresponding to each second time point, until the target video frame is rotated to the second target position;
copying the target video frame as rotated at each second time point to obtain a second video frame sequence;
and generating the rotated video based on the first video frame sequence and the second video frame sequence.
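The two-phase frame assembly of claim 5 can be illustrated with the hypothetical sketch below. The `rotate(frame, yaw, pitch)` callable is a stand-in for whatever renderer actually re-projects a panoramic frame; it and the function name are assumptions, not part of the claimed method.

```python
def build_rotated_sequences(target_frame, first_points, second_points, rotate):
    """Assemble the rotated video's frame sequences.

    `rotate(frame, yaw, pitch)` is a hypothetical renderer stand-in;
    `first_points` / `second_points` are lists of (yaw, pitch) pairs
    for the first and second sets of time points.
    """
    # Phase 1: rotate toward the upper-left/upper-right, copying the
    # rotated frame at each first time point into the first sequence.
    first_sequence = [rotate(target_frame, yaw, pitch)
                      for yaw, pitch in first_points]
    # Phase 2: rotate toward the lower-right/lower-left until the
    # second target position, producing the second sequence.
    second_sequence = [rotate(target_frame, yaw, pitch)
                       for yaw, pitch in second_points]
    # The rotated video is generated from both sequences in order.
    return first_sequence + second_sequence
```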
6. The method of claim 2, wherein the time points within the preset duration constitute a set of time points; the set of time points comprises a third set of time points; the third set of time points consists of the third time points during which the target video frame is rotated horizontally at the highest point of the motion process; and the rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video comprises:
when the target video frame has rotated to the highest point, rotating the target video frame in the horizontal direction based on the yaw angle corresponding to each third time point; while the target video frame is rotated in the horizontal direction, the rotation speed component in the vertical direction is zero.
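The horizontal spin of claim 6 reduces to varying only the yaw while holding the pitch fixed, which is what makes the vertical rotation-speed component zero. A minimal sketch (function and parameter names are illustrative assumptions):

```python
def horizontal_spin_points(yaw_angles, apex_pitch):
    """(yaw, pitch) pairs for the horizontal spin at the highest point.

    Only the yaw changes from one third time point to the next; the
    pitch is held constant, so the rotation speed component in the
    vertical direction is zero.
    """
    return [(yaw, apex_pitch) for yaw in yaw_angles]
```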
7. The method of claim 1, further comprising:
identifying a tracking target in a pre-collected video;
acquiring tracking data corresponding to the tracking target;
and adjusting the viewing angle of the original video according to the tracking data to obtain an adjusted original video;
wherein the extracting the target video frame corresponding to the target time from the original video comprises:
extracting the target video frame corresponding to the target time from the adjusted original video.
8. The method according to claim 1, wherein after the rotating of the target video frame based on the yaw angle and the pitch angle to obtain the rotated video, the method further comprises:
inserting the rotated video into the original video according to the target time to obtain a special-effect video;
and playing the special-effect video in response to a playing instruction for the special-effect video.
9. The method according to any one of claims 1 to 8, wherein the motion process is an upward-throw motion process, and the target point is the highest point reached by the camera during the upward-throw motion process.
10. A video processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire an original video and camera motion data collected by a camera during a motion process;
a determining module, configured to determine, according to the camera motion data, a target time at which the camera reaches a target point during the motion process;
an extraction module, configured to extract a target video frame corresponding to the target time from the original video;
and a rotation processing module, configured to rotate the target video frame according to the target time and a preset duration to obtain a rotated video.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202211108463.0A 2022-09-13 2022-09-13 Video processing method, video processing device, computer equipment and storage medium Pending CN115550563A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211108463.0A CN115550563A (en) 2022-09-13 2022-09-13 Video processing method, video processing device, computer equipment and storage medium
PCT/CN2023/118362 WO2024055967A1 (en) 2022-09-13 2023-09-12 Video processing method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211108463.0A CN115550563A (en) 2022-09-13 2022-09-13 Video processing method, video processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115550563A true CN115550563A (en) 2022-12-30

Family

ID=84724777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211108463.0A Pending CN115550563A (en) 2022-09-13 2022-09-13 Video processing method, video processing device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115550563A (en)
WO (1) WO2024055967A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024055967A1 (en) * 2022-09-13 2024-03-21 影石创新科技股份有限公司 Video processing method and apparatus, computer device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7999842B1 (en) * 2004-05-28 2011-08-16 Ricoh Co., Ltd. Continuously rotating video camera, method and user interface for using the same
CN109561254B (en) * 2018-12-18 2020-11-03 影石创新科技股份有限公司 Method and device for preventing panoramic video from shaking and portable terminal
CN111242975B (en) * 2020-01-07 2023-08-25 影石创新科技股份有限公司 Panoramic video rendering method capable of automatically adjusting viewing angle, storage medium and computer equipment
CN112017216B (en) * 2020-08-06 2023-10-27 影石创新科技股份有限公司 Image processing method, device, computer readable storage medium and computer equipment
CN115550563A (en) * 2022-09-13 2022-12-30 影石创新科技股份有限公司 Video processing method, video processing device, computer equipment and storage medium


Also Published As

Publication number Publication date
WO2024055967A1 (en) 2024-03-21

Similar Documents

Publication Publication Date Title
CN106797460B (en) The reconstruction of 3 D video
WO2019154013A1 (en) Expression animation data processing method, computer device and storage medium
CN107315470B (en) Graphic processing method, processor and virtual reality system
EP2225607B1 (en) Guided photography based on image capturing device rendered user recommendations
CN110249626B (en) Method and device for realizing augmented reality image, terminal equipment and storage medium
WO2018014601A1 (en) Method and relevant apparatus for orientational tracking, method and device for realizing augmented reality
US9756260B1 (en) Synthetic camera lenses
WO2017032336A1 (en) System and method for capturing and displaying images
CN111694430A (en) AR scene picture presentation method and device, electronic equipment and storage medium
CN106162204A (en) Panoramic video generation, player method, Apparatus and system
CN112348968B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
WO2019084719A1 (en) Image processing method and unmanned aerial vehicle
CN111862348B (en) Video display method, video generation method, device, equipment and storage medium
US11373329B2 (en) Method of generating 3-dimensional model data
US10043552B1 (en) Systems and methods for providing thumbnails for video content
WO2024055967A1 (en) Video processing method and apparatus, computer device, and storage medium
WO2023169283A1 (en) Method and apparatus for generating binocular stereoscopic panoramic image, device, storage medium, and product
US20160037134A1 (en) Methods and systems of simulating time of day and environment of remote locations
KR20210049783A (en) Image processing apparatus, image processing method and image processing program
CA3119609A1 (en) Augmented reality (ar) imprinting methods and systems
CN106131421A (en) The method of adjustment of a kind of video image and electronic equipment
CN114584681A (en) Target object motion display method and device, electronic equipment and storage medium
WO2023142650A1 (en) Special effect rendering
CN108416255B (en) System and method for capturing real-time facial expression animation of character based on three-dimensional animation
CN108320331A (en) A kind of method and apparatus for the augmented reality video information generating user's scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination