WO2024055967A1 - Video processing method, apparatus, computer device and storage medium - Google Patents

Video processing method, apparatus, computer device and storage medium

Info

Publication number
WO2024055967A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
video frame
video
time
rotated
Prior art date
Application number
PCT/CN2023/118362
Other languages
English (en)
French (fr)
Inventor
杜孟林
那强
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司 filed Critical 影石创新科技股份有限公司
Publication of WO2024055967A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Definitions

  • the present application relates to the field of computer technology, and in particular to a video processing method, device, computer equipment, and storage medium.
  • panoramic cameras can shoot panoramic videos.
  • the advantage of panoramic videos is that they have a very wide viewing angle.
  • panoramic videos can cover a 360-degree viewing angle.
  • monitors display flat images, so more and more people are studying how to fully and effectively display 360-degree panoramic videos to effectively improve the user experience.
  • this application provides a video processing method.
  • the method includes: obtaining an original video and camera movement data collected by a camera during movement; determining, according to the camera movement data, a target moment when the camera reaches a target point during the movement; extracting a target video frame corresponding to the target moment from the original video; and rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
  • this application also provides a video processing device.
  • the device includes:
  • the acquisition module is used to obtain the original video and camera motion data collected by the camera during movement;
  • a determination module configured to determine, according to the camera motion data, a target time when the camera reaches a target point during the motion;
  • An extraction module configured to extract the target video frame corresponding to the target moment from the original video
  • a rotation processing module configured to perform rotation processing on the target video frame according to the target time and a preset duration to obtain a rotated video.
  • the device further includes:
  • a calculation module configured to calculate the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration based on the target time and the preset duration;
  • the rotation processing module is also configured to perform rotation processing on the target video frame based on the yaw angle and the pitch angle to obtain a rotated video.
  • the determining module is also used to: determine a first acceleration and a second acceleration of the camera during movement based on the camera movement data, and determine the target moment when the camera reaches the target point during the movement according to a first moment corresponding to the first acceleration and a second moment corresponding to the second acceleration.
  • the rotation processing module is also used to: take the target moment as the rotation starting moment, determine the rotation ending moment based on the rotation starting moment and the preset duration, determine the rotation period accordingly, and calculate the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration.
  • the time points within the preset duration constitute a time point set, and the time point set includes a first time point set and a second time point set; the first time point set consists of the first time points before the target video frame is rotated to a first target position; the second time point set consists of the second time points while the target video frame is rotated from the first target position to a second target position.
  • the rotation processing module is also used for:
  • rotate the target video frame toward the upper left or upper right based on the yaw angle and pitch angle corresponding to each first time point, and copy the rotated target video frame at each first time point to obtain a first video frame sequence;
  • when the target video frame is rotated to the first target position, rotate it toward the lower right or lower left based on the yaw angle and pitch angle corresponding to each second time point, until it is rotated to the second target position, and copy the rotated target video frame at each second time point to obtain a second video frame sequence;
  • a rotated video is generated based on the first video frame sequence and the second video frame sequence.
  • the time points within the preset duration constitute a time point set; the time point set includes a third time point set, which consists of the third time points during which the target video frame is rotated horizontally at the highest point of the motion process; the rotation processing module is also used for:
  • when the target video frame rotates to the highest point, rotating the target video frame in the horizontal direction based on the yaw angle and pitch angle corresponding to each third time point; while the target video frame is rotated in the horizontal direction, its rotational velocity component in the vertical direction is zero.
  • the device further includes:
  • a recognition module configured to identify tracking targets in pre-collected videos;
  • the acquisition module is also used to acquire tracking data corresponding to the tracking target
  • An adjustment module configured to adjust the viewing angle of the original video according to the tracking data to obtain the adjusted original video
  • the extraction module is also used to extract the target video frame corresponding to the target moment from the adjusted original video.
  • the device further includes:
  • An insertion module configured to insert the rotated video into the original video according to the target moment to obtain a special effects video
  • a playback module configured to play the special effects video in response to a playback instruction for the special effects video.
  • this application also provides a computer device.
  • the computer device includes a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, the following steps are implemented: obtaining the original video and camera movement data collected by the camera during movement; determining, according to the camera movement data, the target moment when the camera reaches the target point during the movement; extracting the target video frame corresponding to the target moment from the original video; and rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
  • this application also provides a computer-readable storage medium.
  • the computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by the processor, the following steps are implemented:
  • obtaining the original video and camera movement data collected by the camera during movement; determining, according to the camera movement data, the target moment when the camera reaches the target point during the movement; extracting the target video frame corresponding to the target moment from the original video; and rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
  • this application also provides a computer program product.
  • the computer program product includes a computer program that implements the following steps when executed by a processor:
  • obtaining the original video and camera movement data collected by the camera during movement; determining, according to the camera movement data, the target moment when the camera reaches the target point during the movement; extracting the target video frame corresponding to the target moment from the original video; and rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
  • the above video processing method, apparatus, computer device, storage medium and computer program product obtain the original video and camera movement data collected by the camera during movement, determine the target moment when the camera reaches the target point during movement based on the camera movement data, and extract the target video frame corresponding to the target moment from the original video.
  • the target video frame can be accurately determined based on the camera motion data, ensuring the accuracy of the determined target video frame.
  • the target video frame is rotated to obtain a rotated video; since the computer device can accurately determine, based on the camera movement data, the target point reached by the camera during movement, the efficiency of determining the target point is improved, and the visual effect of the rotated video is improved.
  • Figure 1 is an application environment diagram of a video processing method in an embodiment
  • Figure 2 is a schematic flow chart of a video processing method in one embodiment
  • Figure 3 is a schematic diagram of rotating a target video frame rendered in a spherical model in one embodiment
  • Figure 4 is a schematic flowchart of a method for determining a target time in an embodiment
  • Figure 5 is a schematic flowchart of a method for calculating yaw angle and pitch angle in one embodiment
  • Figure 6 is a schematic flowchart of a method for adjusting the perspective of an original video in one embodiment
  • Figure 7 is a schematic flow chart of a video processing method in another embodiment
  • Figure 8 is a structural block diagram of a video processing device in one embodiment
  • Figure 9 is a structural block diagram of a video processing device in one embodiment
  • Figure 10 is an internal structure diagram of a computer device in one embodiment.
  • the video processing method provided by the embodiment of the present application can be applied in the application environment as shown in Figure 1.
  • the terminal 102 obtains the original video and camera movement data collected by the camera during movement; determines the target moment when the camera reaches the target point during the movement according to the camera movement data; extracts the target video frame corresponding to the target moment from the original video; calculates, according to the target moment and the preset duration, the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration; and rotates the target video frame based on the yaw angle and pitch angle to obtain a rotated video.
  • the terminal 102 may be, but is not limited to, a camera, a smartphone or a portable wearable device.
  • the camera can be an ordinary camera, a panoramic camera, an action camera, etc.
  • Portable wearable devices can be smart watches, smart bracelets, head-mounted devices, etc.
  • in the related art, the method adopted may be: based on one frame of the video captured by the panoramic camera, rotating it to generate a rotated video. For example, a person manually observes the video captured by the panoramic camera to find the video frame captured at the target point, and then rotates the video frame determined by human-eye observation so that the picture information at the target point (such as the highest point) is fully displayed, generating an image with a good visual effect.
  • if the time point of the manually determined video frame is not the highest point, the highest and broadest field-of-view effect brought by the highest point cannot be obtained; it also means that the camera continues to rise in the next frame of the original video, while the rotated video ends with a descent.
  • when such a rotated video is inserted into the original video, it produces a very jarring discontinuity, resulting in a poor effect for the original video after insertion.
  • a video processing method is provided; the method is described by taking its application to the terminal in Figure 1 as an example, and includes the following steps:
  • the motion is the movement of the camera after leaving its initial position at a certain initial speed under the action of gravity or other forces, including a throwing motion or a horizontal motion, etc.
  • the throwing motion may be, for example, a vertical upward throw, a vertical downward throw, a horizontal throw or an oblique throw.
  • the camera's movement process includes: first moving diagonally upward until the speed in the vertical direction is zero, and then moving downward until it lands at the target location.
  • the movement process of the camera includes: first moving away from the starting point to the farthest end, and then returning to the starting point under the action of elastic force.
  • the camera collects raw data during movement, and the raw data collected includes original video and camera motion data.
  • the original video is the video collected by the camera during movement, which can be a panoramic video.
  • panoramic video is a video with a viewing angle larger than the normal effective viewing angle of the human eye, including black-and-white video or color video, etc.
  • a panoramic video is a video with a horizontal viewing angle range greater than 90 degrees and a vertical viewing angle range greater than 70 degrees.
  • S204 Determine the target moment when the camera reaches the target point during movement based on the camera movement data.
  • the camera movement data is data that records the movement of the camera. It can be data collected by the gyroscope in the camera, including the accelerations of the camera along the X-axis, Y-axis and Z-axis, or the combined acceleration obtained from the accelerations in those three directions.
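As an illustration of how the per-axis accelerations could be combined into a single value, a minimal sketch follows (not the patent's implementation; the function name is chosen here for clarity):

```python
import math

def combined_acceleration(ax, ay, az):
    """Combine the accelerations recorded along the X, Y and Z axes
    into a single magnitude."""
    return math.sqrt(ax * ax + ay * ay + az * az)
```

For example, a sample of (3, 4, 0) combines to a magnitude of 5.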
  • the target point can be any point reached by the camera during movement.
  • the target point can be the highest point reached by the camera based on the upward throwing motion.
  • the target point can be a point where the camera's velocity in the vertical direction is zero.
  • the target point can be a point where the camera's speed in the horizontal direction is zero.
  • the target time is the time when the camera reaches the target point.
  • the target moment is 3 minutes and 4 seconds of the original video.
  • the target time is 11 hours, 3 minutes and 15 seconds.
  • the target video frame is the video frame collected when the camera moves to the target point.
  • the target video frame is the video frame collected when the camera moves to the highest point.
  • the target video frame is a video frame collected when the camera moves to a vertical speed of zero.
  • the target video frame is a video frame collected when the camera moves to a horizontal speed of zero.
  • S206 specifically includes: rendering the original video according to the three-dimensional sphere model, and extracting the target video frame corresponding to the target moment from the rendered spherical panoramic video.
  • the preset duration is the duration of the generated video special effects, which can be any value.
  • the preset duration is, for example, 8 seconds, 6 seconds, or 0.1 minutes.
  • Rotation processing is a processing method that rotates the angle of view corresponding to the target video frame.
  • the rotation processing may be a processing method of rotating the virtual camera at the center of the spherical model toward the angle at which the target video frame is viewed.
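A minimal sketch of how the viewing direction of the virtual camera at the center of the spherical model could be derived from a yaw angle (about the Z-axis) and a pitch angle (about the X-axis); the coordinate convention here is an assumption for illustration, not the patent's implementation:

```python
import math

def view_direction(yaw, pitch):
    """Unit vector pointing where the virtual camera at the sphere's
    centre looks, for a given yaw (about Z) and pitch (about X)."""
    x = math.cos(pitch) * math.cos(yaw)
    y = math.cos(pitch) * math.sin(yaw)
    z = math.sin(pitch)
    return (x, y, z)
```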
  • the target video frame may have a rotation speed component in the horizontal direction or the vertical direction, or the target video frame may have a rotation speed component in both the horizontal direction and the vertical direction.
  • the original video and camera movement data collected by the camera during movement are obtained, and the target moment when the camera reaches the target point during movement is determined based on the camera movement data, so that the target point can be accurately determined based on the camera movement data, improving the accuracy of determining the target point.
  • the target video frame corresponding to the target moment is extracted from the original video.
  • the target video frame is rotated to obtain a rotated video; since the computer device can accurately determine, based on the camera motion data, the target point reached by the camera during movement, the efficiency and accuracy of determining the target point are improved, the target video frame is accurately obtained, and the visual effect of the rotated video is improved.
  • S208 specifically includes: calculating the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration according to the target time and the preset duration; and rotating the target video frame based on the yaw angle and pitch angle to obtain a rotated video.
  • the yaw angle is the angle of rotation of the viewing angle corresponding to the target video frame around the Z-axis.
  • the yaw angle is 30 degrees, 45 degrees or 60 degrees, etc.
  • the pitch angle is the angle of rotation of the viewing angle corresponding to the target video frame around the X-axis.
  • the pitch angle is 15 degrees, 20 degrees or 25 degrees, etc.
  • the target video frame is a three-dimensional panoramic video frame rendered in a spherical model
  • the perspective corresponding to the target video frame is the perspective of observing the target video frame through a virtual camera in the spherical model.
  • the terminal causes the viewing angle corresponding to the target video frame to look toward point B at time point t1, point C at time point t2, and point D at time point t3.
  • the terminal can calculate the yaw angle and pitch angle respectively corresponding to the target video frame at time points t1, t2, and t3 based on t0 and t3.
  • the terminal calculates the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration based on the target time and the preset duration, and rotates the target video frame based on the yaw angle and pitch angle to obtain the rotated video.
  • in this way, the yaw angle and pitch angle can be accurately determined based on the target time and the preset duration, ensuring their accuracy and improving the stability of the rotation effect; moreover, determining the target point through motion data facilitates the subsequent video frame rotation operation.
  • this realizes automatic video editing, which does not require users to learn much about panoramic video editing, effectively reducing the user's learning cost for panoramic video editing.
  • S204 specifically includes the following steps:
  • S402 Determine the first acceleration and the second acceleration of the camera during movement based on the camera movement data.
  • the first acceleration is the maximum acceleration of the camera during movement.
  • the first acceleration is the acceleration of the camera when it is thrown up.
  • the direction of the first acceleration is the same as the movement direction of the camera, that is, the first acceleration is a positive value.
  • the first acceleration is the acceleration of the camera at the moment when it is ejected.
  • the second acceleration is the minimum acceleration of the camera during movement. At this time, the direction of the second acceleration is opposite to the movement direction of the camera, that is, the second acceleration is a negative value.
  • the second acceleration is the acceleration at the end of the camera's movement process.
  • the second acceleration is the acceleration when the camera returns to the starting point based on the pulling force of the elastic component.
  • the first acceleration and the second acceleration are accelerations obtained by combining accelerations in three directions of the X-axis, the Y-axis, and the Z-axis.
  • S404 Determine the target time when the camera reaches the target point during movement based on the first time corresponding to the first acceleration and the second time corresponding to the second acceleration.
  • the first moment is the moment when the camera has the first acceleration. For example, when the camera moves based on the user's upward throwing action, the first moment is the moment when the camera is thrown up. For another example, when the camera is ejected horizontally by an elastic component (such as a spring), the first moment is the moment when the camera is ejected.
  • the second moment is the moment when the camera has the second acceleration. For example, when the camera moves based on the user's upward throwing action, the second moment is the moment when the camera movement process ends. For another example, when the camera is ejected horizontally by a connected elastic component (such as a spring), the second moment is the moment when the camera returns to the starting point based on the pulling force of the elastic component.
  • the target point is the highest point of the camera's movement based on the upward throwing action, or the target point can also be the farthest point moved when being horizontally ejected by the elastic component.
  • the first acceleration and the second acceleration of the camera during movement are determined based on the camera movement data.
  • the target moment when the camera reaches the target point during movement is determined.
  • the target time when the camera reaches the target point is accurately determined according to the first acceleration and the second acceleration, so that the target video frame can be accurately extracted from the original video according to the target time, ensuring the visual effect of the resulting special effects picture.
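The determination in S402-S404 could be sketched as follows. This is a hedged illustration: the sampling format and the use of the midpoint between the two moments (which holds for a symmetric free flight between throw and catch) are assumptions, not the patent's stated formula.

```python
def find_target_time(accel_samples):
    """accel_samples: iterable of (time, signed_acceleration) pairs.

    The first acceleration is the maximum (same direction as the motion,
    e.g. the throw); the second is the minimum (opposite direction, e.g.
    the catch). For a symmetric free flight, the highest point lies
    midway between the two corresponding moments.
    """
    samples = list(accel_samples)
    first_time = max(samples, key=lambda s: s[1])[0]   # first acceleration
    second_time = min(samples, key=lambda s: s[1])[0]  # second acceleration
    return (first_time + second_time) / 2.0
```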
  • the target point is the highest point reached by the camera during the upward throwing motion. Since the camera will not be blocked when it reaches the highest point, it can shoot with the best view.
  • the target video frame corresponding to the highest point has the best visual effect; therefore, the rotated video obtained by rotating this target video frame also has a good visual effect, improving the visual effect of rotated videos.
  • if the highest point is determined manually, there may be an error, and the determined point may not be the actual highest point of the camera's movement; if the target video frame corresponding to the manually determined highest point is rotated, the target video frame is first rotated upward and then downward, and the rotated video obtained by the rotation processing is inserted into the original video to obtain a special effects video.
  • S208 specifically includes the following steps:
  • S502 Use the target time as the rotation starting time, and determine the rotation end time based on the rotation starting time and the preset duration.
  • the rotation starting time is the time when the target video frame starts to be rotated.
  • the terminal can reset the time of the rotation starting moment to zero so that the rotation starting moment is zero.
  • the terminal may determine the rotation starting moment according to the moment when the target video frame appears in the original video. Specifically, if the target video frame appears at the 6.05th second in the original video, the rotation starting moment is determined to be the 6.05th second.
  • S502 specifically includes: calculating the sum of the rotation start time and the preset duration, and using the obtained sum as the rotation end time.
  • S504 Determine the rotation period based on the rotation start time and the rotation end time.
  • the span from the rotation starting moment to the rotation ending moment is defined as a total period; a total period can include multiple rotation periods, and the durations of the rotation periods can be the same or different, depending on the specific situation.
  • for example, when the rotation is performed in two sections, the rotation period may be equal to half of the total period; when the rotation is performed in three sections, the rotation period may be one third of the total period. It is understood that the durations of the rotation periods may not be equal.
  • S506 Calculate the ratio of each time point within the preset time period to the rotation period.
  • the terminal calculates the ratio of each time point within the preset duration to the rotation period; the calculated ratio is t/halftime, where t is each time point within the preset duration.
  • the initial yaw angle is a preset yaw angle.
  • the initial yaw angle can be the yaw angle corresponding to the perspective of the target video frame at the starting moment of rotation.
  • S508 specifically includes: the terminal calculates the product of the ratio and 2π, and then adds the obtained product to the initial yaw angle to obtain the yaw angle corresponding to the target video frame at each time point within the preset duration.
  • the initial pitch angle is a preset pitch angle.
  • the initial pitch angle is the pitch angle corresponding to the angle of view of the target video frame at the moment when the rotation starts.
  • S510 specifically includes: the terminal calculates the sine of the product of the ratio and π, then multiplies the resulting sine value by a preset parameter (the preset parameter can be, for example, 0.058); finally, the value obtained by the multiplication is added to the initial pitch angle to obtain the pitch angle corresponding to the target video frame at each time point within the preset duration.
  • the terminal calculates the pitch angle according to formula (4), where pitch 0 is the initial pitch angle, t can be each time point within the preset time period, and halftime is the rotation period.
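The calculations of S508 and S510 could be sketched as below. The linear 2π term for yaw follows the text; the π factor inside the sine is an assumption, since formula (4) itself is not reproduced here, and 0.058 is the example value of the preset parameter given above.

```python
import math

def yaw_at(t, halftime, yaw0=0.0):
    """Yaw angle at time point t: initial yaw plus 2*pi times t/halftime."""
    return yaw0 + 2.0 * math.pi * (t / halftime)

def pitch_at(t, halftime, pitch0=0.0, k=0.058):
    """Pitch angle at time point t: initial pitch plus a small sinusoidal
    offset scaled by the preset parameter k (0.058 in the example)."""
    return pitch0 + k * math.sin(math.pi * (t / halftime))
```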
  • the target time is used as the rotation start time, and the rotation end time is determined based on the rotation start time and the preset duration.
  • the rotation period is determined based on the rotation start time and the rotation end time, and the ratio of each time point within the preset duration to the rotation period is calculated; then, based on the ratio and the initial yaw angle, the yaw angle corresponding to the target video frame at each time point within the preset duration is calculated, and based on the ratio and the initial pitch angle, the corresponding pitch angle is calculated.
  • the target video frame can be rotated at an even speed according to the accurately determined yaw angle and pitch angle, ensuring the stability of the rotation effect and improving the visual effect of the rotated video.
  • through the synergy of the pitch angle and the yaw angle, the target video frame can be rotated at a uniform speed toward the upper left or upper right until it reaches the target position, and then rotated toward the lower left or lower right, ensuring a good visual effect of the rotated video.
  • each time point within the preset duration constitutes a time point set, and the time point set includes a first time point set and a second time point set; the first time point set consists of the first time points before the target video frame is rotated to the first target position, and the second time point set consists of the second time points while the target video frame is rotated from the first target position to the second target position. S210 specifically includes: rotating the target video frame toward the upper left or upper right based on the yaw angle and pitch angle corresponding to each first time point; copying the rotated target video frame at each first time point to obtain a first video frame sequence; when the target video frame rotates to the first target position, rotating it toward the lower right or lower left based on the yaw angle and pitch angle corresponding to each second time point, until it is rotated to the second target position; copying the rotated target video frame at each second time point to obtain a second video frame sequence; and generating a rotated video based on the first video frame sequence and the second video frame sequence.
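The two-phase procedure above can be sketched as follows; `render_view` is a hypothetical stand-in for re-rendering the panoramic frame at a given yaw/pitch (here it simply records the angles), and all function names are illustrative only.

```python
def render_view(frame, yaw, pitch):
    # Hypothetical placeholder: a real implementation would re-render the
    # spherical frame as seen from this yaw/pitch.
    return (frame, yaw, pitch)

def build_rotated_video(frame, first_points, second_points, yaw_fn, pitch_fn):
    """first_points: time points of the upward (upper-left/right) phase;
    second_points: time points of the downward (lower-right/left) phase;
    yaw_fn / pitch_fn: map a time point to its yaw / pitch angle."""
    # Phase 1: rotate toward the upper left/right, copying each view.
    first_sequence = [render_view(frame, yaw_fn(t), pitch_fn(t))
                      for t in first_points]
    # Phase 2: from the first target position, rotate toward the lower
    # right/left until the second target position, copying each view.
    second_sequence = [render_view(frame, yaw_fn(t), pitch_fn(t))
                       for t in second_points]
    # The rotated video is the concatenation of the two frame sequences.
    return first_sequence + second_sequence
```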
  • the first target position is a preset rotation position, which may be a plane in which the pitch angle of the corresponding viewing angle of the target video frame is offset upward by 0.058 radians relative to the initial pitch angle.
  • the second target position is the position at which the perspective corresponding to the target video frame points to the tracking target.
  • the terminal first rotates the target video frame toward the upper left or upper right based on the yaw angle and pitch angle corresponding to each first time point in the first time point set, that is, the angle of view corresponding to the target video frame is rotated toward the upper left or upper right until it lies in the plane of the first target position. At each first time point of the rotation, the terminal copies the target video frame as rotated at that time point, and the video frames copied at the first time points are combined to obtain the first video frame sequence.
  • the terminal rotates the target video frame to the lower right direction or the lower left direction until it rotates to the second target position.
  • the terminal copies the target video frame rotated at that time point.
  • the video frames copied at each second time point are combined to obtain a second video frame sequence.
  • the terminal generates a rotated video based on the first video frame sequence and the second video frame sequence.
  • the terminal first rotates the target video frame toward the upper left or upper right according to the yaw angle and pitch angle, and then rotates it toward the lower right or lower left, ensuring the stability of the rotation effect and improving the visual effect of the rotated video. That is, the target video frame is first rotated diagonally upward and then diagonally downward, so a look-around effect is produced by the tilted rotation. Compared with another look-around solution, which rotates vertically upward, then horizontally, and finally vertically downward (with jarring transitions between the three stages), the rotation of this embodiment is smoother, improving the visual effect of the rotation.
  • the time point set composed of the time points within the preset duration also includes a third time point set; the third time point set consists of the time points during which the target video frame is rotated horizontally at the highest point of the motion process;
  • when the target video frame rotates to the highest point, the target video frame is rotated in the horizontal direction based on the yaw angle corresponding to each third time point; while the target video frame is rotated in the horizontal direction, its rotation speed component in the vertical direction is zero.
  • the terminal may cause the target video frame to rotate to a preset target position at the end of the rotation phase corresponding to the third time point.
  • the preset target position may be a position such that the corresponding perspective of the target video frame is rotated by a preset angle starting from the first target position.
  • the target video frame starts rotating in the horizontal direction from the first target position; when the viewing angle has rotated through a preset angle such as 90 degrees, 180 degrees or 360 degrees, it reaches the preset target position.
  • the terminal can first rotate the target video frame vertically or diagonally upward (toward the upper left or upper right) to the highest point; then, during the third time points, keep the rotation speed component in the vertical direction at zero and rotate horizontally; and finally rotate the target video frame vertically or diagonally downward (toward the lower left or lower right).
  • the terminal rotates the target video frame horizontally at its highest point, which can produce the visual effect of looking around.
  • When the terminal rotates the target video frame to the highest point of the motion process, the rotation speed component of the target video frame in the vertical direction is zero and the target video frame is rotated in the horizontal direction, for example through one full circle; the target video frame is then rotated toward the lower-right or lower-left direction.
  • In the above embodiment, when the target video frame rotates to the first target position, the target video frame is rotated in the horizontal direction based on the yaw angle and pitch angle corresponding to the third time point; when the target video frame rotates to the third target position, the target video frame is rotated toward the lower-right or lower-left direction based on the yaw angle and pitch angle corresponding to the second time point, until it is rotated to the second target position. This enriches the rotation effect of the target video frame and improves the visual effect of the rotated video.
  • In one embodiment, steps S602 to S606 are also performed before S206, and S206 specifically includes S608.
  • the tracking target is the target pointed by the perspective corresponding to the video, which can be a person or an object.
  • For example, the tracking target can be the user who throws the camera upward.
  • S602 specifically includes: the terminal starts video collection in response to the video collection instruction, and uses image recognition technology to identify the tracking target in the collected video.
  • the tracking data is data used to represent the position of the tracking target in the video screen.
  • the tracking data is the position coordinates of the tracking target in the spherical model.
  • the tracking data is (x, y, z), and x, y, and z are the position coordinates of the tracking target on the x-axis, y-axis, and z-axis respectively.
  • S606 Adjust the perspective of the original video according to the tracking data to obtain the adjusted original video.
  • the terminal adjusts the viewing angle of the original video based on the tracking data so that the viewing angle of each video frame in the original video points to the tracking target. Specifically, the terminal adjusts the viewing angle of the original video so that the viewing angle of each video frame in the original video points to the position represented by the tracking data.
  • the terminal adjusts the perspective of the original video based on the tracking data, and extracts the target video frame corresponding to the target moment from the adjusted original video.
  • the tracking target is identified in the collected video, tracking data corresponding to the tracking target is obtained, the perspective of the original video is adjusted according to the tracking data, and the target video frame corresponding to the target moment is extracted from the adjusted original video. This allows the angle of view in the adjusted original video to always look toward the tracking target, resulting in better visual effects.
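The view-angle adjustment toward the tracking data described above can be illustrated with a short sketch. In one embodiment the tracking data is a coordinate (x, y, z) in the spherical model, and the view of every frame is steered toward that position. This is a minimal illustration only: the axis convention (z up, yaw about the vertical axis, pitch as elevation) and the function name `look_at` are assumptions, not specified by this application.

```python
import math

def look_at(x, y, z):
    """Yaw/pitch (radians) that point the virtual camera at the
    tracking data (x, y, z) in the spherical model.
    Assumed convention: z is up, yaw rotates about the vertical
    axis, pitch is the elevation above the horizontal plane."""
    yaw = math.atan2(y, x)                    # heading in the horizontal plane
    pitch = math.atan2(z, math.hypot(x, y))   # elevation toward the target
    return yaw, pitch
```

With such a helper, adjusting the original video amounts to recomputing (yaw, pitch) for each frame from the tracked position, so the lens always looks toward the tracking target.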
  • after S210 it also includes: inserting the rotated video into the original video according to the target time to obtain a special effects video; and playing the special effects video in response to a playback instruction for the special effects video.
  • special effects videos are videos with special visual effects.
  • special effects videos are videos with frozen visual effects.
  • the frozen visual effect is a visual effect in which the picture taken when the camera moves to the target point is repeatedly played for a period of time, and the perspective of the picture is rotated during the repeated playback.
  • the frozen visual effect can be a high-altitude frozen visual effect.
  • the high-altitude freezing visual effect is a visual effect in which the picture taken when the camera reaches the highest point based on the upward throwing action is repeatedly played for a period of time, and the perspective of the picture is rotated during the repeated playback.
  • the target time can be the time when the camera reaches the target point.
  • The terminal inserts the rotated video after the target video frame corresponding to the target time in the original video, producing a visual effect in which the picture is frozen starting from the target time.
  • the rotated video is inserted into the original video according to the target time to obtain a special effect video.
  • the special effects video is played, which improves the rotation stability of the special effects video and ensures the visual effect.
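Representing videos as frame lists, the insertion step described above can be sketched as follows; the function name and the list representation are illustrative assumptions, not part of the application.

```python
def make_special_effects_video(original_frames, rotated_frames, target_index):
    """Insert the rotated (freeze-frame) clip right after the target
    video frame at position target_index, so playback freezes at the
    target time, the view spins, and the original video then resumes."""
    return (original_frames[:target_index + 1]
            + rotated_frames
            + original_frames[target_index + 1:])
```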
  • the terminal is a panoramic camera with a wide-angle lens.
  • the panoramic camera receives the video recording instruction triggered by the user, it automatically selects the tracking target and obtains the tracking data corresponding to the tracking target.
  • When the panoramic camera is thrown into the air by the user, it shoots the original video during the motion resulting from the upward throw. The perspective of the original video is then adjusted according to the tracking data so that the perspective of the lens in the original video always points to the tracking target.
  • The panoramic camera extracts gyroscope data from the original video and calculates the maximum acceleration and minimum acceleration during the movement from the gyroscope data. The maximum acceleration occurs at the moment of the throw, and the minimum acceleration occurs at the end of the movement.
  • The intermediate moment between the throw moment and the end of the movement is the moment at which the panoramic camera reaches the highest point of the upward throw.
  • The target video frame corresponding to the highest point of the upward throw is extracted from the adjusted original video and played repeatedly, and the perspective of the target video frame is adjusted during the repeated playback so that the perspective rotates.
  • Specifically, the terminal can calculate the yaw angle and the pitch angle according to the method of the embodiment of Figure 5, and then, according to the yaw angle and the pitch angle, rotate the perspective of the target video frame first toward the upper left or upper right and then toward the lower left or lower right, until the perspective returns to pointing at the tracking target.
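The apex detection used in this workflow (maximum acceleration at the throw, minimum acceleration at the end of the motion, apex at the midpoint of the two moments, per formula (1) of the description) can be sketched as below; the sample format and the function name are assumptions of this illustration.

```python
def find_target_time(samples):
    """samples: (timestamp, acceleration magnitude) pairs derived
    from the camera's gyroscope data during the throw."""
    t1 = max(samples, key=lambda s: s[1])[0]  # first moment: maximum acceleration (the throw)
    t2 = min(samples, key=lambda s: s[1])[0]  # second moment: minimum acceleration (end of motion)
    # Target moment, per formula (1): T0 = (T1 + T2) / 2
    return (t1 + t2) / 2
```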
  • the video processing method includes the following steps:
  • S702 Obtain the original video and camera movement data collected by the camera during movement, and determine the first acceleration and the second acceleration of the camera during movement based on the camera movement data.
  • S704 Determine the target time when the camera reaches the target point during movement based on the first time corresponding to the first acceleration and the second time corresponding to the second acceleration.
  • S708 Adjust the perspective of the original video according to the tracking data, and extract the target video frame corresponding to the target moment from the adjusted original video.
  • S710 Use the target time as the rotation starting time, and determine the rotation end time based on the rotation starting time and the preset duration.
  • S712 Determine the rotation period based on the rotation start time and the rotation end time, and calculate the ratio of each time point within the preset time period to the rotation period.
  • S714 Based on the ratio and the initial yaw angle, calculate the yaw angle corresponding to the target video frame at each time point within the preset duration.
  • S716 Based on the ratio and the initial pitch angle, calculate the pitch angle corresponding to the target video frame at each time point within the preset duration.
  • the time point includes a first time point and a second time point.
  • S718 Based on the yaw angle and pitch angle corresponding to the first time point, rotate the target video frame toward the upper-left or upper-right direction, and copy the rotated target video frame at each first time point to obtain a first video frame sequence.
  • S720 When the target video frame rotates to the first target position, rotate the target video frame toward the lower-right or lower-left direction based on the yaw angle and pitch angle corresponding to the second time point, until it is rotated to the second target position.
  • S722 Copy the target video frames rotated at each second time point to obtain a second video frame sequence, and generate a rotated video based on the first video frame sequence and the second video frame sequence.
  • embodiments of the present application also provide a video processing device for implementing the above-mentioned video processing method.
  • The implementation scheme provided by this device for solving the problem is similar to the implementation scheme recorded in the above method; therefore, for the specific limitations in the one or more video processing device embodiments provided below, reference may be made to the limitations on the video processing method above, which will not be repeated here.
  • a video processing device including: an acquisition module 802, a determination module 804, an extraction module 806, a calculation module 808 and a rotation processing module 810, wherein:
  • the acquisition module 802 is used to acquire the original video and camera movement data collected by the camera during movement;
  • the determination module 804 is used to determine the target moment when the camera reaches the target point during movement according to the camera movement data;
  • Extraction module 806 is used to extract the target video frame corresponding to the target moment from the original video
  • the rotation processing module 808 is used to rotate the target video frame according to the target time and the preset duration to obtain a rotated video.
  • the original video collected by the camera during movement is obtained, and the target moment when the camera reaches the target point during movement is determined based on the camera movement data extracted from the original video. Extract the target video frame corresponding to the target moment from the original video.
  • the target video frame can be accurately determined based on the camera motion data, ensuring the accuracy of the determined target video frame.
  • the target video frame is rotated to obtain a rotated video. Since the computer device can accurately determine the target point reached by the camera during movement based on the camera movement data, the efficiency of determining the target point is improved, and the visual effect of the rotated video is improved.
  • the device further includes:
  • the calculation module 810 is used to calculate the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration according to the target time and the preset duration;
  • the rotation processing module 808 is also used to perform rotation processing on the target video frame based on the yaw angle and the pitch angle to obtain a rotated video.
  • the determination module 804 is also used to:
  • determine the first acceleration and the second acceleration of the camera during movement according to the camera movement data;
  • determine the target moment at which the camera reaches the target point during movement according to the first moment corresponding to the first acceleration and the second moment corresponding to the second acceleration.
  • the rotation processing module 808 is also used to:
  • take the target time as the rotation start time, and determine the rotation end time according to the rotation start time and the preset duration; determine the rotation period according to the rotation start time and the rotation end time; calculate the ratio of each time point within the preset duration to the rotation period; based on the ratio and the initial yaw angle, calculate the yaw angle corresponding to the target video frame at each time point within the preset duration; based on the ratio and the initial pitch angle, calculate the pitch angle corresponding to the target video frame at each time point within the preset duration.
  • In one embodiment, the time points within the preset duration constitute a time point set, and the time point set includes a first time point set and a second time point set; the first time point set consists of the first time points before the target video frame is rotated to the first target position; the second time point set consists of the second time points during which the target video frame is rotated between the first target position and the second target position; the rotation processing module 808 is also used to:
  • based on the yaw angle and pitch angle corresponding to the first time point, rotate the target video frame toward the upper-left or upper-right direction, and copy the target video frame rotated at each first time point to obtain a first video frame sequence;
  • when the target video frame is rotated to the first target position, rotate the target video frame toward the lower-right or lower-left direction based on the yaw angle and pitch angle corresponding to the second time point, until it is rotated to the second target position, and copy the target video frame rotated at each second time point to obtain a second video frame sequence;
  • a rotated video is generated based on the first video frame sequence and the second video frame sequence.
  • In one embodiment, the time point set composed of the time points within the preset duration includes a third time point set; the third time point set consists of the third time points during which the target video frame is rotated horizontally at the highest point of the motion process;
  • the rotation processing module 808 is also used to: when the target video frame rotates to the highest point, based on the yaw angle and pitch angle corresponding to the third time point, rotate the target video frame in the horizontal direction; when the target video frame rotates in the horizontal direction, the rotation speed component in the vertical direction is zero.
  • the device further includes:
  • Recognition module 812 used to identify tracking targets in pre-collected videos
  • the acquisition module 802 is also used to obtain tracking data corresponding to the tracking target;
  • An adjustment module 814 is used to adjust the viewing angle of the original video according to the tracking data to obtain an adjusted original video
  • the extraction module 806 is also used to extract the target video frame corresponding to the target moment from the adjusted original video.
  • the device further includes:
  • the insertion module 816 is used to insert the rotated video into the original video according to the target moment to obtain a special effects video
  • the play module 818 is used to play the special effects video in response to the play instruction for the special effects video.
  • Each module in the above video processing device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 10 .
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores operating systems and computer programs. This internal memory provides an environment for the execution of operating systems and computer programs in non-volatile storage media.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer program implements a video processing method when executed by a processor.
  • the display unit of the computer device is used to form a visually visible picture and can be a display screen, a projection device or a virtual reality imaging device.
  • the display screen can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device may be a touch layer covering the display screen, or may be buttons, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • FIG. 10 is only a block diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • A specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor.
  • a computer program is stored in the memory.
  • the processor executes the computer program, it implements the steps in the above method embodiments.
  • a computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the steps in each of the above method embodiments are implemented.
  • a computer program product including a computer program that implements the steps in each of the above method embodiments when executed by a processor.
  • the user information involved in this application (including but not limited to user equipment information, user personal information, etc.) and the data involved (including but not limited to data used for analysis, stored data, displayed data, etc.) are information and data authorized by the user or fully authorized by all parties.
  • all or part of the processes in the above method embodiments can be implemented by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic based on quantum computing, and the like, without being limited thereto.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to a video processing method, device, computer equipment, storage medium, and computer program product. The method includes: acquiring an original video and camera movement data collected by a camera during movement; determining, according to the camera movement data, a target moment at which the camera reaches a target point during the movement; extracting a target video frame corresponding to the target moment from the original video; and rotating the target video frame according to the target moment and a preset duration to obtain a rotated video. This method can improve the visual effect of the picture.

Description

Video processing method, device, computer equipment and storage medium — Technical field
The present application relates to the field of computer technology, and in particular to a video processing method, device, computer equipment, and storage medium.
Background
With the development of computer technology, panoramic cameras are used more and more widely. A panoramic camera can shoot panoramic video, whose advantage is a very wide viewing angle; for example, a panoramic video can cover a 360-degree viewing angle.
However, displays show flat images, so more and more people are studying how to display a 360-degree panoramic video fully and effectively, so as to effectively improve the user experience.
Summary of the invention
Based on this, for the technical problem of how to display panoramic video fully and effectively, it is necessary to provide a video processing method, device, computer equipment, and computer-readable storage medium.
In a first aspect, the present application provides a video processing method. The method includes:
acquiring an original video and camera movement data collected by a camera during movement;
determining, according to the camera movement data, a target moment at which the camera reaches a target point during the movement;
extracting a target video frame corresponding to the target moment from the original video;
rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
In a second aspect, the present application further provides a video processing device. The device includes:
an acquisition module, used to acquire the original video and camera movement data collected by the camera during movement;
a determination module, used to determine, according to the camera movement data, the target moment at which the camera reaches the target point during the movement;
an extraction module, used to extract the target video frame corresponding to the target moment from the original video;
a rotation processing module, used to rotate the target video frame according to the target moment and the preset duration to obtain a rotated video.
In one embodiment, the device further includes:
a calculation module, used to calculate, according to the target moment and the preset duration, the yaw angle and pitch angle corresponding to the target video frame at each time point within the preset duration;
the rotation processing module is further used to rotate the target video frame based on the yaw angle and the pitch angle to obtain a rotated video.
In one embodiment, the determination module is further used to:
determine, according to the camera movement data, a first acceleration and a second acceleration of the camera during the movement;
determine, according to a first moment corresponding to the first acceleration and a second moment corresponding to the second acceleration, the target moment at which the camera reaches the target point during the movement.
In one embodiment, the rotation processing module is further used to:
take the target moment as a rotation start moment, and determine a rotation end moment according to the rotation start moment and the preset duration;
determine a rotation period according to the rotation start moment and the rotation end moment;
calculate the ratio of each time point within the preset duration to the rotation period;
calculate, based on the ratio and an initial yaw angle, the yaw angle corresponding to the target video frame at each time point within the preset duration;
calculate, based on the ratio and an initial pitch angle, the pitch angle corresponding to the target video frame at each time point within the preset duration.
In one embodiment, the time points within the preset duration constitute a time point set, and the time point set includes a first time point set and a second time point set; the first time point set consists of the first time points before the target video frame is rotated to a first target position; the second time point set consists of the second time points during which the target video frame is rotated between the first target position and a second target position; the rotation processing module is further used to:
rotate the target video frame toward the upper-left or upper-right direction based on the yaw angle and pitch angle corresponding to the first time point;
copy the target video frame rotated at each first time point to obtain a first video frame sequence;
when the target video frame is rotated to the first target position, rotate the target video frame toward the lower-right or lower-left direction based on the yaw angle and pitch angle corresponding to the second time point, until it is rotated to the second target position;
copy the target video frame rotated at each second time point to obtain a second video frame sequence;
generate a rotated video based on the first video frame sequence and the second video frame sequence.
In one embodiment, the time points within the preset duration constitute a time point set; the time point set includes a third time point set; the third time point set consists of the third time points during which the target video frame is rotated horizontally at the highest point of the motion process; the rotation processing module is further used to:
when the target video frame is rotated to the highest point, rotate the target video frame in the horizontal direction based on the yaw angle and pitch angle corresponding to the third time point; when the target video frame is rotated in the horizontal direction, its rotation speed component in the vertical direction is zero.
In one embodiment, the device further includes:
a recognition module, used to identify a tracking target in a pre-collected video;
the acquisition module is further used to acquire tracking data corresponding to the tracking target;
an adjustment module, used to adjust the viewing angle of the original video according to the tracking data to obtain an adjusted original video;
the extraction module is further used to extract the target video frame corresponding to the target moment from the adjusted original video.
In one embodiment, the device further includes:
an insertion module, used to insert the rotated video into the original video according to the target moment to obtain a special effects video;
a playback module, used to play the special effects video in response to a playback instruction for the special effects video.
In a third aspect, the present application further provides a computer device. The computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an original video and camera movement data collected by a camera during movement;
determining, according to the camera movement data, a target moment at which the camera reaches a target point during the movement;
extracting a target video frame corresponding to the target moment from the original video;
rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
In a fourth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
acquiring an original video and camera movement data collected by a camera during movement;
determining, according to the camera movement data, a target moment at which the camera reaches a target point during the movement;
extracting a target video frame corresponding to the target moment from the original video;
rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program; when the computer program is executed by a processor, the following steps are implemented:
acquiring an original video and camera movement data collected by a camera during movement;
determining, according to the camera movement data, a target moment at which the camera reaches a target point during the movement;
extracting a target video frame corresponding to the target moment from the original video;
rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
With the above video processing method, device, computer device, storage medium, and computer program product, the original video and camera movement data collected by the camera during movement are acquired, and the target moment at which the camera reaches the target point during the movement is determined according to the camera movement data. The target video frame corresponding to the target moment is extracted from the original video, so that the target video frame can be determined accurately according to the camera movement data, ensuring the accuracy of the determined target video frame. The target video frame is then rotated according to the target moment and the preset duration to obtain a rotated video. Since the computer device can accurately determine the target point reached by the camera during the movement according to the camera movement data, the efficiency of determining the target point is improved, and the visual effect of the rotated video is improved.
Brief description of the drawings
Figure 1 is an application environment diagram of the video processing method in one embodiment;
Figure 2 is a schematic flowchart of the video processing method in one embodiment;
Figure 3 is a schematic diagram of rotating a target video frame rendered in a spherical model in one embodiment;
Figure 4 is a schematic flowchart of a method for determining the target moment in one embodiment;
Figure 5 is a schematic flowchart of a method for calculating the yaw angle and pitch angle in one embodiment;
Figure 6 is a schematic flowchart of a method for adjusting the viewing angle of the original video in one embodiment;
Figure 7 is a schematic flowchart of the video processing method in another embodiment;
Figure 8 is a structural block diagram of the video processing device in one embodiment;
Figure 9 is a structural block diagram of the video processing device in one embodiment;
Figure 10 is an internal structure diagram of the computer device in one embodiment.
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例提供的视频处理方法,可以应用于如图1所示的应用环境中。其中,终端102获取相机在运动过程中采集的原始视频以及相机运动数据;根据相机运动数据,确定相机在运动过程中到达目标点的目标时刻;从原始视频中提取目标时刻对应的目标视频帧;根据目标时刻与预设时长,计算目标视频帧在预设时长内的各时间点对应的偏航角与俯仰角;基于偏航角与俯仰角,对目标视频帧进行旋转处理,得到旋转视频。其中,终端102可以但不限于是相机、摄像头、智能手机或者便携式可穿戴设备。相机可以为普通相机、全景相 机或者运动相机等。便携式可穿戴设备可为智能手表、智能手环、头戴设备等。
需要说明的是，以下针对如何将具有360度视角的全景视频进行充分有效展示的问题进行说明。
采用的方法可以是:根据全景相机拍摄的视频的一帧,旋转产生旋转视频。例如,通过人工对全景相机拍摄的视频进行观察,找到全景相机在目标点拍摄的视频帧,然后对通过人眼观察确定的视频帧进行旋转,对目标点(例如最高点)的画面信息进行充分展示,并生成具有凝固视觉效果的画面。
在上述操作过程中,用户不仅需要学习大量的全景视频剪辑知识和操作,例如关键帧、旋转或者打点等,带来大量的学习成本和人力成本;而且确认的目标点不准确,导致所得旋转视频(或者插入旋转视频的原始视频)的视觉效果差。例如,在用户进行抛拍(将相机上抛并拍摄一段视频)的视频剪辑时,需要确定的最高点的视频帧。
如果确定的视频帧所在时间点不是最高点，则意味着无法获得最高点带来的最高、最广阔的视野效果，也意味着在原始视频的下一帧中相机会继续上升，而视频帧的旋转视频结尾是下降，将旋转视频插入原始视频时，会产生非常大的顿挫感，导致插入旋转视频后的原始视频效果不佳。
针对如何获得较佳旋转视频(或者插入旋转视频的原始视频)的技术问题,在一个实施例中,如图2所示,提供了一种视频处理方法,以该方法应用于图1中的终端为例进行说明,包括以下步骤:
S202,获取相机在运动过程中采集的原始视频以及相机运动数据。
其中,运动为相机以一定初速度离开初始位置,在重力或者其他作用力的作用下的运动,包括上抛运动或者水平运动等。上抛运动例如可以是竖直上抛运动、竖直下抛运动、平抛运动或者斜抛运动等。例如,当相机被用户竖直上抛时,相机的运动过程包括:首先向斜上方运动,直到垂直方向的速度为零,然后再向下运动,直到落在目标位置。又例如,当相机被弹性部件(例如,弹簧)水平弹出时,相机的运动过程包括:首先远离出发点到达最远端,然后在弹力作用下回到出发点。相机在运动过程中采集原始数据,所采集的原始数据中包括原始视频与相机运动数据。原始视频为相机在运动过程中采集的视频,可以为全景视频。全景视频是可视角度大于人眼正常有效视角的视频,包括黑 白视频或者彩色视频等。例如,全景视频为水平方向上的视角范围大于90度,垂直方向上的视角范围大于70度的视频。
S204,根据相机运动数据,确定相机在运动过程中到达目标点的目标时刻。
其中,相机运动数据为记录相机运动情况的数据,可以是通过相机中陀螺仪采集的数据,包括相机在X轴、Y轴和Z轴三个方向的加速度,或者也可以为对X、Y、Z三个方向的加速度进行合成所得的加速度。目标点可以为相机在运动过程中到达的任意一点,例如,目标点可以为相机基于上抛运动所达到的最高点。又例如,目标点可以为相机在垂直方向速度为零的点。又例如,目标点可以为相机在水平方向速度为零的点。目标时刻为相机到达目标点的时刻。例如,目标时刻为原始视频的第3分4秒。又例如,目标时刻为11时3分15秒。S206,从原始视频中提取目标时刻对应的目标视频帧。
其中,目标视频帧为相机运动至目标点时采集的视频帧。例如,目标视频帧为相机运动至最高点时采集的视频帧。又例如,目标视频帧为相机运动至垂直方向速度为零时采集的视频帧。又例如,目标视频帧为相机运动至水平方向速度为零时采集的视频帧。
在一个实施例中,S206具体包括:根据三维球体模型对原始视频进行渲染,从渲染后所得的球形全景视频中提取目标时刻对应的目标视频帧。
S208,根据目标时刻与预设时长,对目标视频帧进行旋转处理,得到旋转视频。
其中,预设时长为生成的视频特效的时长,可以为任意数值。例如,预设时长为8秒、6秒或者0.1分钟等。旋转处理为使目标视频帧对应的视角进行旋转的处理方式。例如,如图3所示,旋转处理可以为使球形模型中心的虚拟摄像机看向目标视频帧的角度进行旋转的处理方式。终端在对目标视频帧进行旋转处理时,可以使目标视频帧在水平方向或者垂直方向具有旋转速度分量,或者也可以使目标视频帧在水平方向与垂直方向均具有旋转速度分量。
上述实施例中,获取相机在运动过程中采集的原始视频,根据从原始视频中提取的相机运动数据,确定相机在运动过程中到达目标点的目标时刻,从而可以根据相机运动数据准确的确定目标点,提高了确定目标点的准确性。从原 始视频中提取目标时刻对应的目标视频帧。然后根据目标时刻与预设时长,对目标视频帧进行旋转处理,得到旋转视频。由于计算机设备可以根据相机运动数据精确的确定相机在运动过程中到达的目标点,提高了确定目标点的效率和准确性,从而准确获取目标视频帧,提高了旋转视频的视觉效果。
在一个实施例中,S208具体包括:根据目标时刻与预设时长,计算目标视频帧在预设时长内的各时间点对应的偏航角与俯仰角;基于偏航角与俯仰角,对目标视频帧进行旋转处理,得到旋转视频。
偏航角为目标视频帧对应的视角绕Z轴旋转的角度。例如,偏航角为30度、45度或者60度等。俯仰角为目标视频帧对应的视角绕X轴旋转的角度。例如,俯仰角为15度、20度或者25度等。例如,如图3所示,目标视频帧为渲染在球形模型中的三维全景视频帧,目标视频帧对应的视角为通过球形模型中的虚拟摄像机观察目标视频帧的视角。当目标视频帧对应的视角从A点移动到B点时,偏航角与俯仰角对应发生改变。
终端根据目标时刻与预设时长,计算目标视频帧在预设时长内的各时间点对应的偏航角与俯仰角。例如,在目标时刻(t0),目标视频帧对应的视角看向A点,然后终端在预设时长(假如为L)内对目标视频帧对应的视角进行旋转,根据目标时刻与预设时长计算得到旋转结束时间点为t3=t0+L。终端使目标视频帧对应的视角在t1时间点看向B点,在t2时间点看向C点,在t3时间点看向D点。终端可以根据t0与t3计算目标视频帧在t1、t2、t3时间点分别对应的偏航角与俯仰角。
上述实施例中,终端根据目标时刻与预设时长,计算目标视频帧在预设时长内的各时间点对应的偏航角与俯仰角;基于偏航角与俯仰角,对目标视频帧进行旋转处理,得到旋转视频。从而可以根据目标时刻与预设时长准确的确定偏航角与俯仰角,保证了偏航角与俯仰角的精度,提高了旋转效果的稳定性;并且通过运动数据确定目标点,以便于结合后续的视频帧旋转操作,实现视频的自动剪辑,不需要用户过多的学习全景视频的剪辑知识,有效的降低了用户对全景视频剪辑的学习成本。
在一个实施例中,如图4所示,S204具体包括如下步骤:
S402,根据相机运动数据,确定相机在运动过程中的第一加速度以及第二加速度。
其中,第一加速度为相机在运动过程中的最大加速度。例如,当相机基于用户的上抛动作进行运动时,第一加速度为相机在被抛起时刻的加速度,此时第一加速度的方向与相机的运动方向相同,也即第一加速度为正值。又例如,当相机被所连接的弹性部件(例如弹簧)水平弹出时,第一加速度为相机在被弹出时刻的加速度。第二加速度为相机在运动过程中的最小加速度,此时第二加速度的方向与相机的运动方向相反,也即第二加速度为负值。例如,当相机基于用户的上抛动作进行运动时,第二加速度为相机的运动过程结束时的加速度。又例如,当相机被所连接的弹性部件(例如弹簧)水平弹出时,第二加速度为相机基于弹性部件的拉力回到出发点时的加速度。
在一个实施例中,第一加速度与第二加速度均为对X轴、Y轴与Z轴三个方向加速度进行合成所得的加速度。
S404,根据第一加速度对应的第一时刻以及第二加速度对应的第二时刻,确定相机在运动过程中到达目标点的目标时刻。
其中,第一时刻为相机具有第一加速度时的时刻。例如,当相机基于用户的上抛动作进行运动时,第一时刻为相机被抛起的时刻。又例如,当相机被弹性部件(例如弹簧)水平弹出时,第一时刻为相机被弹出的时刻。第二时刻为相机具有第二加速度时的时刻。例如,当相机基于用户的上抛动作进行运动时,第二时刻为相机运动过程结束时的时刻。又例如,当相机被所连接的弹性部件(例如弹簧)水平弹出时,第二时刻为相机基于弹性部件的拉力回到出发点时的时刻。
在一个实施例中,目标点为相机基于上抛动作运动的最高点,或者目标点也可以为在被弹性部件水平弹出时所运动的最远点。目标时刻为第一时刻与第二时刻的中间时刻,终端根据公式(1)计算得到目标时刻。其中,T0为目标时刻,T1为第一时刻,T2为第二时刻。
T0=(T2+T1)/2    (1)
上述实施例中,根据相机运动数据,确定相机在运动过程中的第一加速度以及第二加速度。根据第一加速度对应的第一时刻以及第二加速度对应的第二时刻,确定相机在运动过程中到达目标点的目标时刻。根据第一加速度以及第二加速度精确确定相机到达目标点的目标时刻,从而可以根据目标时刻从原始视频中准确的提取出目标视频帧,保证了所得特效画面的视觉效果。
在一个实施例中,目标点为相机在上抛运动过程中到达的最高点。由于相机在达到最高点时不会受到遮挡,可以最佳视野进行拍摄,最高点对应的目标视频帧具有最佳的画面视觉效果,因此对目标视频帧进行旋转处理所得的旋转视频视觉效果好,提高了旋转视频的视觉效果。此外,当通过人工手动确定最高点时,所确定的最高点可能存在误差,并不是相机运动的实际最高点,若对手动确定的最高点对应的目标视频帧进行旋转处理,使目标视频帧先向上旋转再向下旋转,并将旋转处理所得的旋转视频插入原始视频,得到特效视频。在特效视频中,在旋转视频向下旋转结束回到原始视频时,原始视频会继续向上旋转,也即旋转视频不能流畅的与原始视频进行结合,导致视觉效果差。而本申请中通过相机运动数据可以精确的确定相机运动过程的最高点,从而可以使旋转视频与原始视频流畅的进行结合,提高了视频特效的视觉效果。
在一个实施例中,如图5所示,S208具体包括如下步骤:
S502,将目标时刻作为旋转起始时刻,根据旋转起始时刻与预设时长确定旋转结束时刻。
其中,旋转起始时刻为开始对目标视频帧进行旋转处理的时刻。例如,终端可以对旋转起始时刻的时间进行归零,使旋转起始时刻为零。又例如,终端可以按照目标视频帧在原始视频中出现的时刻确定旋转起始时刻。具体地,若目标视频帧出现在原始视频中的第6.05秒,则将旋转起始时刻确定为第6.05秒。
在一个实施例中,S502具体包括:计算旋转起始时刻与预设时长的和值,将所得和值作为旋转结束时刻。
S504,根据旋转起始时刻与旋转结束时刻确定旋转周期。
其中,定义旋转起始时刻与旋转结束时刻为一个总周期,则一个总周期可以包括多个旋转周期,多个旋转周期的持续时间可以相同也可以不同,具体根 据实际旋转效果的需求进行设置。例如,当旋转分为两段进行时,一个旋转周期可以等于半个总周期,而当旋转分为三段进行时,旋转周期可以为三分之一个总周期。可以理解地,各旋转周期的持续时间也可以不相等。
在一个实施例中，S504具体包括：终端计算旋转结束时刻与旋转起始时刻的差值，然后再计算所得差值与2的比值，得到旋转周期。具体地，终端根据公式(2)计算旋转周期，其中，halftime为旋转周期，endtime为旋转结束时刻，startime为旋转起始时刻。
halftime=(endtime-startime)/2   (2)
S506,计算预设时长内的各时间点与旋转周期的比值。
终端计算预设时长内的各时间点与旋转周期的比值，计算所得的比值为t/halftime，其中，t可以为预设时长内的各时间点。
S508,基于比值以及初始偏航角,计算目标视频帧在预设时长内的各时间点对应的偏航角。
其中,初始偏航角为预设的偏航角,例如初始偏航角可以为目标视频帧在旋转起始时刻的视角所对应的偏航角。在一个实施例中,S508具体包括:终端计算比值与2π的乘积,然后将所得的乘积与初始偏航角相加,得到目标视频帧在预设时长内的各时间点对应的偏航角。具体地,终端根据公式(3)计算得到偏航角,其中,Yaw0为初始偏航角,t可以为预设时长内的各时间点,halftime为旋转周期。
Yaw=Yaw0+(t/halftime)×2π     (3)
S510,基于比值以及初始俯仰角,计算目标视频帧在预设时长内的各时间点对应的俯仰角。
其中，初始俯仰角为预设的俯仰角，例如，初始俯仰角为目标视频帧在旋转起始时刻的视角所对应的俯仰角。在一个实施例中，S510具体包括：终端计算比值与2π的乘积的正弦，然后再将所得的正弦值与预设参数相乘（预设参数例如可以是0.058）。最后将与预设参数相乘后所得的数值与初始俯仰角相加，得到目标视频帧在预设时长内的各时间点对应的俯仰角。具体地，终端根据公式(4)计算得到俯仰角，其中，pitch0为初始俯仰角，t可以为预设时长内的各时间点，halftime为旋转周期。
pitch=pitch0+sin((t/halftime)×2π)×0.058    (4)
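Formulas (2) and (3), together with the pitch computation described in prose for S510, can be combined into a short sketch. Note that the exact form of formula (4) is not fully legible in the text, so the sine argument below (the ratio times 2π, mirroring formula (3)) and the zero-referenced time origin are assumptions; the 0.058 amplitude is the example parameter given in the text.

```python
import math

def rotation_angles(t, start_time, end_time, yaw0=0.0, pitch0=0.0):
    """Yaw/pitch of the frozen frame's view at time point t within
    [start_time, end_time] (formulas (2)-(3) plus reconstructed (4))."""
    halftime = (end_time - start_time) / 2   # formula (2): rotation period
    ratio = (t - start_time) / halftime      # ratio of the time point to the rotation period
    yaw = yaw0 + ratio * 2 * math.pi         # formula (3)
    pitch = pitch0 + math.sin(ratio * 2 * math.pi) * 0.058  # assumed form of formula (4)
    return yaw, pitch
```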
上述实施例中,将目标时刻作为旋转起始时刻,根据旋转起始时刻与预设时长确定旋转结束时刻。根据旋转起始时刻与旋转结束时刻确定旋转周期并计算预设时长内的各时间点与旋转周期的比值。然后基于比值以及初始偏航角,计算目标视频帧在预设时长内的各时间点对应的偏航角;基于比值以及初始俯仰角,计算目标视频帧在预设时长内的各时间点对应的俯仰角。从而可以根据精确确定的偏航角与俯仰角对目标视频帧进行均速旋转,保证了旋转效果的稳定性,提高了旋转视频的视觉效果。此外,在根据俯仰角与偏航角对目标视频帧进行旋转处理时,通过俯仰角与偏航角的协同作用可以实现目标视频帧均速的向左上或右上方向进行旋转并在旋转至目标位置时再向左下或右下进行旋转,保证了旋转视频具有最佳的视觉效果。
在一个实施例中,预设时长内的各时间点组成时间点集,时间点集包括第一时间点集与第二时间点集;第一时间点集由将目标视频帧旋转至第一目标位置之前的各第一时间点组成;第二时间点集由将目标视频帧旋转至第一目标位置与第二目标位置之间的第二时间点组成;S210具体包括:基于第一时间点对应的偏航角与俯仰角,使目标视频帧向左上方向或者右上方向进行旋转;对在各第一时间点旋转后的目标视频帧进行复制,得到第一视频帧序列;当目标视频帧旋转至第一目标位置时,基于第二时间点对应的偏航角与俯仰角,使目标视频帧向右下方向或者左下方向进行旋转,直至旋转至第二目标位置;对在各第二时间点旋转后的目标视频帧进行复制,得到第二视频帧序列;基于第一视频帧序列与第二视频帧序列生成旋转视频。
其中,第一目标位置为预设的旋转位置,可以为目标视频帧对应视角的俯仰角相对于初始俯仰角向上偏移0.058弧度的平面。第二目标位置为目标视频帧 对应视角指向追踪目标的位置。
终端首先基于第一时间点集中各第一时间点对应的偏航角与俯仰角，将目标视频帧向左上方向或者右上方向进行旋转，也即使目标视频帧对应的视角向左上方向或者右上方向进行旋转，直到目标视频帧的视角旋转至第一目标位置所在的平面。在旋转的每个第一时间点，终端对在该时间点旋转后的目标视频帧进行复制，在各第一时间点复制的视频帧组合得到第一视频帧序列。然后终端基于第二时间点集中各第二时间点对应的偏航角与俯仰角，使目标视频帧向右下方向或者左下方向进行旋转，直至旋转至第二目标位置。在旋转的每个第二时间点，终端对在该时间点旋转后的目标视频帧进行复制，在各第二时间点复制的视频帧组合得到第二视频帧序列。最后，终端基于第一视频帧序列与第二视频帧序列生成旋转视频。
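The generation of the two frame sequences described above can be sketched as follows. `render` stands for an assumed helper that re-renders the panoramic frame for a given (yaw, pitch) view, and `angles` for the per-time-point angle computation; both names are illustrative, not defined by the application.

```python
def build_rotated_video(target_frame, first_points, second_points, angles, render):
    """first_points: time points before the first target position is reached;
    second_points: time points between the first and second target positions;
    angles(t) -> (yaw, pitch); render(frame, yaw, pitch) -> rotated copy."""
    first_seq = [render(target_frame, *angles(t)) for t in first_points]
    second_seq = [render(target_frame, *angles(t)) for t in second_points]
    # The rotated video is both sequences played back to back
    return first_seq + second_seq
```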
上述实施例中,终端根据偏航角与俯仰角,先使目标视频帧向左上方向或者右上方向进行旋转,然后再使目标视频帧向右下方向或者左下方向进行旋转,保证了旋转效果的稳定性,提高了旋转视频的视觉效果。并且,终端根据偏航角与俯仰角使目标视频帧先向左上或者右上进行旋转,然后再下右下或左下进行旋转,也即使目标视频帧先斜向上方进行旋转再斜向下方进行旋转,从而可以通过倾斜旋转产生环视四周的效果,相比于另一种实现环视四周的方案:垂直向上旋转,再水平旋转、最后垂直向下旋转(三个阶段之间的衔接存在顿挫),本实施例的旋转的视觉效果更加流畅,提高了旋转的视觉效果。
在一个实施例中,预设时长内的时间点组成的时间点集还包括第三时间点集;第三时间点集由使目标视频帧在运动过程的最高点进行水平旋转的各时间点组成;当目标视频帧旋转至最高点时,基于第三时间点对应的偏航角,使目标视频帧在水平方向进行旋转;目标视频帧在水平方向进行旋转时,在垂直方向的旋转速度分量为零。
其中,终端可以使目标视频帧在第三时间点对应的旋转阶段结束时旋转至预设的目标位置。例如,预设的目标位置可以为使目标视频帧对应视角从第一目标位置开始旋转预设角度的位置。例如,目标视频帧从第一目标位置开始在 水平方向进行旋转,当视角旋转了180度、360度、90度等预设角度时,到达预设的目标位置。
在一个实施例中,终端可以使目标视频帧先垂直向上或者斜向上(左上或者右上)旋转至最高点,然后保持垂直方向的旋转速度分量为零,并水平进行旋转,在第三时间点对应的旋转阶段结束时,使目标视频帧垂直向下或者斜向下(左下或者右下)进行旋转。终端使目标视频帧在最高点时水平进行旋转,可以产生环视四周的视觉效果。
当终端将目标视频帧旋转至运动过程的最高点时,使目标视频帧在垂直方向的旋转速度分量为零,在水平方向进行旋转。例如,使目标视频帧在水平方向旋转一周。然后再使目标视频帧向右下方向或者左下方向进行旋转。
上述实施例中,当目标视频帧旋转至第一目标位置时,基于第三时间点对应的偏航角与俯仰角,使目标视频帧在水平方向进行旋转,当目标视频帧旋转至第三目标位置时,基于第二时间点对应的偏航角与俯仰角,使目标视频帧向右下方向或者左下方向进行旋转,直至旋转至第二目标位置。从而可以丰富目标视频帧的旋转效果,提高了旋转视频的视觉效果。
在一个实施例中,如图6所示,S206之前还包括S602-S606,S206具体包括S608。
S602,在采集的视频中识别追踪目标。
其中,追踪目标为视频对应的视角所指向的目标,可以为人或者物体等,例如追踪目标可以为上抛相机的用户。在一个实施例中,S602具体包括:终端响应于视频采集指令开始进行视频采集,通过图像识别技术在采集的视频中识别追踪目标。
S604,获取追踪目标对应的追踪数据。
其中,追踪数据为用于表示追踪目标在视频画面中位置的数据。例如,追踪数据为追踪目标在球形模型中的位置坐标。例如,追踪数据为(x,y,z),x、y、z分别为追踪目标在x轴、y轴与z轴的位置坐标。
S606,根据追踪数据对原始视频进行视角调整,得到调整后的原始视频。
终端根据追踪数据对原始视频进行视角调整,使原始视频中各视频帧的视角指向追踪目标。具体的,终端对原始视频进行视角调整,使原始视频中各视频帧的视角指向追踪数据所表示的位置。
S608,从调整后的原始视频中提取目标时刻对应的目标视频帧。
终端根据追踪数据对原始视频进行视角调整,从调整后的原始视频中提取目标时刻对应的目标视频帧。
上述实施例中,在采集的视频中识别追踪目标,获取追踪目标对应的追踪数据,根据追踪数据对原始视频进行视角调整,并从调整后的原始视频中提取目标时刻对应的目标视频帧。从而可以使调整后的原始视频中视角始终看向追踪目标,具有更好的视觉效果。
在一个实施例中,S210之后还包括:根据目标时刻,将旋转视频插入原始视频,得到特效视频;响应于针对特效视频的播放指令,播放特效视频。
其中,特效视频为具有特殊视觉效果的视频。例如,特效视频为具有凝固视觉效果的视频。凝固视觉效果为对相机运动到目标点时拍摄的画面重复播放一段时间,并在重复播放期间使画面的视角进行旋转的视觉效果。例如,凝固视觉效果可以为高空凝固视觉效果。高空凝固视觉效果为对相机基于上抛动作运动到最高点时拍摄的画面重复播放一段时间,并在重复播放期间使画面的视角进行旋转的视觉效果。目标时刻可以为相机到达目标点的时刻,终端将旋转视频插入原始视频中目标时刻对应的目标视频帧之后,产生在从目标时刻开始画面被凝固的视觉效果。
上述实施例中,根据目标时刻,将旋转视频插入原始视频,得到特效视频。响应于针对特效视频的播放指令,播放特效视频,提高了特效视频的旋转稳定性,保证了视觉效果。
在一个实施例中,终端为具有广角镜头的全景相机。当全景相机接收到用户触发的视频录制指令时,自动选取追踪目标,并获取追踪目标对应的追踪数据。在全景相机被用户上抛到空中时,全景相机基于上抛动作运动的过程中拍 摄原始视频。然后根据追踪数据调整原始视频的视角,使原始视频中镜头的视角始终指向追踪目标。全景相机从原始视频中提取陀螺仪数据,根据陀螺仪数据计算运动过程中的最大加速度和最小加速度,最大加速度的出现时刻为起抛点时刻,最小加速度的出现时刻为运动结束时刻。起抛点时刻和运动结束时刻的中间时刻即为全景相机的上抛最高点时刻。从调整后的原始视频中提取上抛最高点时刻对应的目标视频帧,对该目标视频帧进行重复播放,并在重复播放期间调整目标视频帧的视角,使目标视频帧的视角进行旋转。具体地,终端可以根据图5实施例的方法计算偏航角与俯仰角,然后根据偏航角与俯仰角使目标视频帧的视角先向左上或右上进行旋转,然后再向左下或者右下进行旋转,直到回到视角指向追踪目标的状态。
在一个实施例中,如图7所示,视频处理方法包括如下步骤:
S702,获取相机在运动过程中采集的原始视频以及相机运动数据,根据相机运动数据,确定相机在运动过程中的第一加速度以及第二加速度。
S704,根据第一加速度对应的第一时刻以及第二加速度对应的第二时刻,确定相机在运动过程中到达目标点的目标时刻。
S706,响应于视频采集指令,在采集的视频中识别追踪目标,并获取追踪目标对应的追踪数据。
S708,根据追踪数据对原始视频进行视角调整,并从调整后的原始视频中提取目标时刻对应的目标视频帧。
S710,将目标时刻作为旋转起始时刻,根据旋转起始时刻与预设时长确定旋转结束时刻。
S712,根据旋转起始时刻与旋转结束时刻确定旋转周期,并计算预设时长内的各时间点与旋转周期的比值。
S714,基于比值以及初始偏航角,计算目标视频帧在预设时长内的各时间点对应的偏航角。
S716,基于比值以及初始俯仰角,计算目标视频帧在预设时长内的各时间点对应的俯仰角。时间点包括第一时间点与第二时间点。
S718,基于第一时间点对应的偏航角与俯仰角,使目标视频帧向左上方向或者右上方向进行旋转,对在各第一时间点旋转后的目标视频帧进行复制,得到第一视频帧序列。
S720,当目标视频帧旋转至第一目标位置时,基于第二时间点对应的偏航角与俯仰角,使目标视频帧向右下方向或者左下方向进行旋转,直至旋转至第二目标位置。
S722,对在各第二时间点旋转后的目标视频帧进行复制,得到第二视频帧序列,基于第一视频帧序列与第二视频帧序列生成旋转视频。
S724,根据目标时刻,将旋转视频插入原始视频,得到特效视频并播放。
上述S702至S724的具体内容可以参考上文的具体实现过程。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
Based on the same inventive concept, an embodiment of the present application further provides a video processing apparatus for implementing the video processing method described above. The problem-solving approach provided by the apparatus is similar to that described for the method, so for the specific limitations of the one or more video processing apparatus embodiments below, reference may be made to the limitations of the video processing method above, which are not repeated here.
In one embodiment, as shown in FIG. 8, a video processing apparatus is provided, including an acquisition module 802, a determination module 804, an extraction module 806, a rotation processing module 808, and a computing module 810, wherein:
the acquisition module 802 is configured to acquire the original video captured by the camera during its motion together with the camera motion data;
the determination module 804 is configured to determine, from the camera motion data, the target moment at which the camera reaches the target point during the motion;
the extraction module 806 is configured to extract, from the original video, the target video frame corresponding to the target moment; and
the rotation processing module 808 is configured to rotate the target video frame according to the target moment and a preset duration to obtain the rotated video.
In the above embodiment, the original video captured by the camera during its motion is acquired, and the target moment at which the camera reaches the target point is determined from the camera motion data extracted from the original video. The target video frame corresponding to the target moment is extracted from the original video; the target video frame can thus be determined precisely from the camera motion data, guaranteeing its accuracy. The target video frame is then rotated according to the target moment and the preset duration to obtain the rotated video. Because the computer device can precisely determine, from the camera motion data, the target point reached by the camera during its motion, the efficiency of determining the target point is improved, as is the visual effect of the rotated video.
In one embodiment, the apparatus further includes:
a computing module 810, configured to compute, according to the target moment and the preset duration, the yaw and pitch angles of the target video frame at each time point within the preset duration;
the rotation processing module 808 is further configured to rotate the target video frame based on the yaw and pitch angles to obtain the rotated video.
In one embodiment, the determination module 804 is further configured to:
determine, from the camera motion data, a first acceleration and a second acceleration of the camera during the motion; and
determine, from a first moment corresponding to the first acceleration and a second moment corresponding to the second acceleration, the target moment at which the camera reaches the target point during the motion.
In one embodiment, the rotation processing module 808 is further configured to:
take the target moment as the rotation start moment, and determine the rotation end moment from the rotation start moment and the preset duration;
determine the rotation period from the rotation start moment and the rotation end moment;
compute the ratio of each time point within the preset duration to the rotation period;
compute, based on the ratio and an initial yaw angle, the yaw angle of the target video frame at each time point within the preset duration; and
compute, based on the ratio and an initial pitch angle, the pitch angle of the target video frame at each time point within the preset duration.
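A minimal sketch of the ratio-based angle computation above, assuming the yaw and pitch each scale linearly from their initial values with the time-point/period ratio (the description fixes only that the angles follow the ratio, not the exact scaling law):

```python
def angles_at(t, t_start, duration, yaw0, pitch0):
    """Yaw and pitch applied to the frozen frame at time t.

    The rotation start moment is t_start (the target moment) and the
    rotation end moment is t_start + duration, so the rotation period
    is `duration`; the angles scale with the ratio (t - t_start) / period.
    """
    ratio = (t - t_start) / duration
    return yaw0 * ratio, pitch0 * ratio
```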
In one embodiment, the time points within the preset duration form a time point set that includes a first time point set and a second time point set; the first time point set consists of the first time points before the target video frame is rotated to a first target position, and the second time point set consists of the second time points while the target video frame is rotated from the first target position to a second target position. The rotation processing module 808 is further configured to:
rotate the target video frame towards the upper left or upper right based on the yaw and pitch angles corresponding to the first time points;
copy the target video frame as rotated at each first time point to obtain a first video frame sequence;
when the target video frame has rotated to the first target position, rotate it, based on the yaw and pitch angles corresponding to the second time points, towards the lower right or lower left until it reaches the second target position;
copy the target video frame as rotated at each second time point to obtain a second video frame sequence; and
generate the rotated video from the first video frame sequence and the second video frame sequence.
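The two-phase sequence generation above might be sketched as follows. The `rotate` renderer is a caller-supplied assumption standing in for whatever reprojection the camera actually performs, and the angle lists would come from the ratio-based computation described earlier:

```python
def rotation_sequences(frame, first_angles, second_angles, rotate):
    """Build the rotated video's frames from the frozen target frame.

    first_angles: (yaw, pitch) pairs for the upward phase (towards the
        upper left or upper right, ending at the first target position).
    second_angles: (yaw, pitch) pairs for the downward phase (towards
        the lower right or lower left, ending at the second position).
    rotate: callable (frame, yaw, pitch) -> rendered frame.
    """
    first_seq = [rotate(frame, yaw, pitch) for yaw, pitch in first_angles]
    second_seq = [rotate(frame, yaw, pitch) for yaw, pitch in second_angles]
    return first_seq + second_seq
```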
In one embodiment, the time point set formed by the time points within the preset duration includes a third time point set; the third time point set consists of the third time points during a horizontal rotation of the target video frame at the highest point of the motion.
The rotation processing module 808 is further configured to: when the target video frame has rotated to the highest point, rotate the target video frame in the horizontal direction based on the yaw and pitch angles corresponding to the third time points; while the target video frame rotates in the horizontal direction, the vertical component of its rotation speed is zero.
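The horizontal spin at the highest point, with the vertical rotation-speed component held at zero, could be illustrated as follows. Even spacing of the yaw across the time points is an assumption made for the sketch:

```python
def horizontal_rotation_angles(n_points, total_yaw):
    """(yaw, pitch) pairs for the horizontal spin at the apex.

    The yaw sweeps evenly across the n_points third time points up to
    total_yaw degrees, while the pitch increment, i.e. the vertical
    component of the rotation, stays zero throughout.
    """
    step = total_yaw / n_points
    return [((i + 1) * step, 0.0) for i in range(n_points)]
```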
In one embodiment, as shown in FIG. 9, the apparatus further includes:
an identification module 812, configured to identify a tracking target in the pre-captured video;
the acquisition module 802 is further configured to acquire the tracking data corresponding to the tracking target;
an adjustment module 814, configured to adjust the viewing angle of the original video according to the tracking data to obtain the adjusted original video;
the extraction module 806 is further configured to extract, from the adjusted original video, the target video frame corresponding to the target moment.
In one embodiment, the apparatus further includes:
an insertion module 816, configured to insert the rotated video into the original video according to the target moment to obtain the special-effect video; and
a playback module 818, configured to play the special-effect video in response to a playback instruction for the special-effect video.
Each module of the above video processing apparatus may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a terminal, whose internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input apparatus. The processor, the memory, and the input/output interface are connected via a system bus, while the communication interface, the display unit, and the input apparatus are connected to the system bus via the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides the environment in which the operating system and the computer program in the non-volatile storage medium run. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with external terminals; the wireless communication may be implemented via WIFI, a mobile cellular network, NFC (near-field communication), or other technologies. When executed by the processor, the computer program implements a video processing method. The display unit of the computer device, used to form a visually visible picture, may be a display screen, a projection apparatus, or a virtual-reality imaging apparatus; the display screen may be a liquid-crystal display or an electronic-ink display. The input apparatus of the computer device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art will understand that the structure shown in FIG. 10 is merely a block diagram of part of the structure related to the solution of the present application, and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the method embodiments above are implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method embodiments above.
In one embodiment, a computer program product is provided, including a computer program; when executed by a processor, the computer program implements the steps of the method embodiments above.
It should be noted that the user information (including but not limited to user device information and user personal information) and data (including but not limited to data used for analysis, stored data, and displayed data) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those of ordinary skill in the art will understand that all or part of the processes of the above method embodiments may be accomplished by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to a memory, database, or other medium used in the embodiments provided by the present application may include at least one of a non-volatile memory and a volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random-access memory (ReRAM), magnetoresistive random-access memory (MRAM), ferroelectric random-access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. The volatile memory may include random-access memory (RAM) or an external cache, among others. By way of illustration and not limitation, RAM may take many forms, such as static random-access memory (SRAM) or dynamic random-access memory (DRAM). The databases involved in the embodiments provided by the present application may include at least one of a relational database and a non-relational database; non-relational databases may include, without limitation, blockchain-based distributed databases. The processors involved in the embodiments provided by the present application may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be understood as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

  1. A video processing method, characterized in that the method comprises:
    acquiring an original video captured by a camera during a motion and camera motion data;
    determining, according to the camera motion data, a target moment at which the camera reaches a target point during the motion;
    extracting, from the original video, a target video frame corresponding to the target moment; and
    rotating the target video frame according to the target moment and a preset duration to obtain a rotated video.
  2. The method according to claim 1, characterized in that the rotating the target video frame according to the target moment and the preset duration to obtain the rotated video comprises:
    computing, according to the target moment and the preset duration, a yaw angle and a pitch angle of the target video frame at each time point within the preset duration; and
    rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video.
  3. The method according to claim 1, characterized in that the determining, according to the camera motion data, the target moment at which the camera reaches the target point during the motion comprises:
    determining, according to the camera motion data, a first acceleration and a second acceleration of the camera during the motion; and
    determining, according to a first moment corresponding to the first acceleration and a second moment corresponding to the second acceleration, the target moment at which the camera reaches the target point during the motion.
  4. The method according to claim 2, characterized in that the computing, according to the target moment and the preset duration, the yaw angle and the pitch angle of the target video frame at each time point within the preset duration comprises:
    taking the target moment as a rotation start moment, and determining a rotation end moment according to the rotation start moment and the preset duration;
    determining a rotation period according to the rotation start moment and the rotation end moment;
    computing a ratio of each time point within the preset duration to the rotation period;
    computing, based on the ratio and an initial yaw angle, the yaw angle of the target video frame at each time point within the preset duration; and
    computing, based on the ratio and an initial pitch angle, the pitch angle of the target video frame at each time point within the preset duration.
  5. The method according to claim 2, characterized in that the time points within the preset duration form a time point set, the time point set comprising a first time point set and a second time point set; the first time point set consists of first time points before the target video frame is rotated to a first target position; the second time point set consists of second time points while the target video frame is rotated between the first target position and a second target position; and the rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video comprises:
    rotating the target video frame towards an upper-left direction or an upper-right direction based on the yaw angles and pitch angles corresponding to the first time points;
    copying the target video frame as rotated at each of the first time points to obtain a first video frame sequence;
    when the target video frame has rotated to the first target position, rotating the target video frame towards a lower-right direction or a lower-left direction, based on the yaw angles and pitch angles corresponding to the second time points, until it rotates to the second target position;
    copying the target video frame as rotated at each of the second time points to obtain a second video frame sequence; and
    generating the rotated video based on the first video frame sequence and the second video frame sequence.
  6. The method according to claim 2, characterized in that the time points within the preset duration form a time point set; the time point set comprises a third time point set; the third time point set consists of third time points during a horizontal rotation of the target video frame at the highest point of the motion; and the rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video comprises:
    when the target video frame has rotated to the highest point, rotating the target video frame in the horizontal direction based on the yaw angles corresponding to the third time points; while the target video frame rotates in the horizontal direction, a vertical component of its rotation speed being zero.
  7. The method according to claim 1, characterized in that the method further comprises:
    identifying a tracking target in a pre-captured video;
    acquiring tracking data corresponding to the tracking target; and
    adjusting a viewing angle of the original video according to the tracking data to obtain the adjusted original video;
    wherein the extracting, from the original video, the target video frame corresponding to the target moment comprises:
    extracting, from the adjusted original video, the target video frame corresponding to the target moment.
  8. The method according to claim 2, characterized in that, after the rotating the target video frame based on the yaw angle and the pitch angle to obtain the rotated video, the method further comprises:
    inserting the rotated video into the original video according to the target moment to obtain a special-effect video; and
    playing the special-effect video in response to a playback instruction for the special-effect video.
  9. The method according to any one of claims 1 to 8, characterized in that the motion is an upward-throw motion, and the target point is the highest point reached by the camera during the upward-throw motion.
  10. A video processing apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire an original video captured by a camera during a motion and camera motion data;
    a determination module, configured to determine, according to the camera motion data, a target moment at which the camera reaches a target point during the motion;
    an extraction module, configured to extract, from the original video, a target video frame corresponding to the target moment; and
    a rotation processing module, configured to rotate the target video frame according to the target moment and a preset duration to obtain a rotated video.
  11. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
  12. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2023/118362 2022-09-13 2023-09-12 Video processing method and apparatus, computer device, and storage medium WO2024055967A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211108463.0 2022-09-13
CN202211108463.0A CN115550563A (zh) 2022-09-13 2022-09-13 Video processing method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2024055967A1 true WO2024055967A1 (zh) 2024-03-21

Family

ID=84724777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/118362 WO2024055967A1 (zh) 2022-09-13 2023-09-12 Video processing method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN115550563A (zh)
WO (1) WO2024055967A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115550563A (zh) * 2022-09-13 2022-12-30 影石创新科技股份有限公司 视频处理方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7999842B1 (en) * 2004-05-28 2011-08-16 Ricoh Co., Ltd. Continuously rotating video camera, method and user interface for using the same
CN109561254A (zh) * 2018-12-18 2019-04-02 Shenzhen Arashi Vision Co., Ltd. Panoramic video stabilization method and apparatus, and portable terminal
CN111242975A (zh) * 2020-01-07 2020-06-05 Arashi Vision Inc. Panoramic video rendering method with automatic viewing-angle adjustment, storage medium, and computer device
CN112017216A (zh) * 2020-08-06 2020-12-01 Arashi Vision Inc. Image processing method and apparatus, computer-readable storage medium, and computer device
CN115550563A (zh) * 2022-09-13 2022-12-30 Arashi Vision Inc. Video processing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN115550563A (zh) 2022-12-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23864699

Country of ref document: EP

Kind code of ref document: A1