CN117812455A - Video processing method and device and electronic equipment


Info

Publication number: CN117812455A
Application number: CN202311861795.0A
Authority: CN
Original language: Chinese (zh)
Prior art keywords: video, shooting, acquiring, place position, departure
Legal status: Pending
Inventors: 杨荣涛, 郑欣
Current Assignee: Shenzhen Shenhai Innovation Technology Co ltd
Original Assignee: Shenzhen Shenhai Innovation Technology Co ltd
Priority date / Filing date: 2023-12-29
Publication date: 2024-04-02

Abstract

The application relates to the technical field of video processing and discloses a video processing method, a video processing device, and an electronic device. The method acquires a first video shot by an image capturing device and the shooting location corresponding to the first video, acquires a second video of a space-flight perspective from a user-selected departure location to the shooting location, and synthesizes the first video and the second video to generate a composite video. By combining videos from different perspectives, videos with stronger narrative quality and continuity can be created, providing a viewing experience that is more visually rich, immersive, and story-driven.

Description

Video processing method and device and electronic equipment
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video processing method, a video processing device, and an electronic device.
Background
In recent years, with the rapid development of science and technology, the general public's pursuit of cultural and recreational enrichment has steadily grown, and accordingly many kinds of image capturing devices for personal entertainment have appeared on the market.
At present, many image capturing devices are mainly used for capturing video material in entertainment scenarios such as aerial photography and follow shooting. They merely record the captured scene and perform no further processing on the captured video. Therefore, in order to enhance the playability of image capturing devices and improve the user's experience, the inventors set out to develop a new video processing method.
Disclosure of Invention
The technical problem mainly addressed by the embodiments of the present application is how to process video from different viewing angles so as to enhance the visual appeal and the sense of immersion of the video.
In order to solve the technical problems, one technical scheme adopted by the embodiment of the application is as follows: there is provided a video processing method, the method comprising: acquiring a first video shot by an image pickup device, and acquiring a shooting place position corresponding to the first video; acquiring a departure place position selected by a user, and acquiring a second video of a space flight view angle from the departure place position to the shooting place position; and synthesizing the first video and the second video to generate a synthesized video.
Optionally, the acquiring the shooting place position corresponding to the first video includes: acquiring GPS data of the first video, and obtaining longitude and latitude coordinate information of the shooting place according to the GPS data; and when the longitude and latitude coordinate information corresponds to a plurality of shooting places, obtaining the shooting duration of each shooting place according to the first video, and taking the shooting place with the longest shooting duration as the shooting place position corresponding to the first video.
Optionally, the acquiring the second video of the space flight view angle from the departure place to the shooting place comprises: acquiring satellite pictures from the departure place to the shooting place; acquiring shooting heights corresponding to the shooting place positions; determining a type of the departure location; and processing the satellite picture according to the shooting altitude and the type of the starting place position to generate a second video of a space flight view angle from the starting place position to the shooting place position.
Optionally, when the type of the departure location is space, the processing the satellite picture according to the shooting altitude and the type of the departure location to generate a second video of a space flight view angle from the departure location to the shooting location includes: determining a first height larger than the shooting height according to the shooting height, and acquiring a first zoom level corresponding to the first height; acquiring GPS coordinates of the shooting place position; and obtaining a corresponding satellite picture from the satellite pictures according to the GPS coordinates and the first zoom level, and generating a second video of a space flight view angle from the departure place position to the shooting place position by the obtained corresponding satellite picture.
Optionally, when the type of the departure location is the ground, the processing the satellite picture according to the shooting altitude and the type of the departure location to generate a second video of a space flight view angle from the departure location to the shooting location includes: acquiring a horizontal picture sequence from the departure position to the shooting place position from the satellite picture based on a preset fixed zoom level; determining a first height larger than the shooting height according to the shooting height, and acquiring a first zoom level corresponding to the first height; acquiring all satellite pictures from the fixed zoom level to the first zoom level from the satellite pictures to form a vertical picture sequence; generating a second video of a space flight perspective from the departure location to the shooting location based on the horizontal picture sequence and the vertical picture sequence.
Optionally, the method further comprises: acquiring an interstellar departure place selected by a user, and generating a third video from the interstellar departure place to the departure place position; and synthesizing the third video with the first video and the second video to generate a final video.
Optionally, the synthesizing the third video with the first video and the second video to generate a final video includes: responding to a virtual interstellar departure place, a virtual departure place position and a virtual shooting place position selected by a user on a display interface, and acquiring a first video, a second video and a third video corresponding to the virtual interstellar departure place, the virtual departure place position and the virtual shooting place position; and responding to the synthesis operation of the user on the display interface, and generating the synthesized video of the first video, the second video and the third video.
In order to solve the technical problems, another technical scheme adopted by the embodiment of the application is as follows: there is provided a video processing apparatus, the apparatus comprising: the first video acquisition module is used for acquiring a first video shot by the camera equipment and acquiring a shooting place position corresponding to the first video; the second video acquisition module is used for acquiring a departure place position selected by a user and acquiring a second video of a space flight view angle from the departure place position to the shooting place position; and the synthesis module is used for synthesizing the first video and the second video to generate synthesized video.
In order to solve the technical problems, another technical scheme adopted by the embodiment of the application is as follows: there is provided a computing device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods described above.
In order to solve the technical problems, another technical scheme adopted by the embodiment of the application is as follows: there is provided a non-transitory computer readable storage medium storing computer executable instructions which, when executed by a computing device, cause the computing device to perform the method of any of the preceding claims.
In contrast with the related art, the embodiments of the present application provide a video processing method, a video processing device, and an electronic device. The method acquires, through an image capturing device, a first video together with the position information of the shooting place at the time of shooting, acquires a second video of a space-flight perspective from a user-selected departure location to the shooting location, and then synthesizes the first video and the second video to generate a composite video. The video processing method, device, and electronic device can synthesize videos from different perspectives, providing a viewing experience that is more visually rich, immersive, and story-driven: the user experiences the ground and space perspectives at the same time, and the perspective switching and flight effects yield a more dynamic and engaging viewing experience.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to scale, unless expressly stated otherwise.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a video processing method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of a satellite picture scaling operation provided by an embodiment of the present application;
fig. 4 is a schematic diagram of a specific application scenario of a video processing method according to an embodiment of the present application;
fig. 5 is an interactive flow diagram of a video processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first video editing page provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a second video editing page provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a third video editing page provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a video composition editing page provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a video processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic hardware structure of a computing device for performing a video processing method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that, if not conflicting, the various features in the embodiments of the present application may be combined with each other, which are all within the protection scope of the present application. In addition, while the division of functional blocks is performed in a device diagram and the logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in a device diagram or the sequence in a flowchart.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
According to the video processing method and device and the electronic equipment provided by the embodiments of the present application, the first video shot by the image capturing device and the second video of the space-flight perspective are synthesized by means of multi-perspective video processing, so as to obtain a video from the departure place to the shooting place. Through the composition of multiple video segments, pictures from multiple perspectives can be displayed in a single video, bringing a strong visual impact to the user. In addition, a third video from the interstellar departure place to the departure location can be generated according to the selected type of interstellar departure place, and the first, second, and third videos can be synthesized into a final video from the interstellar departure place to the shooting place, so that visually rich videos with stronger narrative quality and continuity can be generated, improving the user's viewing experience.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101: acquiring a first video shot by an image pickup device, and acquiring a shooting place position corresponding to the first video;
s102: acquiring a departure place position selected by a user, and acquiring a second video of a space flight view angle from the departure place position to the shooting place position;
s103: and synthesizing the first video and the second video to generate a synthesized video.
According to the present application, video material of the shooting place can be shot by the image capturing device as the first video, and the positioning data corresponding to the first video is acquired at the same time to determine the shooting location. Satellite pictures from the departure place to the shooting place are then acquired according to the type of the selected departure place and processed to generate a second video of the space-flight perspective from the departure location to the shooting location. Finally, the first video and the second video are synthesized so that a single video displays pictures from multiple perspectives, yielding the generated composite video.
Specifically, in this embodiment, the video material of the shooting place concerned may be captured by the image capturing device as the first video, and the position information of the shooting place corresponding to the first video may be captured at the same time. It can be understood that many current image capturing devices can additionally acquire the position information of the shooting place while shooting video. Taking an unmanned aerial vehicle as the image capturing device as an example, when a user uses the drone to shoot video material at the shooting place as the first video, the GPS receiving module built into the drone receives signals transmitted by satellites and calculates the drone's position using the triangulation principle, i.e., acquires the longitude and latitude coordinate information of the shooting place; the optimal shooting location corresponding to the first video is then determined from the acquired coordinate information. When the longitude and latitude coordinate information comprises multiple sets of coordinates, i.e., there are multiple shooting locations, the shooting duration corresponding to each location can be obtained from the shot first video, and the location with the longest shooting duration is taken as the optimal shooting location corresponding to the first video.
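For illustration, a minimal sketch of this longest-duration selection, assuming per-frame GPS samples at a fixed sampling interval and a coordinate-rounding heuristic for clustering nearby positions (both assumptions of this sketch, not specified by the application):

```python
# Minimal sketch: given per-frame GPS samples from the first video, group them
# into candidate shooting locations and pick the one with the longest
# accumulated shooting duration. Rounding to ~3 decimal places (~100 m) as the
# grouping key is an assumption made here for illustration.
from collections import defaultdict

def pick_shooting_location(gps_samples, frame_interval_s):
    """gps_samples: list of (lat, lon) tuples, one per sampled frame."""
    durations = defaultdict(float)
    for lat, lon in gps_samples:
        key = (round(lat, 3), round(lon, 3))  # cluster nearby coordinates
        durations[key] += frame_interval_s    # each sample adds one interval
    # the location with the longest total shooting duration wins
    return max(durations, key=durations.get)

# Example: two clusters; the second has more samples, hence longer duration.
samples = [(22.543, 114.057)] * 30 + [(22.601, 114.120)] * 90
print(pick_shooting_location(samples, frame_interval_s=1.0))  # (22.601, 114.12)
```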
In addition, the method for acquiring the shooting location corresponding to the first video in this embodiment includes, but is not limited to, the above approach. For example, using image processing and computer vision algorithms, a camera or sensor mounted on the drone can identify landmarks, feature points, or a map of the shooting place to determine the drone's position and thereby obtain the position information of the shooting place. As another example, taking a camera as the image capturing device, metadata is additionally recorded when a user shoots video material at the shooting place; this metadata may contain information about the shooting place such as latitude and longitude coordinates, a geotag, or a place name, and the shooting position is obtained by reading the metadata of the video.
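As a hedged illustration of the metadata route, the sketch below reads a container-level location tag with ffprobe; whether a given clip carries such a tag, and under which tag name, depends on the recording device, so the tag names here are assumptions:

```python
# Minimal sketch (assumption: the clip carries a location tag in its container
# metadata, as many phones and cameras write). ffprobe reads the format tags.
import json
import subprocess

def read_location_tag(path):
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True).stdout
    tags = json.loads(out).get("format", {}).get("tags", {})
    # e.g. "+22.5430+114.0570/" in ISO 6709 form on many devices
    return tags.get("location") or tags.get(
        "com.apple.quicktime.location.ISO6709")
```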
In this embodiment, having the image capturing device record the shooting position through its built-in GPS receiving module while it shoots the video material provides more accurate and representative shooting-location information.
Wherein, as shown in fig. 2, the step of obtaining the departure place position selected by the user and obtaining the second video of the space flight view angle from the departure place position to the shooting place position includes:
s201: acquiring satellite pictures from the departure place to the shooting place;
s202: acquiring shooting heights corresponding to the shooting place positions;
s203: determining a type of the departure location;
s204: and processing the satellite picture according to the shooting altitude and the type of the starting place position to generate a second video of a space flight view angle from the starting place position to the shooting place position.
First, in the present embodiment, satellite pictures from the departure place to the shooting place are acquired based on the acquired position information of the shooting place and the departure location selected by the user. It can be understood that, taking the drone as the image capturing device as an example, the drone's built-in image acquisition module obtains a large number of satellite pictures from the departure place to the shooting place through the open API interfaces provided by satellite map providers (such as Baidu, Amap, Google, MapBox, and the like). Depending on the shooting height corresponding to the shooting location and the type of departure location selected by the user, satellite pictures of different types representing all zoom levels above the shooting height can be obtained. Taking the Google Maps platform as an example, when a user shoots the first video at a height of 1200 meters above the ground with a drone, the zoom level corresponding to a viewpoint height above that of the first video is obtained through the map parameter lookup table; for instance, a viewpoint height of 1500 meters corresponds to zoom level 19.
If the type of departure place selected by the user is space, the 20 satellite pictures at zoom levels 0 to 19 centered on the GPS coordinates of the shooting place are acquired through the map provider's API interface and recorded as vertical picture sequence 1. If the type of departure place selected by the user is the ground, the zoom level is fixed (level 2 or above), all satellite pictures along the route from the designated departure place to the shooting place are acquired at a certain step length through the map provider's API interface and recorded as horizontal picture sequence 1, and all satellite pictures from the fixed zoom level to level 19 are acquired and recorded as vertical picture sequence 2. In addition, while acquiring the satellite pictures, their resolution needs to be set according to the resolution of the first video, e.g., 1920×1080.
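A minimal sketch of building these two picture sequences; fetch_satellite_image() is a hypothetical stand-in for a real provider's static-map API (endpoint, authentication, and tile conventions differ by provider), and sampling the ground route along a straight line is likewise an assumption of this sketch:

```python
# Minimal sketch: build the vertical sequence (one image per zoom level over
# the shooting location) and the horizontal sequence (images along the ground
# route at a fixed zoom level) described in the text.
from typing import List, Tuple

def fetch_satellite_image(lat: float, lon: float, zoom: int,
                          size=(1920, 1080)):
    """Hypothetical wrapper around a map provider's static-map API."""
    raise NotImplementedError  # provider-specific HTTP call goes here

def vertical_sequence(dest: Tuple[float, float], max_zoom: int = 19) -> List:
    # One image per zoom level 0..max_zoom, centered on the shooting location.
    return [fetch_satellite_image(*dest, zoom=z) for z in range(max_zoom + 1)]

def horizontal_sequence(src, dest, fixed_zoom: int = 2,
                        steps: int = 50) -> List:
    # Images sampled along the line from departure place to shooting place at
    # a fixed zoom level (linear interpolation is an assumption here).
    (lat0, lon0), (lat1, lon1) = src, dest
    return [fetch_satellite_image(lat0 + (lat1 - lat0) * t / steps,
                                  lon0 + (lon1 - lon0) * t / steps,
                                  zoom=fixed_zoom)
            for t in range(steps + 1)]
```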
Next, satellite pictures meeting the conditions are screened out of all the acquired satellite pictures from the departure place to the shooting place, according to the shooting height corresponding to the shooting place and the type of departure location selected by the user, and the screened pictures are processed to obtain a second video representing the space-flight perspective from the departure location to the shooting location.
When the type of the departure place is space, the satellite pictures are processed according to the shooting height and the type of departure location to generate a second video of the space-flight perspective from the departure location to the shooting location. It can be understood that when the departure place is space above the earth, a first height greater than the shooting height is determined from the shooting height: for example, when the user shoots video material with the drone at 1200 meters above the ground, a height of 1500 meters above the ground (greater than the shooting height) is selected as the first height, and the first zoom level corresponding to the first height, namely level 19, is obtained from the map parameter lookup table. Then, from the acquired satellite pictures at each zoom level, all pictures at levels 0 to 19 corresponding to the GPS coordinates of the shooting location and the first zoom level (level 19) are selected as the satellite pictures from space above the earth to the shooting place, i.e., vertical picture sequence 1, and the filtered vertical picture sequence is converted into a video.
It will be appreciated that when the departure place is space, i.e., space above the earth, the visual jump between adjacent pictures in the vertical sequence is reduced by inserting frames through picture scaling. For example, as shown in fig. 3(a), satellite pictures of adjacent zoom levels L_n and L_(n-1), both of size P, are selected, and the spatial resolutions (scales) S_n and S_(n-1) of the two zoom levels are obtained by looking up the satellite picture parameter table. The pixel region that the higher zoom level occupies within the lower zoom level, i.e., the red frame region of size P' in fig. 3(b), can then be calculated as P' = (S_n / S_(n-1)) × P. For example, for a satellite picture of size 1920×1080 with spatial resolutions of 2.15 meters at 16-level zoom and 4 meters at 15-level zoom, this formula gives the pixel region of the 16-level picture within the 15-level picture as 1032×580. In addition, as shown in fig. 3(c), between size P' and size P the satellite picture is cropped step by step by a set number of pixels to generate a series of pictures, each of which is scaled back to size P; this scaling operation is then performed on the satellite pictures of all adjacent zoom levels to complete all the interpolated frames. Finally, all processed satellite pictures are encoded into a video. By setting the frame rate and codec used when converting the processed satellite pictures into video to the same values as those of the first video, extra transcoding time during video composition can be reduced or avoided. Converting pictures into video can be done with, but is not limited to, methods such as OpenCV or other third-party video tools, and smoothness can be added during the conversion through related inter-frame optimization algorithms (such as optical flow, AI frame interpolation, motion blur, etc.).
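A minimal OpenCV sketch of this crop-and-rescale frame insertion and of encoding the result at the first video's frame rate; the linear crop schedule, frame count, and mp4v codec are illustrative assumptions:

```python
# Minimal sketch: between a lower-zoom image of size P and the footprint P' of
# the next (higher) zoom level, progressively crop toward the center and
# rescale each crop back to full size, then encode the frames as video.
import cv2

def zoom_interpolate(img_low, res_high, res_low, n_frames=24):
    """img_low: lower-zoom image; res_high/res_low: spatial resolutions
    (meters/pixel) of the higher and lower zoom levels."""
    h, w = img_low.shape[:2]
    ratio = res_high / res_low            # e.g. 2.15 / 4 = 0.5375
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)
        s = 1.0 - t * (1.0 - ratio)       # shrink crop from P toward P'
        cw, ch = int(w * s), int(h * s)
        x0, y0 = (w - cw) // 2, (h - ch) // 2
        crop = img_low[y0:y0 + ch, x0:x0 + cw]
        frames.append(cv2.resize(crop, (w, h)))  # scale crop back to P
    return frames

def encode(frames, path="second_video.mp4", fps=30.0):
    # Match fps/codec to the first video to avoid re-transcoding later.
    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        out.write(f)
    out.release()
```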
According to this embodiment, the satellite pictures are processed to generate a second video of the space-flight perspective from space above the earth to the shooting place, so that the user experiences the ground and space perspectives at the same time, bringing a striking visual effect to the user.
When the type of the departure place is the ground, the satellite pictures are processed according to the shooting height and the type of departure location to generate a second video of the space-flight perspective from the departure location to the shooting location. It can be understood that when the departure place is a city or a landmark point on the earth, a first height greater than the shooting height is determined from the shooting height: for example, when the user shoots video material with the drone at 200 meters above the ground, a height of 1500 meters above the ground (higher than the shooting height) is selected as the first height, and the first zoom level corresponding to the first height, namely level 19, is obtained from the map parameter lookup table. Based on the preset fixed zoom level (level 2 or above), horizontal picture sequence 1 from the ground departure place to the shooting place is obtained from the acquired satellite pictures at each level, and all satellite pictures from the fixed zoom level to level 19, i.e., vertical picture sequence 2, are obtained. The screened horizontal picture sequence and vertical picture sequence 2 are then converted into a video, namely a second video of the space-flight perspective that first pans from a city on the ground to above the shooting place and then zooms vertically down to the designated shooting height. When the departure place is a city or landmark point on the ground, the horizontal picture sequence 1 is used to synthesize video without the scaling operation, while the scaling operation used when synthesizing video from vertical picture sequence 2 is the same as described above.
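Continuing the hypothetical helpers sketched above (fetch_satellite_image, horizontal_sequence, zoom_interpolate, encode), a hedged composition of the ground-departure case; the dyadic resolution table, coordinates, and frame counts are placeholders rather than values from the application:

```python
# Minimal sketch: for a ground departure, the second video is the horizontal
# pan sequence followed by the vertical zoom-in sequence, encoded as one clip.
# Real resolutions come from the provider's map parameter lookup table; a
# dyadic scale anchored at 4 m/pixel for level 15 is assumed here.
RES_M_PER_PX = {z: 4.0 * 2 ** (15 - z) for z in range(0, 20)}

pan = horizontal_sequence(src=(39.904, 116.407),      # assumed departure city
                          dest=(22.543, 114.057),     # assumed shooting spot
                          fixed_zoom=2)
dive = []
for z in range(2, 19):                                # fixed zoom -> level 19
    img = fetch_satellite_image(22.543, 114.057, zoom=z)
    dive.extend(zoom_interpolate(img, RES_M_PER_PX[z + 1], RES_M_PER_PX[z],
                                 n_frames=12))
encode(pan + dive, path="second_video_ground.mp4")
```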
According to this embodiment, the satellite pictures are processed to generate a second video of the space-flight perspective from a city or landmark point on the ground to the shooting place, so that the user experiences an aerial journey from the departure place to the shooting place, and the perspective switching and flight effects enhance the user's sense of immersion.
In some embodiments, a third video from the interstellar departure place to the departure location may also be generated according to an interstellar departure place selected by the user. It will be appreciated that the interstellar departure place options comprise the names of a number of galaxies and star systems; the user may select one as the interstellar departure place and one or more others as transit points along the way. For example, the user can select the Milky Way as the interstellar departure place and the solar system as a transit point, and a first-person animation video from the Milky Way through the solar system to the departure location is automatically generated as the third video using CGI animation techniques. The third video is then synthesized with the first video and the second video to generate the final video: the generated third video is spliced in sequence with the second video and the first video to produce the complete space-flight journey video from the interstellar departure place to the shooting place. In addition, when the three video segments are spliced in sequence, transition methods, such as transition effects or tools provided by third-party software, can be added to make the synthesized final video smoother, creating a multi-perspective video with stronger continuity and narrative quality.
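A minimal splicing sketch using ffmpeg's concat demuxer, which applies when all three clips already share resolution, frame rate, and codec, exactly what matching the encoding parameters above is meant to arrange; adding transition effects would instead require re-encoding with a filter graph or an editing tool:

```python
# Minimal sketch: splice the three clips in order (interstellar -> space
# flight -> ground footage) without re-encoding, via ffmpeg's concat demuxer.
import os
import subprocess
import tempfile

def splice(clips, out_path="final_video.mp4"):
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for c in clips:
            f.write(f"file '{os.path.abspath(c)}'\n")  # concat list format
        list_path = f.name
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_path,
                    "-c", "copy", out_path], check=True)

splice(["third_video.mp4", "second_video.mp4", "first_video.mp4"])
```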
Optionally, the synthesizing the third video with the first video and the second video to generate a final video includes: responding to a virtual interstellar departure place, a virtual departure place position and a virtual shooting place position selected by a user on a display interface, and acquiring a first video, a second video and a third video corresponding to the virtual interstellar departure place, the virtual departure place position and the virtual shooting place position; and responding to the synthesis operation of the user on the display interface, and generating the synthesized video of the first video, the second video and the third video.
The display interface may take the form of a remote control, a mobile phone application, or another interactive interface. For example, when the display interface is a remote controller, the image capturing device may be paired with a wireless remote controller that has a display screen. The user selects the virtual interstellar departure place, the virtual departure location, and the virtual shooting location by pressing buttons on the remote controller to send the corresponding instructions, thereby acquiring the first, second, and third videos provided by the image capturing device for those selections, and can also compose the acquired first, second, and third videos into a new video through a compose button on the remote controller. In addition, through a beautify button on the remote controller, operations such as smoothing, adding transition effects, and video editing can be performed during composition to raise the quality of the synthesized video. As another example, when the display interface is a mobile phone application, the user connects to the image capturing device by installing the application; the application provides a graphical interface through which the user can select the type of departure place and the video processing mode, and monitor and control the state of the image capturing device in real time.
In this embodiment, a display interface is provided and connected to the image capturing device to receive instructions input by the user, including the selection of the departure place type and the video processing mode. This brings user interaction, video customization, flexibility, and variety to the image capturing device and improves the user experience.
The method of the embodiments of the present application is described below through a specific example. The video processing method provided by the embodiments of the present application can be applied in the application environment shown in fig. 4, which includes a user terminal 101 and a drone 102. The user terminal 101 is connected to the drone 102 by wireless communication.
The user terminal 101 sends a control instruction to the unmanned aerial vehicle 102, so that the unmanned aerial vehicle 102 moves according to the control instruction to adjust the shooting angle of the video acquisition equipment of the unmanned aerial vehicle 102; and sending a camera enabling instruction to the unmanned aerial vehicle 102 so that the unmanned aerial vehicle 102 can acquire and transmit video materials in real time.
As will be appreciated by those skilled in the art, a "user terminal" as used herein may be a remote control device, a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a MID (Mobile Internet Device), etc.; the drone 102 is a device capable of autonomous or controlled movement and of capturing video. The drone 102 is an unmanned aerial vehicle operated through a wireless remote control device or a mobile client together with its own program control device; it can be understood that the drone 102 may be replaced by other devices that move automatically or under control and have a camera function.
The following describes a procedure of video composition of the unmanned aerial vehicle and the remote control device in combination with an interactive flow diagram of the video processing method shown in fig. 5:
s301: the remote control device sends a first video acquisition instruction to the unmanned aerial vehicle.
The user sends a first video acquisition instruction to the drone through the remote control device so that the drone starts its camera to shoot video of the shooting place.
S302: The drone receives the instruction and transmits the shot video material and the corresponding shooting position information to the remote control device.
While shooting video, the drone acquires the longitude and latitude coordinates of the shooting place in real time through its built-in GPS receiving module and transmits the shot video material and position information to the remote control device in real time.
S303: The remote control device processes the acquired video material to obtain the first video.
As shown in fig. 6, the user may clip the video material of the shooting place received by the remote control device on the first video editing page of the remote control device's display interface to obtain the first video. It can be appreciated that the user can cut the material to any length according to preference and can also select any segment of the material as the first video. In addition, the user can apply beautification during video processing through the beautify button of the first video editing page, for example smoothing, adding a filter, and so on.
S304: and after confirming the departure place, the remote control equipment sends a second video acquisition instruction to the unmanned aerial vehicle.
The user determines the type of the departure place through the remote control equipment and sends a second video acquisition instruction to the unmanned aerial vehicle so that the unmanned aerial vehicle starts a built-in image acquisition module to acquire satellite pictures from the departure place to the shooting place from an API interface provided by a satellite map provider (such as hundred degrees, god, google and the like).
S305: and the unmanned aerial vehicle receives the instruction, acquires satellite images from the departure place to the shooting place according to the type of the departure place and transmits the satellite images to the remote control equipment.
If the type of the departure place is space (space above earth), acquiring satellite pictures of all zoom levels from the space above earth to GPS coordinates where the shooting place is located through an API interface of a satellite map provider according to the shooting height and transmitting the satellite pictures to remote control equipment;
if the type of the departure place is the ground (each city or index point on the earth), acquiring satellite pictures from each city or index point on the earth to the GPS coordinates of the shooting place through an API interface of a satellite map provider according to the shooting height, and transmitting the satellite pictures to the remote control equipment.
S306: and the remote control equipment processes the acquired satellite picture to obtain a second video.
As shown in fig. 7, the user may process the satellite pictures received by the remote control device on the second video editing page of the remote control device's display interface to obtain the second video. It can be understood that the user selects the type of departure place through the departure place button on the second video editing page to obtain the corresponding satellite pictures, and converts the satellite pictures into video through the generate button, thereby obtaining the second video from the departure place to the shooting place.
S307: the remote control device generates a third video based on the selected interstellar departure.
As shown in fig. 8, the user selects an interstellar departure place, chosen from a list that includes the names of a number of cosmic galaxies, through the interstellar departure place button of the third video editing page on the remote control device's display interface. It will be appreciated that the user may, for example, select the Milky Way as the interstellar departure place and the solar system as an interstellar transit point, and generate a first-person animation from the Milky Way through the solar system to the space above the earth as the third video via the generate button.
S308: and synthesizing the first video, the second video and the third video to obtain a final video.
As shown in fig. 9, the user may perform composition processing on the selected video material to obtain a final video through a video composition editing page on a display interface of the remote control device. It can be understood that the user can select the first video, the second video and the third video through the selection buttons of the video composition editing page, and then the composition button is used for carrying out composition processing on the three sections of video materials to obtain the complete space flight travel video from the interstellar departure place to the shooting place.
According to this embodiment, the first video shot by the drone, the second video converted from satellite pictures covering the route from the departure place to the shooting place, and the third video from the interstellar departure place to the departure place generated by CGI animation are synthesized through the multi-perspective video processing method into a multi-perspective final video, providing the user with a visually richer and more striking effect; by switching between different perspectives, the user obtains a more dynamic and immersive viewing experience.
Based on the video processing method provided by the above embodiments, an embodiment of the present application further provides a video processing device. Referring to fig. 10, fig. 10 is a schematic block diagram of the video processing device. As shown in fig. 10, the video processing device 300 includes: a first video acquisition module 310, a second video acquisition module 320, and a composition module 330. The first video acquisition module 310 is configured to acquire the first video shot by the image capturing device and the shooting location corresponding to the first video; the second video acquisition module 320 is configured to acquire the departure location selected by the user and the second video of the space-flight perspective from the departure location to the shooting location; and the composition module 330 is configured to synthesize the first video and the second video to generate the composite video.
In some embodiments, the first video acquisition module 310 includes: a first video acquisition unit 3101 and a GPS reception unit 3102. The first video acquisition unit 3101 shoots the shooting place through a camera to acquire video materials of the shooting place; the GPS receiving unit 3102 calculates its own position by receiving signals transmitted from satellites and using the principle of triangulation, that is, acquires latitude and longitude coordinate information of the shooting location, and determines the shooting location corresponding to the first video according to the acquired latitude and longitude coordinate information.
In some embodiments, the second video acquisition module 320 includes: a departure location selection unit 3201 and a second video acquisition unit 3202. Wherein the departure place position selecting unit 3201 is used for selecting the type of the departure place by the user; the second video capturing unit 3202 is configured to obtain satellite pictures from the departure place to the shooting place according to the type of the departure place selected by the user, and process the satellite pictures to generate a second video of the space flight view angle from the departure place position to the shooting place position.
In some embodiments, the composition module 330 includes: a video composition unit 3301 and a video processing unit 3302. The video composition unit 3301 is configured to splice the first video and the second video in sequence to generate the composite video; the video processing unit 3302 is configured to perform beautification operations, such as adding transition effects and increasing picture smoothness, when the video composition unit 3301 performs video composition.
It should be noted that, the video processing device may execute the video processing method provided in the embodiment of the present application, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in the embodiments of the video processing apparatus may be found in the video processing method provided in the embodiments of the present application.
The embodiment of the application further provides a computing device, please refer to fig. 11, which illustrates a hardware structure of a computing device capable of executing the method of any one of the embodiments. The computing device 400 includes: at least one processor 410; and a memory 420 communicatively coupled to the at least one processor 410, one processor 410 being illustrated in fig. 11. The memory 420 stores instructions executable by the at least one processor 410 to enable the at least one processor 410 to perform the video processing method according to any one of the embodiments described above. The processor 410 and the memory 420 may be connected by a bus or otherwise, for example in fig. 11.
The memory 420 is used as a non-volatile computer readable storage medium for storing non-volatile software programs, non-volatile computer executable programs, and modules, such as program instructions/modules corresponding to the video processing method in the embodiments of the present application. The processor 410 executes various functional applications of the server and data processing by running non-volatile software programs, instructions and modules stored in the memory 420, i.e. implements the video processing method according to any of the embodiments described above.
Memory 420 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created from the use of the computing device, and the like. In addition, memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some of these embodiments, memory 420 optionally includes memory located remotely from processor 410, which may be connected to the computing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 420 that, when executed by the one or more processors 410, perform the video processing method of any of the embodiments described above.
The computing device may be a drone, a remote control device, or an electronic terminal that installs an application associated with the drone.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may be referred to the video processing method described in any one of the embodiments of the present application.
The embodiments of the present application also provide a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, cause the at least one processor to perform the video processing method of any one of the above embodiments. For example, the non-volatile computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present application also provide a computer program product comprising one or more program codes stored in a computer-readable storage medium. The processor of the computing device reads the program code from the computer readable storage medium and executes the program code to perform the steps of the video processing method provided in the above-described embodiments.
It should be noted that the embodiments described above are merely illustrative, and the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program may include processes implementing the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Under the idea of the present application, the technical features of the above embodiments or of different embodiments may be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application exist as described above, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of video processing, the method comprising:
acquiring a first video shot by an image pickup device, and acquiring a shooting place position corresponding to the first video;
acquiring a departure place position selected by a user, and acquiring a second video of a space flight view angle from the departure place position to the shooting place position;
and synthesizing the first video and the second video to generate a synthesized video.
2. The method of claim 1, wherein the obtaining the location of the shooting location corresponding to the first video comprises:
acquiring GPS data of the first video, and acquiring longitude and latitude coordinate information of a shooting place according to the GPS data;
when the longitude and latitude coordinate information corresponds to a plurality of shooting sites, acquiring the shooting duration corresponding to each shooting site according to the first video, and taking the shooting site with the longest shooting duration as the shooting place position corresponding to the first video.
3. The method of claim 1, wherein the acquiring a second video of a space flight perspective from the origin location to the shooting location comprises:
acquiring satellite pictures from the departure place to the shooting place;
acquiring shooting heights corresponding to the shooting place positions;
determining a type of the departure location;
and processing the satellite picture according to the shooting altitude and the type of the starting place position to generate a second video of a space flight view angle from the starting place position to the shooting place position.
4. The method of claim 3, wherein when the type of origin location is space,
the processing the satellite picture according to the shooting altitude and the type of the departure place position to generate a second video of a space flight view angle from the departure place position to the shooting place position, including:
determining a first height larger than the shooting height according to the shooting height, and acquiring a first zoom level corresponding to the first height;
acquiring GPS coordinates of the shooting place position;
and obtaining a corresponding satellite picture from the satellite pictures according to the GPS coordinates and the first zoom level, and generating a second video of a space flight view angle from the departure place position to the shooting place position by the obtained corresponding satellite picture.
5. The method of claim 3, wherein when the type of departure location is ground,
the processing the satellite picture according to the shooting altitude and the type of the departure place position to generate a second video of a space flight view angle from the departure place position to the shooting place position, including:
acquiring a horizontal picture sequence from the departure position to the shooting place position from the satellite picture based on a preset fixed zoom level;
determining a first height larger than the shooting height according to the shooting height, and acquiring a first zoom level corresponding to the first height;
acquiring all satellite pictures from the fixed zoom level to the first zoom level from the satellite pictures to form a vertical picture sequence;
generating a second video of a space flight perspective from the departure location to the shooting location based on the horizontal picture sequence and the vertical picture sequence.
6. The method according to any one of claims 1 to 5, further comprising:
acquiring a interplanetary departure place selected by a user;
generating a third video from the interstellar departure location to the departure location;
and synthesizing the third video with the first video and the second video to generate a final video.
7. The method of claim 6, wherein the synthesizing the third video with the first video and the second video to generate a final video comprises:
responding to a virtual interstellar departure place, a virtual departure place position and a virtual shooting place position selected by a user on a display interface, and acquiring a first video, a second video and a third video corresponding to the virtual interstellar departure place, the virtual departure place position and the virtual shooting place position;
and responding to the synthesis operation of the user on the display interface, and generating the synthesized video of the first video, the second video and the third video.
8. A video processing apparatus, the apparatus comprising:
the first video acquisition module is used for acquiring a first video shot by the camera equipment and acquiring a shooting place position corresponding to the first video;
the second video acquisition module is used for acquiring a departure place position selected by a user and acquiring a second video of a space flight view angle from the departure place position to the shooting place position;
and the synthesis module is used for synthesizing the first video and the second video to generate synthesized video.
9. A computing device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by a computing device, cause the computing device to perform the method of any of claims 1-7.
CN202311861795.0A 2023-12-29 2023-12-29 Video processing method and device and electronic equipment Pending CN117812455A (en)

Priority Applications (1)

Application Number: CN202311861795.0A; Priority Date: 2023-12-29; Filing Date: 2023-12-29; Title: Video processing method and device and electronic equipment

Publications (1)

Publication Number: CN117812455A; Publication Date: 2024-04-02

Family ID: 90423207

Country Status: CN (1) CN117812455A (en)


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination