CN110996112A - Video editing method, device, server and storage medium - Google Patents


Info

Publication number
CN110996112A
CN110996112A (application CN201911231580.4A)
Authority
CN
China
Prior art keywords: video, target, data, value, preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911231580.4A
Other languages
Chinese (zh)
Inventor
杜中强
谢春燕
张意烽
申武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sioeye Technology Co ltd
Original Assignee
Chengdu Sioeye Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sioeye Technology Co ltd filed Critical Chengdu Sioeye Technology Co ltd
Priority to CN201911231580.4A priority Critical patent/CN110996112A/en
Publication of CN110996112A publication Critical patent/CN110996112A/en
Priority to PCT/CN2020/132585 priority patent/WO2021109952A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/232 Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393 Interfacing the upstream path of the transmission network involving handling client requests
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application relates to the technical field of video processing and provides a video editing method, apparatus, server and storage medium. The method comprises the following steps: receiving an original video sent by a shooting device together with motion data of the sports equipment, the motion data being collected by a motion data acquisition device; selecting at least one video segment from the original video according to the motion data; and inserting the at least one video segment into a preset video template to obtain a composite video, where the video template comprises at least one template segment and the composite video comprises the at least one video segment and at least one template segment. Compared with the prior art, highlight segments in the original video can be selected automatically according to the motion data of the sports equipment to generate the composite video, so no manual participation is needed and video editing efficiency is improved.

Description

Video editing method, device, server and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video editing method, an apparatus, a server, and a storage medium.
Background
At present, amusement parks install shooting devices on amusement rides. While guests ride, the devices record video; after shooting is finished, highlight segments are clipped from the footage and spliced into a combined video, helping guests keep the best moments of their visit. However, the raw video is currently watched by a person who selects the highlights and edits them into the combined video, which consumes considerable manpower and time and results in low efficiency.
Disclosure of Invention
The application aims to provide a video editing method, a video editing apparatus, a server and a storage medium, so as to solve the prior-art problem of low efficiency caused by manually selecting highlight moments for video editing.
In order to achieve the above purpose, the embodiments of the present application employ the following technical solutions:
in a first aspect, the present application provides a video editing method applied to a server, where the server is in communication connection with a shooting device, the shooting device is installed on a sports equipment and includes a sports data acquisition device, and the method includes: receiving an original video sent by the shooting equipment and motion data of the motion equipment, wherein the motion data is acquired by the motion data acquisition device; selecting at least one video segment from the original video according to the motion data; and inserting the at least one video clip into a preset video template to obtain a composite video, wherein the video template comprises at least one template clip, and the composite video comprises at least one video clip and at least one template clip.
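The third step of the first aspect, inserting the selected clips into a preset template, can be sketched as a simple interleaving of template segments and clips. The function and names below are illustrative assumptions, not the patent's implementation:

```python
def insert_into_template(template_segments, video_clips):
    """Interleave selected video clips with the template's own segments
    to form the composite video (template, clip, template, clip, ...)."""
    composite = []
    for i, segment in enumerate(template_segments):
        composite.append(segment)
        if i < len(video_clips):       # a clip follows each template segment while clips remain
            composite.append(video_clips[i])
    return composite
```

For example, a template with an intro, a transition and an outro plus two selected clips yields a five-segment composite in which every clip is framed by template material.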
In a second aspect, the present application further provides a video editing apparatus applied to a server, the server is in communication connection with a shooting device, the shooting device is installed on a sports equipment and includes a sports data acquisition device, the apparatus includes: the receiving module is used for receiving an original video sent by the shooting equipment and the motion data of the motion equipment, wherein the motion data is acquired by the motion data acquisition device; the selection module is used for selecting at least one video segment from the original video according to the motion data; the processing module is used for inserting the at least one video clip into a preset video template to obtain a composite video, wherein the video template comprises at least one template clip, and the composite video comprises at least one video clip and at least one template clip.
In a third aspect, the present application further provides a server, including: one or more processors; a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the video editing method described above.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video editing method described above.
Compared with the prior art, according to the video editing method, the device, the server and the storage medium provided by the application, the original video is shot through the shooting equipment, the motion data of the motion equipment is collected through the motion data collecting device, after the server receives the original video and the motion data sent by the shooting equipment, at least one video segment is selected from the original video according to the motion data, and the at least one video segment is inserted into the preset video template to obtain the composite video. Compared with the prior art, the method and the device have the advantages that the highlight segments in the original video can be automatically selected according to the motion data of the motion equipment to generate the synthetic video, so that manual participation is not needed, and the video editing efficiency is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 shows an application scenario diagram of a video editing method provided by an embodiment of the present application.
Fig. 2 shows a flowchart of a video editing method provided by an embodiment of the present application.
Fig. 3 is a flowchart illustrating step S102 in the video editing method illustrated in fig. 2.
Fig. 4 shows an exemplary diagram of a composite video provided by an embodiment of the present application.
Fig. 5 is a diagram illustrating another example of a composite video provided by an embodiment of the present application.
Fig. 6 shows another flowchart of a video editing method provided in an embodiment of the present application.
Fig. 7 shows another flowchart of a video editing method provided in an embodiment of the present application.
Fig. 8 shows a block schematic diagram of a video editing apparatus provided in an embodiment of the present application.
Fig. 9 shows a block schematic diagram of a server provided in an embodiment of the present application.
Icon: 10-a server; 20-a photographing device; 30-a mobile terminal; 21-a motion data acquisition device; 11-a processor; 12-a memory; 13-a bus; 100-video editing means; 101-a receiving module; 102-selecting module; 103-processing module.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic view illustrating an application scenario of a video editing method according to an embodiment of the present application, and the application scenario includes a server 10, at least one shooting device 20, and at least one mobile terminal 30, where each shooting device 20 is in communication connection with the server 10 through a network, and each mobile terminal 30 is in communication connection with the server 10 through a network, so as to implement data communication or interaction between the server 10 and the shooting device 20, and between the server 10 and the mobile terminal 30.
The photographing apparatus 20 is installed on a piece of sports equipment, which may be equipment for extreme sports, such as extreme biking, low-altitude parachuting, high-speed racing, diving or downhill skiing, or an amusement ride, such as a roller coaster, a flying kite ride, a pirate ship, a rapids ride or a carousel. The following embodiments take an amusement ride as an example.
Each piece of sports equipment is provided with at least one shooting device 20, which may be an action camera, a video camera, a device equipped with a camera module, or the like. Each shooting device 20 includes a motion data acquisition device 21 for collecting motion data of the sports equipment to which the shooting device 20 is fixed while that equipment moves; the motion data acquisition device 21 may include, but is not limited to, a barometer, a gyroscope, an acceleration sensor, a velocity sensor, a gravity sensor, and the like.
The photographing apparatus 20 is used to photograph a video for a guest while the guest plays on the amusement apparatus, and transmit the photographed raw video and motion data collected by the motion data collecting device 21 to the server 10.
Since the video content actually captured by the shooting device 20 on the amusement ride is long and large in data volume, and may contain segments with poor effect (for example, frames with no face or only a partial face in the picture), the shooting device 20 preprocesses the actually captured video content to shorten the duration and reduce the amount of transmitted data; that is, the original video is obtained by the shooting device 20 preprocessing the captured video content.
As an embodiment, the process of preprocessing the captured video content by the capturing device 20 may include: firstly, the shooting device 20 performs face detection on the video content to obtain the face region of each video frame; then, the shooting device 20 calculates the picture ratio of the face region within each video frame and deletes the video frames whose picture ratio is smaller than a preset ratio (e.g., 10%) to obtain the original video. That is, the shooting device 20 deletes video frames in which the face occupies a lower proportion of the picture than the preset ratio, thereby cutting out video segments with no face, an incomplete face, a poor face angle, and the like.
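The picture-ratio filter just described can be sketched as follows. The helper and its box format are hypothetical; the 10% threshold is the example ratio from the text:

```python
def keep_frame(face_box, frame_width, frame_height, min_ratio=0.10):
    """Keep a video frame only when the detected face region covers
    at least min_ratio of the whole picture; face_box is (x, y, w, h)."""
    if face_box is None:                       # no face detected in this frame
        return False
    x, y, w, h = face_box
    return (w * h) / (frame_width * frame_height) >= min_ratio
```

A 400 × 400 face in a 1000 × 1000 frame covers 16% of the picture and is kept, while a 100 × 100 face covers only 1% and is dropped.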
As another embodiment, the process of preprocessing the captured video content by the capturing device 20 may include: firstly, the shooting device 20 performs face detection on the video content to obtain the face region of each video frame; then, the shooting device 20 obtains the face pixel size corresponding to the face region of each video frame, and deletes the video frames whose face pixel size is smaller than a preset minimum face pixel size, so as to obtain the original video. That is, the shooting device 20 screens the captured video against the preset minimum face pixel size. The face frame that the shooting device 20 can detect is usually a square; the minimum side length of that square is one forty-eighth of the length of the short side of the video frame (short side / 48), and this minimum value is not less than 48 pixels. For example, for a video frame of 4096 × 3200 pixels, the minimum face pixel size is 66 × 66 pixels, and video frames whose face pixel size is below the minimum face pixel size are deleted.
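Under the reading that the minimum square side is one forty-eighth of the frame's short side with a floor of 48 pixels, an interpretation reconstructed from the 4096 × 3200 → 66 × 66 example rather than taken verbatim from the patent, the computation is:

```python
def min_face_side(frame_width, frame_height):
    """Minimum detectable face-square side: one forty-eighth of the frame's
    short side, never below 48 pixels (reconstructed from the example)."""
    side = min(frame_width, frame_height) // 48
    return max(side, 48)
```

This reproduces the worked example: 3200 // 48 = 66, so a 4096 × 3200 frame requires at least a 66 × 66 face, while a small 640 × 480 frame falls back to the 48-pixel floor.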
Optionally, the process of preprocessing the captured video content by the capturing device 20 may further include quality screening, that is, the capturing device 20 deletes video frames that are overexposed or blurred in the video content.
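A crude version of such a quality screen might look like the sketch below. The luminance thresholds are illustrative assumptions, not values from the patent:

```python
def is_well_exposed(luma_values, low=16, high=239):
    """Reject frames whose mean luminance sits at either extreme,
    i.e. frames that are likely under- or over-exposed."""
    mean = sum(luma_values) / len(luma_values)
    return low <= mean <= high
```

A real implementation would also screen for blur (for example via the variance of an edge filter), but the exposure check alone already discards washed-out frames.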
The server 10 is configured to receive an original video and motion data sent by the shooting device 20, select a highlight from the original video according to the motion data, and insert the highlight into a preset video template to obtain a composite video; meanwhile, the server 10 is further configured to, upon receiving a video acquisition request sent by the mobile terminal 30, acquire a target composite video corresponding to the video acquisition request and send the target composite video to the mobile terminal 30. The server 10 may be a single server or a group of servers.
The mobile terminal 30 is configured to receive the target composite video sent by the server 10, and display the target composite video for the user to select. The mobile terminal 30 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, etc.
The mobile terminal 30 has a third-party application (APP) installed which can run an applet; through the applet the user may interact with the server 10, for example to watch or download his or her own play video after riding an amusement device. Optionally, when the user enters the applet through the third-party application installed on the mobile terminal 30, the applet may acquire a face image of the user and match it against the composite videos on the server 10, so as to obtain the target composite video featuring the user and display it for viewing, downloading, and the like.
In addition, an application may be installed on the mobile terminal 30 so that the user can interact with the server 10 through the application to view, download, and otherwise use a target composite video featuring the user.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video editing method provided in an embodiment of the present application, where the video editing method is applied to a server 10, and may include the following steps:
step S101, receiving an original video sent by a shooting device and motion data of a motion device, wherein the motion data is collected by a motion data collecting device.
In the embodiment, the photographing devices 20 are fixed on the sports apparatus, and it is ensured that one photographing device 20 can capture a video of at least one user riding the sports apparatus, for example, if the sports apparatus is a roller coaster, one photographing device 20 is correspondingly arranged in each row of seats, so that the photographing device 20 can capture a play video of a row of users. The shooting device 20 is always in an on state, when the motion device moves, the shooting device 20 shoots an original video of a user, the motion data acquisition device 21 acquires motion data of the motion device, and the time of the original video and the time of the motion data are corresponding, that is, an original video frame and a motion data value at the same time point are corresponding.
The original video is obtained by preprocessing the captured video content by the capturing device 20, and the original video is mainly associated with at least one specific user, for example, one capturing device 20 is provided in a row of seats of a roller coaster, and for the original video captured by one capturing device 20, the original video is mainly associated with one or two users in a row of seats corresponding to the capturing device 20. The original video can be a complete video or several independent small videos, and the duration of the original video can be 30s-1 min.
Step S102, at least one video segment is selected from the original video according to the motion data.
In this embodiment, the motion data collected by the motion data collection device 21 may be one or more of air pressure data, position data, acceleration data, velocity data, angular velocity data, gravity data, and the like, and meanwhile, the motion data includes a plurality of data values and a collection time point corresponding to each data value.
The video segments selected from the original video may include, but are not limited to, the segment when the sports equipment starts to move, segments that embody the characteristics of the equipment, and so on. For example, if the equipment is an amusement ride with large height changes, such as a roller coaster or a pirate ship, the segment when the roller coaster climbs to the apex and starts to dive can be selected; if it is a ride with large acceleration changes, such as a rapids ride or a drop tower, the segment during sudden acceleration can be selected; if it is a rotating ride, such as a carousel or flying chairs, the segment when it reaches a predetermined position (for example, a spot with beautiful scenery or a wide field of view) can be selected. The segments may also be selected according to the motion data recorded during operation; for example, for a roller coaster, the segment with the largest or smallest gravity value may be selected.
Optionally, the manner of selecting at least one video segment from the original video according to the motion data may include: firstly, acquiring a data value reaching a set value in motion data or a data value which is greatly changed compared with a previous/next data value; then, acquiring a collection time point corresponding to the data value as a reference point, and selecting a video segment with a certain time length (for example, 5-15 s) before and/or after the reference point from the original video.
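The two-step selection just described, first finding a qualifying data value and then cutting a window around its acquisition time, can be sketched as follows. The function names and the window lengths are illustrative assumptions within the 5-15 s range given above:

```python
def reference_points(values, times, set_value=None, min_jump=None):
    """Return acquisition time points whose data value reaches set_value,
    or changes by at least min_jump relative to the previous sample."""
    refs = []
    for i, (v, t) in enumerate(zip(values, times)):
        if set_value is not None and v >= set_value:
            refs.append(t)
        elif min_jump is not None and i > 0 and abs(v - values[i - 1]) >= min_jump:
            refs.append(t)
    return refs

def clip_window(ref, before=5.0, after=10.0, video_length=60.0):
    """Cut a clip spanning `before` seconds ahead of and `after` seconds
    past the reference point, clamped to the video bounds."""
    return (max(0.0, ref - before), min(video_length, ref + after))
```

For instance, a sample that jumps by more than the threshold marks its acquisition time as a reference point, and the clip is then the clamped window around it.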
For example, after the server 10 receives the original video and motion data sent by the shooting device 20, if a certain data value in the motion data satisfies a condition (for example, the air pressure value reaches the set air pressure value, the acceleration data value reaches the set acceleration value, or the angular velocity data value reaches the set angular velocity value), a video segment is selected according to the time point corresponding to that data value.
As an implementation manner, referring to fig. 3, step S102 may include:
and a substep S1021, obtaining a target selection condition corresponding to the motion data.
In this embodiment, the target selection condition refers to a condition for locating a target data value from the motion data, and after the target data value is located according to the target selection condition, the acquisition time point corresponding to the target data value may be used as a reference point, and a video segment before and/or after the reference point for a certain duration (e.g., 5 to 15 seconds, etc.) is selected from the original video.
In an embodiment, the process of obtaining the target selection condition corresponding to the motion data may include:
firstly, analyzing the motion data to determine the target equipment identification corresponding to the motion data.
The server 10 stores a plurality of device identifiers and the selection condition corresponding to each identifier in advance. The device identifiers represent the sports equipment: different pieces of equipment correspond to different identifiers, and the same piece of equipment always corresponds to the same identifier. For example, the device identifier of the roller coaster is 1, that of the rapids ride is 2, that of the carousel is 3, and so on, and the corresponding selection conditions are, in order: the air pressure data value reaches a preset air pressure value, the acceleration data value reaches a preset acceleration value, and the position data value corresponds to a preset position, as shown in Table 1 below:
TABLE 1

Device identifier    Selection condition
1                    The air pressure data value reaches the preset air pressure value
2                    The acceleration data value reaches the preset acceleration value
3                    The position data value corresponds to a preset position
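The Table 1 lookup amounts to a dictionary from device identifier to selection condition. The condition labels below are shorthand strings for illustration, not the patent's wording:

```python
# Shorthand condition labels mirroring Table 1.
SELECTION_BY_DEVICE_ID = {
    1: "air_pressure_reaches_preset",    # e.g. roller coaster
    2: "acceleration_reaches_preset",    # e.g. rapids ride
    3: "position_matches_preset",        # e.g. carousel
}

def target_selection_condition(device_id):
    """Resolve the target selection condition for a parsed device identifier."""
    return SELECTION_BY_DEVICE_ID[device_id]
```

So once the motion data has been parsed to identifier 2, the server resolves the acceleration-based condition, matching the worked example in the text.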
After receiving the original video and the motion data sent by the shooting device 20, the server 10 first analyzes the motion data, and determines a target device identifier corresponding to the motion data, for example, the target device identifier is 2.
And then, according to the target equipment identification, determining a target selection condition corresponding to the motion data from the multiple selection conditions.
After the server 10 obtains the target device identifier corresponding to the motion data, it can obtain the target selection condition corresponding to the target device identifier, for example, if the target device identifier is 2, the target selection condition is: the acceleration data value reaches a preset acceleration value.
In another embodiment, the process of obtaining the target selection condition corresponding to the motion data may include:
first, a target device type corresponding to the motion data is acquired.
The server 10 stores a plurality of device types and the selection condition corresponding to each type in advance. A device type is the category a piece of sports equipment belongs to; the amusement rides in a park can be classified in advance to obtain all device types. For example, if the park's rides include a roller coaster, a pirate ship, a rapids ride, a drop tower, a carousel and flying chairs, three device types can be obtained: equipment with large height changes, equipment with large acceleration changes, and rotating equipment. A selection condition is then set for each device type; for the three types just listed the conditions are, in order: the air pressure data value reaches a preset air pressure value, the acceleration data value reaches a preset acceleration value, and the position data value corresponds to a preset position, as shown in Table 2 below:
TABLE 2

Device type                                  Selection condition
Equipment with large height changes          The air pressure data value reaches the preset air pressure value
Equipment with large acceleration changes    The acceleration data value reaches the preset acceleration value
Rotating equipment                           The position data value corresponds to a preset position
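The type-based lookup of Table 2 can likewise be sketched as a two-level mapping, first from ride to type and then from type to condition. The ride names and labels are illustrative:

```python
# Hypothetical classification of rides into the three device types.
DEVICE_TYPE = {
    "roller coaster": "large_height_change",
    "pirate ship": "large_height_change",
    "rapids ride": "large_acceleration_change",
    "drop tower": "large_acceleration_change",
    "carousel": "rotating",
    "flying chairs": "rotating",
}

# Shorthand condition labels mirroring Table 2.
SELECTION_BY_TYPE = {
    "large_height_change": "air_pressure_reaches_preset",
    "large_acceleration_change": "acceleration_reaches_preset",
    "rotating": "position_matches_preset",
}

def condition_for_device(device_name):
    """Classify the ride into a device type, then look up its condition."""
    return SELECTION_BY_TYPE[DEVICE_TYPE[device_name]]
```

Grouping rides by type keeps the condition table small: a new ride only needs a type assignment, not a new condition entry.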
After acquiring the motion data sent by the shooting device 20, the server 10 may acquire a target device type corresponding to the motion data based on the device types classified already.
And then, according to the type of the target equipment, determining a target selection condition corresponding to the motion data from the multiple selection conditions.
After the server 10 obtains the type of the target device corresponding to the motion data, the target selection condition corresponding to the type of the target device can be obtained, for example, if the type of the target device is a device with a large height change, the target selection condition is that the air pressure data value reaches a preset air pressure value.
It should be noted that the selection conditions corresponding to some device identifiers or device types may include multiple entries. For example, the device identifier of the roller coaster is 1 and its device type is equipment with large height changes, with corresponding selection conditions: the air pressure data value reaches a preset air pressure value, the gravity data value reaches a preset gravity value, the acceleration data value reaches a preset acceleration value, and so on. To avoid abnormal results when several conditions could match, a priority may be set for each selection condition according to the corresponding device identifier or device type. For example, for the roller coaster with device identifier 1, the conditions above are listed in decreasing priority, and the target selection condition is determined according to that priority.
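The priority rule above can be sketched as trying each condition in decreasing priority and keeping the first that locates a target value. The callable-based structure is a hypothetical illustration, not the patent's code:

```python
def locate_target(conditions, motion_data):
    """`conditions` is a list of callables sorted by decreasing priority;
    each returns a located target value or None. The first hit wins."""
    for condition in conditions:
        hit = condition(motion_data)
        if hit is not None:
            return hit
    return None
```

For example, if the highest-priority air-pressure condition finds no qualifying value, the search falls through to the gravity condition, and so on down the list.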
And a substep S1022, locating a target data value from the motion data according to the target selection condition, and obtaining a target time point corresponding to the target data value.
In this embodiment, the target selection condition may include that the air pressure data value reaches a preset air pressure value, or the air pressure change rate of two adjacent air pressure data values reaches a preset air pressure change rate, or the position data value corresponds to a preset position, or the acceleration data value reaches a preset acceleration value, or the difference between the consecutive preset number of acceleration data values and the gravitational acceleration is greater than a preset value, or the angular velocity data reaches a preset angular velocity value, or the accumulated value of at least one angular velocity data value within a preset time reaches a preset value, and the like, and the implementation process of the sub-step S1022 is described in detail below.
As an embodiment, when the motion data includes air pressure data, the air pressure data may be collected by an air pressure meter, the air pressure data includes a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition is: the air pressure data value reaches the preset air pressure value, in this case, the sub-step S1022 may include:
S1022-1, a target air pressure data value reaching the preset air pressure value is located from the air pressure data.
The preset air pressure value can be the air pressure value when the sports equipment runs to the highest point or the lowest point, or the air pressure value when the sports equipment just starts to run.
And S1022-2, acquiring a target time point corresponding to the target air pressure data value.
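Steps S1022-1 and S1022-2 above amount to a threshold scan over time-stamped samples. A minimal sketch in Python follows; the sample layout (ordered pairs of collection time and pressure) and the direction of the comparison (pressure drops to the preset value as the ride climbs) are assumptions for illustration.

```python
def locate_by_pressure_threshold(samples, preset_hpa):
    """samples: (collection_time_s, pressure_hpa) pairs in collection order.
    Return the collection time of the first value that drops to or below
    the preset air pressure value, or None if it is never reached."""
    for t, p in samples:
        if p <= preset_hpa:
            return t
    return None

# Barometer samples taken every 0.2 s while the ride climbs
pressure = [(0.0, 1013.2), (0.2, 1013.0), (0.4, 1012.7), (0.6, 1012.4)]
target_time = locate_by_pressure_threshold(pressure, 1012.7)
```

Here `target_time` is the target time point used later to cut the video segment.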
As another embodiment, when the motion data includes air pressure data, the air pressure data may be collected by an air pressure meter, the air pressure data includes a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition is: the air pressure change rate of two adjacent air pressure data values reaches the preset air pressure change rate, in this case, the sub-step S1022 may include:
S1022-3, two adjacent air pressure data values with the air pressure change rate reaching the preset air pressure change rate are determined from the air pressure data values.
The air pressure change rate may be an increase in a subsequent air pressure data value compared to a previous air pressure data value of two adjacent air pressure data values.
And S1022-4, taking the latter one of the two adjacent air pressure data values as the target air pressure data value.
And S1022-5, acquiring a target time point corresponding to the target air pressure data value.
For example, since the barometer collects data every 0.2 s, the collection time point at which the change in the air pressure value is largest can be taken as the target time point; this point generally corresponds to a position slightly below the middle of a downhill slope. When the sports apparatus is a roller coaster, each row of shooting devices 20 provided on the roller coaster may determine the target time point in this manner. In addition, taking the collection time point with the largest change as the target time point effectively avoids an abnormal target time point when the roller coaster stops due to a fault.
In the case of an amusement facility having a large height change such as a roller coaster, the target air pressure data value can be determined from the change in the air pressure data values detected by the air pressure gauge; for example, when the gauge detects that the air pressure has decreased by 0.18 hPa within 10 seconds (corresponding to a 2 m rise of the roller coaster), the decreased air pressure data value is used as the target air pressure data value.
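Steps S1022-3 to S1022-5 can be sketched as a scan over adjacent sample pairs. This is an illustrative Python sketch, assuming pressure rises as the ride descends; the data layout and thresholds are not from the disclosure.

```python
def locate_by_pressure_change(samples, preset_change_hpa):
    """samples: (collection_time_s, pressure_hpa) pairs in collection order.
    Find the first adjacent pair whose pressure increase reaches the preset
    change rate; the later sample's collection time is the target time point."""
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if p1 - p0 >= preset_change_hpa:
            return t1
    return None

# Barometer samples every 0.2 s during a descent (pressure rising)
descent = [(0.0, 1012.4), (0.2, 1012.5), (0.4, 1012.8), (0.6, 1013.0)]
target_time = locate_by_pressure_change(descent, 0.25)
```

The later of the two adjacent values is used, matching S1022-4.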
As another embodiment, when the motion data includes position data, the position data may be collected by a gyroscope, the position data includes a plurality of position data values and a collection time point corresponding to each position data value, and the target selection condition is: the position data value corresponds to a preset position, in which case sub-step S1022 may include:
S1022-6, locating a target position data value corresponding to the preset position from the position data.
The preset position may be a position where the user can take a picture conveniently, such as a beautiful landscape or a wide field of view, or may be a position where the sports apparatus is just started to operate.
And S1022-7, acquiring a target time point corresponding to the target position data value.
As another embodiment, when the motion data includes acceleration data, the acceleration data may be acquired by an acceleration sensor, the acceleration data includes a plurality of acceleration data values and an acquisition time point corresponding to each acceleration data value, and the target selection condition is: the acceleration data value reaches the preset acceleration value, in which case the sub-step S1022 may include:
And S1022-8, locating a target acceleration data value reaching a preset acceleration value from the acceleration data.
The preset acceleration may be an acceleration value when the sports apparatus suddenly accelerates or suddenly decelerates, or an acceleration value when the sports apparatus just starts to operate.
And S1022-9, acquiring a target time point corresponding to the target acceleration data value.
As another embodiment, when the motion data includes acceleration data, the acceleration data may be acquired by an acceleration sensor, the acceleration data includes a plurality of acceleration data values and an acquisition time point corresponding to each acceleration data value, and the target selection condition is: the differences between a consecutive preset number of acceleration data values and the gravitational acceleration are all greater than a preset value. In this case, the sub-step S1022 may include:
And S1022-10, determining, from the plurality of acceleration data values, a consecutive preset number of acceleration data values whose differences from the gravitational acceleration are all greater than the preset value.
And S1022-11, taking the last one of the preset number of acceleration data values as a target acceleration data value.
And S1022-12, acquiring a target time point corresponding to the target acceleration data value.
For example, for an amusement facility with a large acceleration change such as a pirate ship, when the acceleration sensor detects that the differences between the acceleration data values and the gravitational acceleration g (about 9.8 m/s²) are all greater than 1 m/s² for 6 consecutive samples, the acceleration data value corresponding to the 6th such sample is taken as the target acceleration data value.
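The consecutive-sample rule of S1022-10 to S1022-12 can be sketched as a streak counter. A minimal Python illustration follows; the sample layout and the use of sample indices as collection time points are assumptions for demonstration.

```python
def locate_by_sustained_acceleration(samples, g=9.8, excess=1.0, count=6):
    """samples: (collection_time, acceleration_ms2) pairs in collection order.
    Return the collection time of the count-th consecutive sample whose
    difference from the gravitational acceleration g exceeds `excess`,
    or None if no such streak occurs."""
    streak = 0
    for t, a in samples:
        streak = streak + 1 if abs(a - g) > excess else 0
        if streak == count:
            return t
    return None

# Collection time points are sample indices here; the ride swings hard
# from the third sample onward, so the streak of 6 ends at index 7.
samples = [(i, 9.8 if i < 2 else 12.0) for i in range(10)]
target_time = locate_by_sustained_acceleration(samples)
```

Per S1022-11, the last sample of the streak supplies the target time point.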
As another embodiment, when the motion data includes angular velocity data, the angular velocity data may be collected by a gyroscope, the angular velocity data includes a plurality of angular velocity data values and a collection time point corresponding to each angular velocity data value, and the target selection condition is: the angular velocity data reaches the preset angular velocity value, in which case the sub-step S1022 may include:
And S1022-13, locating a target angular velocity data value reaching the preset angular velocity value from the angular velocity data.
The preset angular velocity value may be the angular velocity value when the sports apparatus operates farthest from or closest to the rotating shaft, or may be the angular velocity value when the sports apparatus just starts to operate.
And S1022-14, acquiring a target time point corresponding to the target angular velocity data value.
As another embodiment, when the motion data includes angular velocity data, the angular velocity data may be collected by a gyroscope, the angular velocity data includes a plurality of angular velocity data values and a collection time point corresponding to each angular velocity data value, and the target selection condition is: the accumulated value of at least one angular velocity data value within the preset time reaches the preset value, in which case the sub-step S1022 may include:
And S1022-15, determining, from the plurality of angular velocity data values, at least one angular velocity data value whose accumulated value within a preset time reaches a preset value.
And S1022-16, taking the last one of the at least one angular velocity data value as a target angular velocity data value.
And S1022-17, acquiring a target time point corresponding to the target angular velocity data value.
For amusement equipment with large angular velocity changes, such as spinning coasters, carousels, and spinning teacups, the change in angular velocity data values detected by the gyroscope can be used to determine the target angular velocity data value. For example, when the gyroscope detects that the accumulated value of at least one angular velocity data value of the carousel within 20 s reaches 1.2 rad/s, the last angular velocity data value within that 20 s is taken as the target angular velocity data value.
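One plausible reading of S1022-15 to S1022-17 is a sliding-window accumulation, sketched below in Python. The window length, preset accumulated value, and sample layout are illustrative assumptions.

```python
from collections import deque

def locate_by_angular_accumulation(samples, window_s, preset_sum):
    """samples: (collection_time_s, angular_velocity_rad_s) pairs in order.
    Return the collection time of the last sample in the first window of
    window_s seconds whose accumulated angular-velocity values reach
    preset_sum, or None."""
    buf = deque()
    total = 0.0
    for t, w in samples:
        buf.append((t, w))
        total += w
        # drop samples that fell out of the time window
        while buf and t - buf[0][0] > window_s:
            total -= buf.popleft()[1]
        if total >= preset_sum:
            return t
    return None

# A ride turning at a steady 0.2 rad/s, sampled once per second
samples = [(i, 0.2) for i in range(60)]
target_time = locate_by_angular_accumulation(samples, 20, 1.1)
```

The sample that pushes the windowed accumulation past the preset value supplies the target time point, matching S1022-16.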
And a substep S1023 of selecting a video segment with preset duration before and/or after the target time point from the original video based on the target time point to obtain at least one video segment.
In this embodiment, after a target data value satisfying the target selection condition is located from the motion data and the target time point (e.g., 12:01) corresponding to the target data value is obtained, since the collection time of the motion data and the shooting time of the original video correspond to each other, a video segment of a preset duration (e.g., 10 s) before the target time point, after the target time point, or both before and after it may be selected from the original video according to the target time point, so as to obtain at least one video segment. For example, the selected video segments may be 11:56-12:01, or 12:01-12:06, or both 11:56-12:01 and 12:01-12:06.
The preset duration can generally be set to 1-5 s, and the cumulative duration of the at least one video segment can be 10-20 s; the preset duration can also be flexibly set according to the actual situation, which is not limited herein.
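Sub-step S1023 reduces to computing clip ranges around the target time point, clamped to the bounds of the original video. A minimal sketch in Python, with times expressed in seconds from the start of the video (an assumption for illustration):

```python
def select_clips(target_t, preset_len, video_len, before=True, after=True):
    """Return (start_s, end_s) ranges of video segments of preset_len
    seconds taken before and/or after the target time point, clamped so
    they stay inside the original video."""
    clips = []
    if before:
        clips.append((max(0.0, target_t - preset_len), target_t))
    if after:
        clips.append((target_t, min(video_len, target_t + preset_len)))
    return clips

# 5 s before and after a highlight at t = 301 s in a 360 s original video
ranges = select_clips(301.0, 5.0, 360.0)
```

The actual cutting of frames from the container file (e.g., with a media library) is omitted; only the range arithmetic is shown.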
Step S103, inserting at least one video clip into a preset video template to obtain a composite video, wherein the video template comprises at least one template clip, and the composite video comprises at least one video clip and at least one template clip.
In this embodiment, the video template includes N template segments, where N = 1, 2, …; meanwhile, the video template may further include a leader and a trailer, the N template segments being disposed between the leader and the trailer. The N template segments, the leader, and the trailer together form M blanks, where M = 1, 2, …, and the M blanks are not adjacent to one another. In addition, the number of the M blanks is consistent with the number of the selected at least one video segment, and the cumulative duration of the M blanks is consistent with the cumulative duration of the selected at least one video segment, so that the at least one video segment can be inserted into the M blanks to obtain the composite video.
Optionally, the template segment may be, but is not limited to, a template video, a special-effect picture, a subtitle, and the like; the template video may be an aerial video of an amusement park or of amusement equipment, for example, a landscape video of an amusement park or an aerial video of a roller coaster track. Meanwhile, the M blanks in the video template may be set according to the rhythm of the background music; for example, referring to fig. 4, at least one transition point is determined according to the rhythm of the background music, and the positions of the blanks in the video template are set according to the transition points, where one transition point corresponds to one blank.
Optionally, the composite video includes at least one video segment and at least one template segment; meanwhile, the composite video may further include a leader and a trailer, with the at least one video segment and the at least one template segment disposed between them. The at least one video segment and the at least one template segment in the composite video may be arranged at intervals. The intervals may be one-to-one, for example, in fig. 4, video segments 1, 2, 3 and template segments 1, 2 are arranged at one-to-one intervals; or one-to-many, many-to-one, or many-to-many intervals are possible, for example, referring to fig. 5, video segment 1, video segment 2, and template segment 1 are arranged at an interval of one template segment to two video segments. The specific manner of the interval setting can be flexibly set according to the actual situation and is not limited herein.
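Filling a template's blanks with the selected segments, as in step S103, can be sketched as follows. Strings and a sentinel object stand in for actual media clips; this is an illustrative assumption, not the disclosed implementation.

```python
BLANK = object()  # marks a blank slot in the video template

def assemble(template, clips):
    """Fill the template's blanks with the selected video segments in
    order. The number of blanks must equal the number of segments."""
    it = iter(clips)
    out = [next(it) if item is BLANK else item for item in template]
    if any(True for _ in it):
        raise ValueError("more video segments than blanks")
    return out

# A template with a leader, a trailer, two template segments and two
# non-adjacent blanks, filled one-to-one as in fig. 4
template = ["leader", BLANK, "template1", BLANK, "template2", "trailer"]
video = assemble(template, ["clip1", "clip2"])
```

In a real pipeline the elements would be decoded clips concatenated by a media framework; only the slot-filling logic is shown.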
In a possible situation, after receiving the original video sent by the shooting device 20, the server 10 may perform face recognition on the original video to establish a binding relationship between a face and the original video. Thus, on the basis of fig. 2, fig. 6 is another schematic flow chart of the video editing method provided by the present invention; referring to fig. 6, before step S102, the video editing method further includes:
and step S111, carrying out face recognition on the original video to obtain the corresponding relation between the face and the original video and storing the corresponding relation into the face database.
In this embodiment, after receiving the original video sent by the shooting device 20, the server 10 may perform face recognition on the original video, for example, perform face recognition by using a pre-trained face recognition model to obtain a corresponding relationship between a face and the original video, where the face may be represented by a multidimensional vector, for example, a 128-dimensional vector.
Optionally, after receiving a section of original video sent by the shooting device 20, the server 10 may first perform face detection on that section and select the frame with the largest face proportion as a reference frame; then, face recognition is performed on the reference frame, and the other frames are matched against it, with frames that fail to match being discarded. When a new original video frame sent by the shooting device 20 is subsequently received, it is likewise matched against the reference frame, and the above steps are repeated to establish a binding relationship between the original video and the face, that is, one face corresponds to one original video.
In addition, the reference frame is not fixed: if a video frame corresponds to the same face as the reference frame and its face proportion is larger than that of the reference frame, that video frame is used as the new reference frame.
In this way, each time the server 10 receives an original video sent by one section of the shooting device 20, face matching is performed once to obtain a corresponding relationship between a face and the original video, and the corresponding relationship is stored in the face database.
Then, for the original video corresponding to each face, the server 10 sequentially performs video editing, that is, executes steps S102 to S103, obtains a composite video corresponding to each original video, that is, a composite video corresponding to each face, and stores the corresponding relationship between the original video and the composite video into the face database, that is, the face database includes the corresponding relationship between the face and the original video and the corresponding relationship between the original video and the composite video.
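The two correspondences held in the face database (face to original video, original video to composite video) can be sketched with a simple keyed store. Here a string `face_id` stands in for the 128-dimensional face vector, and nearest-neighbor face matching is omitted; both are assumptions for illustration.

```python
face_db = {}  # face_id -> {"original": path, "composite": path}

def bind_face(face_id, original_path):
    """Record the face -> original-video correspondence (step S111)."""
    face_db.setdefault(face_id, {})["original"] = original_path

def bind_composite(face_id, composite_path):
    """Record the original -> composite correspondence after editing."""
    face_db[face_id]["composite"] = composite_path

def composite_for(face_id):
    """Look up the composite video for a recognized face (S113-S114)."""
    entry = face_db.get(face_id)
    return entry.get("composite") if entry else None
```

A production system would index face vectors for similarity search rather than exact keys; the sketch only shows the bookkeeping.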
Meanwhile, the user may watch or download his/her own composite video through the mobile terminal 30, and therefore, referring to fig. 6, after step S103, the video editing method further includes:
step S112, acquiring a video acquisition request sent by the mobile terminal, where the video acquisition request includes a face image.
In this embodiment, after experiencing the sports equipment, if the user wants to watch or download his/her own composite video, the user may enter an applet running under a third-party application installed in the mobile terminal 30, or an application installed in the mobile terminal 30, and so on, to watch or download it. The applet or application first prompts the user to upload a face image; the user can take a self-portrait through the mobile terminal 30 to obtain the face image and upload it to the server 10 through the applet or application.
And step S113, carrying out face recognition on the face image to obtain a target face corresponding to the face image.
In this embodiment, after receiving a video acquisition request sent by a user through the mobile terminal 30, the server 10 may perform face search on a face image in the video acquisition request to determine a target composite video corresponding to a face in the face image.
Alternatively, the server 10 may perform face recognition on the face image, for example, perform face recognition using a pre-trained face recognition model, to obtain a target face corresponding to the face image, where the target face is represented by a multidimensional vector, for example, a 128-dimensional vector.
And step S114, determining a target synthetic video corresponding to the target face based on the face database.
In this embodiment, after the server 10 performs face recognition on the face image sent by the mobile terminal 30 to obtain the target face corresponding to the face image, the target composite video corresponding to the target face may be determined based on the target face, the correspondence between faces and original videos, and the correspondence between original videos and composite videos stored in advance in the face database.
And step S115, sending the target composite video to the mobile terminal so that the mobile terminal displays the target composite video.
In this embodiment, after the server 10 acquires the target composite video, the target composite video may be sent to the mobile terminal 30, for example, to an applet running under a third-party application installed in the mobile terminal 30, or an application installed in the mobile terminal 30, so that the mobile terminal displays the target composite video, and the user can view or download the target composite video of the user.
In another possible situation, after receiving the original video sent by the shooting device 20, the server 10 may perform face recognition on the original video to establish a binding relationship between a face and the original video. When the user wants to watch or download his/her own video through the mobile terminal 30, the server 10 may find the user's original video through the user's face image, perform video editing to obtain the user's composite video, and send it to the mobile terminal 30. Thus, on the basis of fig. 2, fig. 7 is another flow diagram of the video editing method provided by the present invention; referring to fig. 7, before step S102, the video editing method further includes:
step S121, performing face recognition on the original video to obtain a corresponding relationship between a face and the original video, and storing the corresponding relationship into a face database, that is, the face database includes a corresponding relationship between a face and the original video.
Step S122, a video acquisition request sent by the mobile terminal is acquired, wherein the video acquisition request comprises a face image.
And S123, carrying out face recognition on the face image to obtain a target face corresponding to the face image.
Step S124, based on the face database, determining a target original video corresponding to the target face.
Then, the target original video is subjected to video editing, that is, steps S102 to S103 are performed on the target original video as an original video, so as to obtain a composite video corresponding to the target original video, and the composite video is sent to the mobile terminal 30 that requests the video, and therefore, referring to fig. 7, after step S103, the video editing method further includes:
and step S125, sending the composite video corresponding to the target original video to the mobile terminal so that the mobile terminal displays the composite video corresponding to the target original video.
Compared with the prior art, the method has the following beneficial effects:
First, the method and the device automatically select the highlight segments in the original video according to the motion data of the sports equipment to generate the composite video, so no manual participation is needed and video editing efficiency is improved;
second, the shooting device 20 preprocesses the actually shot video content to obtain the original video, which shortens its duration, reduces the amount of data transmitted, and lowers the requirements on bandwidth and storage space;
third, the video content shot by the shooting device 20 alone is monotonous and not visually appealing; the server 10 selects highlight segments from the original video according to the motion data and inserts them into a preset video template to obtain a composite video enriched with background music, special-effect pictures, landscape pictures, and the like, making the video content rich and vivid.
In order to perform the corresponding steps in the above method embodiments and various possible embodiments, an implementation of the video editing apparatus is given below. Referring to fig. 8, fig. 8 is a block diagram illustrating a video editing apparatus 100 according to an embodiment of the present disclosure. The video editing apparatus 100 is applied to the server 10, and the video editing apparatus 100 includes: the device comprises a receiving module 101, a selecting module 102 and a processing module 103.
The receiving module 101 is configured to receive an original video sent by a shooting device and motion data of a motion device, where the motion data is collected by a motion data collecting device.
Optionally, the original video is obtained by preprocessing the shot video content by the shooting device 20; the step of preprocessing the shot video content by the shooting device 20 includes: the shooting equipment carries out face detection on the video content to obtain a face area of each video frame in the video content; and the shooting equipment calculates the picture proportion of the face area of each video frame in each video frame, and deletes the video frames with the picture proportion smaller than the preset proportion to obtain the original video.
Optionally, the original video is obtained by preprocessing the shot video content by the shooting device 20; the step of preprocessing the shot video content by the shooting device 20 includes: the shooting equipment carries out face detection on the video content to obtain a face area of each video frame in the video content; the shooting equipment obtains the face pixel size corresponding to the face area of each video frame, and deletes the video frame with the face pixel size smaller than the preset minimum face pixel size to obtain the original video.
Optionally, the step of the photographing apparatus 20 preprocessing the photographed video content further includes: the capture device deletes overexposed or blurred video frames in the video content.
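The two preprocessing criteria above (face proportion of the picture and absolute face pixel size) can be combined into one filtering pass. A minimal Python sketch, with frame records and threshold values assumed for illustration:

```python
def filter_frames(frames, min_ratio=0.05, min_face_px=32 * 32):
    """frames: (frame_id, face_area_px, frame_area_px) triples produced by
    face detection. Keep only frames whose face region is large enough
    both as a proportion of the picture and in absolute pixel size."""
    return [
        fid for fid, face_px, frame_px in frames
        if face_px / frame_px >= min_ratio and face_px >= min_face_px
    ]

# Three 1280x720 frames: a distant face, a close face, a tiny face
frames = [("f0", 2000, 921600), ("f1", 80000, 921600), ("f2", 500, 921600)]
kept = filter_frames(frames)
```

Overexposure and blur checks (the third criterion) would need per-pixel statistics and are omitted here.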
A selecting module 102, configured to select at least one video segment from the original video according to the motion data.
Optionally, the motion data includes a plurality of data values and an acquisition time point corresponding to each data value; the selecting module 102 is specifically configured to: acquiring a target selection condition corresponding to the motion data; positioning a target data value from the motion data according to a target selection condition, and acquiring a target time point corresponding to the target data value; and selecting a video clip with preset duration before and/or after the target time point from the original video based on the target time point to obtain at least one video clip.
Optionally, the server 10 stores a plurality of device identifiers and a selection condition corresponding to each device identifier in advance; the manner of acquiring the target selection condition corresponding to the motion data performed by the selection module 102 includes: analyzing the motion data to determine a target device identifier corresponding to the motion data; and determining a target selection condition corresponding to the motion data from the plurality of selection conditions according to the target equipment identifier.
Optionally, the motion data includes air pressure data, the air pressure data includes a plurality of air pressure data values and an acquisition time point corresponding to each air pressure data value, and the target selection condition includes that the air pressure data value reaches a preset air pressure value;
the method for the selection module 102 to position the target data value from the motion data according to the target selection condition and obtain the target time point corresponding to the target data value includes: positioning a target air pressure data value reaching a preset air pressure value from the air pressure data; and acquiring a target time point corresponding to the target air pressure data value.
Optionally, the motion data includes air pressure data, the air pressure data includes a plurality of air pressure data values and an acquisition time point corresponding to each air pressure data value, and the target selection condition includes that the air pressure change rate of two adjacent air pressure data values reaches a preset air pressure change rate;
the selecting module 102 executes a method of locating a target data value from the motion data according to a target selecting condition and obtaining a target time point corresponding to the target data value, including: determining two adjacent air pressure data values with the air pressure change rate reaching a preset air pressure change rate from the plurality of air pressure data values; taking the latter one of the two adjacent air pressure data values as a target air pressure data value; and acquiring a target time point corresponding to the target air pressure data value.
Optionally, the motion data includes position data, the position data includes a plurality of position data values and an acquisition time point corresponding to each position data value, and the target selection condition includes that the position data values correspond to a preset position; the method for the selection module 102 to position the target data value from the motion data according to the target selection condition and obtain the target time point corresponding to the target data value includes: positioning a target position data value corresponding to a preset position from the position data; and acquiring a target time point corresponding to the target position data value.
Optionally, the motion data includes acceleration data, the acceleration data includes a plurality of acceleration data values and a collection time point corresponding to each acceleration data value, and the target selection condition includes that the acceleration data value reaches a preset acceleration value;
the method for the selection module 102 to position the target data value from the motion data according to the target selection condition and obtain the target time point corresponding to the target data value includes: positioning a target acceleration data value reaching a preset acceleration value from the acceleration data; and acquiring a target time point corresponding to the target acceleration data value.
Optionally, the motion data includes acceleration data, the acceleration data includes a plurality of acceleration data values and a collection time point corresponding to each acceleration data value, and the target selection condition includes that the difference values between the acceleration data values of the continuous preset number and the gravitational acceleration are all greater than a preset value;
the selecting module 102 executes a method of locating a target data value from the motion data according to a target selecting condition and obtaining a target time point corresponding to the target data value, including: determining a plurality of acceleration data values which are continuously preset and have difference values with the gravity acceleration larger than a preset value from the plurality of acceleration data values; taking the last one of the preset number of acceleration data values as a target acceleration data value; and acquiring a target time point corresponding to the target acceleration data value.
Optionally, the motion data includes angular velocity data, the angular velocity data includes a plurality of angular velocity data values and an acquisition time point corresponding to each angular velocity data value, and the target selection condition includes that the angular velocity data reaches a preset angular velocity value;
the selecting module 102 executes a method of locating a target data value from the motion data according to a target selecting condition and obtaining a target time point corresponding to the target data value, including: positioning a target angular velocity data value reaching a preset angular velocity value from the angular velocity data; and acquiring a target time point corresponding to the target angular velocity data value.
Optionally, the motion data includes angular velocity data, the angular velocity data includes a plurality of angular velocity data values and an acquisition time point corresponding to each angular velocity data value, and the target selection condition includes that an accumulated value of at least one angular velocity data value within a preset time reaches a preset value;
the selecting module 102 executes a method of locating a target data value from the motion data according to a target selecting condition and obtaining a target time point corresponding to the target data value, including: determining at least one angular velocity data value of which the accumulated value reaches a preset value within a preset time from the plurality of angular velocity data values; taking the last of the at least one angular velocity data value as a target angular velocity data value; and acquiring a target time point corresponding to the target angular velocity data value.
The processing module 103 is configured to insert at least one video segment into a preset video template to obtain a composite video, where the video template includes at least one template segment, and the composite video includes at least one video segment and at least one template segment.
Optionally, the video template further comprises a leader and a trailer, and the at least one template segment is arranged between the leader and the trailer; the composite video likewise further comprises a leader and a trailer, and the at least one video segment and the at least one template segment are arranged between the leader and the trailer.
Optionally, the template segment includes at least one of a template video, a special effect picture, and a subtitle.
Optionally, the server 10 establishes a face database in advance, and the processing module 103 is further configured to: and carrying out face recognition on the original video to obtain the corresponding relation between the face and the original video and storing the corresponding relation into a face database.
Optionally, the face database includes a corresponding relationship between a face and an original video, and a corresponding relationship between an original video and a synthesized video;
the processing module 103 is further configured to: acquiring a video acquisition request sent by a mobile terminal, wherein the video acquisition request comprises a face image; carrying out face recognition on the face image to obtain a target face corresponding to the face image; determining a target synthesized video corresponding to a target face based on a face database; and sending the target composite video to the mobile terminal so that the mobile terminal displays the target composite video.
Optionally, the server is pre-established with a face database, and the processing module 103 is further configured to: perform face recognition on the original video to obtain the correspondence between faces and the original video, and store the correspondence in the face database; acquire a video acquisition request sent by the mobile terminal, wherein the video acquisition request comprises a face image; perform face recognition on the face image to obtain a target face corresponding to the face image; determine a target original video corresponding to the target face based on the face database, take the target original video as the original video, and execute the step of selecting at least one video segment from the original video according to the motion data; and send the composite video corresponding to the target original video to the mobile terminal, so that the mobile terminal displays the composite video corresponding to the target original video.
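The face-database lookup flow described above can be sketched as two dictionary relations, face to original video and original video to composite video, chained after recognition. `recognize_face` is a placeholder for a real face-recognition model (a real system would compare embeddings); all names and data shapes are assumptions for illustration.

```python
# Illustrative sketch: a request carries a face image, recognition maps it to
# a known face ID, and the database relations yield the composite video to
# return to the mobile terminal.

def recognize_face(face_image, known_faces):
    # Placeholder for a real recognition model; here we just match keys.
    return known_faces.get(face_image)

def handle_video_request(face_image, known_faces, face_to_original, original_to_composite):
    face_id = recognize_face(face_image, known_faces)
    if face_id is None:
        return None  # no matching face in the database
    original = face_to_original.get(face_id)
    return original_to_composite.get(original)

known = {"img_alice": "face_001"}
print(handle_video_request("img_alice", known,
                           {"face_001": "orig_42"}, {"orig_42": "composite_42"}))
# -> composite_42
```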
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the video editing apparatus 100 described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Referring to fig. 9, fig. 9 is a block diagram illustrating a server 10 according to an embodiment of the present disclosure. The server 10 includes a processor 11, a memory 12, and a bus 13, and the processor 11 is connected to the memory 12 through the bus 13.
The memory 12 is used to store a program, such as the video editing apparatus 100 shown in fig. 8. The video editing apparatus 100 includes at least one software functional module that can be stored in the memory 12 in the form of software or firmware, or solidified in the Operating System (OS) of the server 10. After receiving an execution instruction, the processor 11 executes the program to implement the video editing method disclosed in the above embodiments.
The memory 12 may include a Random Access Memory (RAM) and may also include a non-volatile memory (NVM).
The processor 11 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 11. The processor 11 may be a general-purpose processor, including a Central Processing Unit (CPU), a Micro Control Unit (MCU), a Complex Programmable Logic Device (CPLD), a Field Programmable Gate Array (FPGA), an embedded ARM processor, and the like.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by the processor 11, implements the video editing method disclosed in the above embodiment.
In summary, the present application provides a video editing method, apparatus, server and storage medium. The method includes: receiving an original video sent by the shooting equipment and motion data of the sports equipment, wherein the motion data is acquired by a motion data acquisition device; selecting at least one video segment from the original video according to the motion data; and inserting the at least one video segment into a preset video template to obtain a composite video, wherein the video template comprises at least one template segment, and the composite video comprises the at least one video segment and the at least one template segment. Compared with the prior art, highlight segments in the original video can be selected automatically according to the motion data of the sports equipment to generate the composite video, so no manual participation is needed and video editing efficiency is improved.
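The step of selecting "a video segment of preset duration before and/or after the target time point" in the method summarized above can be sketched as clamping a window around the target time to the bounds of the video. The durations are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: given a target time point, cut a clip spanning
# before_s seconds before it and after_s seconds after it, clamped so the
# clip never extends outside the original video.

def clip_window(target_s, video_len_s, before_s=3.0, after_s=3.0):
    """Return (start, end) of a clip around target_s, clamped to the video."""
    start = max(0.0, target_s - before_s)
    end = min(video_len_s, target_s + after_s)
    return start, end

print(clip_window(5.0, 60.0))   # -> (2.0, 8.0)
print(clip_window(1.0, 60.0))   # -> (0.0, 4.0), clamped at the start
```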
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within its protection scope. It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.

Claims (21)

1. A video editing method, applied to a server, wherein the server is in communication connection with a shooting device, the shooting device is installed on a sports device and comprises a motion data acquisition device, and the method comprises the following steps:
receiving an original video sent by the shooting device and motion data of the sports device, wherein the motion data is acquired by the motion data acquisition device;
selecting at least one video segment from the original video according to the motion data;
inserting the at least one video clip into a preset video template to obtain a composite video, wherein the video template comprises at least one template clip, and the composite video comprises the at least one video clip and the at least one template clip.
2. The method of claim 1, wherein the motion data comprises a plurality of data values and a corresponding acquisition time point for each data value;
the step of selecting at least one video segment from the original video according to the motion data comprises:
acquiring a target selection condition corresponding to the motion data;
positioning a target data value from the motion data according to the target selection condition, and acquiring a target time point corresponding to the target data value;
and based on the target time point, selecting a video clip with preset duration before and/or after the target time point from the original video to obtain at least one video clip.
3. The method of claim 2, wherein the server stores a plurality of device identifiers and selection conditions corresponding to each device identifier in advance;
the step of obtaining the target selection condition corresponding to the motion data includes:
analyzing the motion data to determine a target device identifier corresponding to the motion data;
and determining a target selection condition corresponding to the motion data from a plurality of selection conditions according to the target equipment identifier.
4. The method of claim 2, wherein the motion data comprises air pressure data, the air pressure data comprises a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition comprises that the air pressure data value reaches a preset air pressure value;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
locating a target air pressure data value reaching the preset air pressure value from the air pressure data;
and acquiring a target time point corresponding to the target air pressure data value.
5. The method of claim 2, wherein the motion data comprises air pressure data, the air pressure data comprises a plurality of air pressure data values and a collection time point corresponding to each air pressure data value, and the target selection condition comprises that the air pressure change rate of two adjacent air pressure data values reaches a preset air pressure change rate;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
determining two adjacent air pressure data values with the air pressure change rate reaching the preset air pressure change rate from the plurality of air pressure data values;
taking the latter one of the two adjacent air pressure data values as a target air pressure data value;
and acquiring a target time point corresponding to the target air pressure data value.
6. The method of claim 2, wherein the motion data comprises position data, the position data comprises a plurality of position data values and a collection time point corresponding to each of the position data values, and the target selection condition comprises that a position data value corresponds to a preset position;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
positioning a target position data value corresponding to the preset position from the position data;
and acquiring a target time point corresponding to the target position data value.
7. The method of claim 2, wherein the motion data comprises acceleration data, the acceleration data comprises a plurality of acceleration data values and a collection time point corresponding to each of the acceleration data values, and the target selection condition comprises that the acceleration data value reaches a preset acceleration value;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
locating a target acceleration data value reaching the preset acceleration value from the acceleration data;
and acquiring a target time point corresponding to the target acceleration data value.
8. The method of claim 2, wherein the motion data comprises acceleration data, the acceleration data comprises a plurality of acceleration data values and a collection time point corresponding to each acceleration data value, and the target selection condition comprises that the differences between a preset number of consecutive acceleration data values and the gravitational acceleration are each greater than a preset value;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
determining, from the plurality of acceleration data values, a preset number of consecutive acceleration data values whose differences from the gravitational acceleration are each greater than the preset value;
taking the last one of the preset number of acceleration data values as a target acceleration data value;
and acquiring a target time point corresponding to the target acceleration data value.
9. The method according to claim 2, wherein the motion data includes angular velocity data, the angular velocity data includes a plurality of angular velocity data values and a collection time point corresponding to each of the angular velocity data values, and the target selection condition includes that the angular velocity data value reaches a preset angular velocity value;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
positioning a target angular velocity data value reaching the preset angular velocity value from the angular velocity data;
and acquiring a target time point corresponding to the target angular velocity data value.
10. The method of claim 2, wherein the motion data comprises angular velocity data, the angular velocity data comprises a plurality of angular velocity data values and a collection time point corresponding to each of the angular velocity data values, and the target selection condition comprises that an accumulated value of at least one angular velocity data value reaches a preset value within a preset time;
the step of locating a target data value from the motion data according to the target selection condition and acquiring a target time point corresponding to the target data value includes:
determining at least one angular velocity data value of which the accumulated value within the preset time reaches a preset value from the plurality of angular velocity data values;
taking the last of the at least one angular velocity data value as a target angular velocity data value;
and acquiring a target time point corresponding to the target angular velocity data value.
11. The method of claim 1, wherein the server is pre-established with a face database, and before the step of selecting at least one video segment from the original video according to the motion data, the method further comprises:
and carrying out face recognition on the original video to obtain the corresponding relation between the face and the original video and storing the corresponding relation into the face database.
12. The method of claim 11, wherein the face database comprises a correspondence between a face and an original video, and a correspondence between an original video and a composite video, the server further being communicatively connected to the mobile terminal;
after the step of inserting the at least one video segment into a preset video template to obtain a composite video, the method further includes:
acquiring a video acquisition request sent by the mobile terminal, wherein the video acquisition request comprises a face image;
carrying out face recognition on the face image to obtain a target face corresponding to the face image;
determining a target composite video corresponding to the target face based on the face database;
and sending the target composite video to the mobile terminal so that the mobile terminal displays the target composite video.
13. The method of claim 1, wherein the server is pre-established with a face database, the server further communicatively connected to a mobile terminal;
before the step of selecting at least one video segment from the original video according to the motion data, the method further comprises:
carrying out face recognition on the original video to obtain a corresponding relation between a face and the original video and storing the corresponding relation in the face database;
acquiring a video acquisition request sent by the mobile terminal, wherein the video acquisition request comprises a face image;
carrying out face recognition on the face image to obtain a target face corresponding to the face image;
determining a target original video corresponding to the target face based on the face database, taking the target original video as the original video, and executing the step of selecting at least one video segment from the original video according to the motion data;
after the step of inserting the at least one video segment into a preset video template to obtain a composite video, the method further includes:
and sending the composite video corresponding to the target original video to the mobile terminal so that the mobile terminal displays the composite video corresponding to the target original video.
14. The method of claim 1, wherein the original video is obtained by the shooting device pre-processing captured video content;
the pre-processing of the captured video content by the shooting device comprises:
the shooting device performs face detection on the video content to obtain a face region of each video frame in the video content;
and the shooting device calculates, for each video frame, the proportion of the frame occupied by the face region, and deletes video frames whose proportion is smaller than a preset proportion, to obtain the original video.
15. The method of claim 1, wherein the original video is obtained by the shooting device pre-processing captured video content;
the pre-processing of the captured video content by the shooting device comprises:
the shooting device performs face detection on the video content to obtain a face region of each video frame in the video content;
and the shooting device acquires the face pixel size corresponding to the face region of each video frame, and deletes video frames whose face pixel size is smaller than a preset minimum face pixel size, to obtain the original video.
16. The method of claim 14 or 15, wherein the pre-processing of the captured video content by the shooting device further comprises:
the shooting device deleting over-exposed or blurred video frames from the video content.
17. The method of claim 1, wherein the video template further comprises a leader and a trailer, the at least one template segment being disposed between the leader and the trailer;
the composite video further comprises the leader and the trailer, and the at least one video clip and the at least one template clip are arranged between the leader and the trailer.
18. The method of claim 1, wherein the template segment comprises at least one of a template video, a special effect picture, and a subtitle.
19. A video editing apparatus, applied to a server, wherein the server is in communication connection with shooting equipment, the shooting equipment is installed on sports equipment and comprises a motion data acquisition device, and the video editing apparatus comprises:
a receiving module, configured to receive an original video sent by the shooting equipment and motion data of the sports equipment, wherein the motion data is acquired by the motion data acquisition device;
a selecting module, configured to select at least one video segment from the original video according to the motion data; and
a processing module, configured to insert the at least one video segment into a preset video template to obtain a composite video, wherein the video template comprises at least one template segment, and the composite video comprises the at least one video segment and the at least one template segment.
20. A server, characterized in that the server comprises:
one or more processors;
memory storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-18.
21. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-18.
CN201911231580.4A 2019-12-05 2019-12-05 Video editing method, device, server and storage medium Pending CN110996112A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911231580.4A CN110996112A (en) 2019-12-05 2019-12-05 Video editing method, device, server and storage medium
PCT/CN2020/132585 WO2021109952A1 (en) 2019-12-05 2020-11-30 Video editing method, apparatus and server, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911231580.4A CN110996112A (en) 2019-12-05 2019-12-05 Video editing method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN110996112A true CN110996112A (en) 2020-04-10

Family

ID=70090256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911231580.4A Pending CN110996112A (en) 2019-12-05 2019-12-05 Video editing method, device, server and storage medium

Country Status (2)

Country Link
CN (1) CN110996112A (en)
WO (1) WO2021109952A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111654619A (en) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 Intelligent shooting method and device, server and storage medium
CN112040278A (en) * 2020-09-16 2020-12-04 成都市喜爱科技有限公司 Video processing method and device, shooting terminal, server and storage medium
CN112203142A (en) * 2020-12-03 2021-01-08 浙江岩华文化科技有限公司 Video processing method and device, electronic device and storage medium
CN112702650A (en) * 2021-01-27 2021-04-23 成都数字博览科技有限公司 Blood donation promotion method and blood donation vehicle
WO2021109952A1 (en) * 2019-12-05 2021-06-10 成都市喜爱科技有限公司 Video editing method, apparatus and server, and computer readable storage medium
CN114363712A (en) * 2022-01-13 2022-04-15 深圳迪乐普智能科技有限公司 AI digital person video generation method, device and equipment based on templated editing
CN114500826A (en) * 2021-12-09 2022-05-13 成都市喜爱科技有限公司 Intelligent shooting method and device and electronic equipment
CN115103206A (en) * 2022-06-16 2022-09-23 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium
CN115119044A (en) * 2021-03-18 2022-09-27 阿里巴巴新加坡控股有限公司 Video processing method, device, system and computer storage medium
CN115278299A (en) * 2022-07-27 2022-11-01 腾讯科技(深圳)有限公司 Unsupervised training data generation method, unsupervised training data generation device, unsupervised training data generation medium, and unsupervised training data generation equipment

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN118233704A (en) * 2022-12-21 2024-06-21 北京字跳网络技术有限公司 Video editing method, device, electronic equipment and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0535178A (en) * 1991-07-26 1993-02-12 Pioneer Electron Corp Recording and reproducing device
US20050052532A1 (en) * 2003-09-08 2005-03-10 David Elooz System and method for filming and recording attractions
CN204425487U (en) * 2015-03-17 2015-06-24 百度在线网络技术(北京)有限公司 Flying Camera head unit, bicycle and system of riding
CN107281709A (en) * 2017-06-27 2017-10-24 深圳市酷浪云计算有限公司 The extracting method and device, electronic equipment of a kind of sport video fragment
CN108694737A (en) * 2018-05-14 2018-10-23 星视麒(北京)科技有限公司 The method and apparatus for making image
CN108769560A (en) * 2018-05-31 2018-11-06 广州富勤信息科技有限公司 The production method of medelling digitized video under a kind of high velocity environment
CN110121105A (en) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 Editing video generation method and device
CN110418073A (en) * 2019-07-22 2019-11-05 富咖科技(大连)有限公司 A kind of video automatic collection and synthetic method for Karting

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN110996112A (en) * 2019-12-05 2020-04-10 成都市喜爱科技有限公司 Video editing method, device, server and storage medium


Cited By (15)

Publication number Priority date Publication date Assignee Title
WO2021109952A1 (en) * 2019-12-05 2021-06-10 成都市喜爱科技有限公司 Video editing method, apparatus and server, and computer readable storage medium
CN111654619A (en) * 2020-05-18 2020-09-11 成都市喜爱科技有限公司 Intelligent shooting method and device, server and storage medium
CN112040278A (en) * 2020-09-16 2020-12-04 成都市喜爱科技有限公司 Video processing method and device, shooting terminal, server and storage medium
CN112203142A (en) * 2020-12-03 2021-01-08 浙江岩华文化科技有限公司 Video processing method and device, electronic device and storage medium
CN112702650A (en) * 2021-01-27 2021-04-23 成都数字博览科技有限公司 Blood donation promotion method and blood donation vehicle
CN115119044A (en) * 2021-03-18 2022-09-27 阿里巴巴新加坡控股有限公司 Video processing method, device, system and computer storage medium
CN115119044B (en) * 2021-03-18 2024-01-05 阿里巴巴新加坡控股有限公司 Video processing method, device, system and computer storage medium
CN114500826A (en) * 2021-12-09 2022-05-13 成都市喜爱科技有限公司 Intelligent shooting method and device and electronic equipment
CN114500826B (en) * 2021-12-09 2023-06-27 成都市喜爱科技有限公司 Intelligent shooting method and device and electronic equipment
CN114363712A (en) * 2022-01-13 2022-04-15 深圳迪乐普智能科技有限公司 AI digital person video generation method, device and equipment based on templated editing
CN114363712B (en) * 2022-01-13 2024-03-19 深圳迪乐普智能科技有限公司 AI digital person video generation method, device and equipment based on templated editing
CN115103206A (en) * 2022-06-16 2022-09-23 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium
CN115103206B (en) * 2022-06-16 2024-02-13 北京字跳网络技术有限公司 Video data processing method, device, equipment, system and storage medium
CN115278299A (en) * 2022-07-27 2022-11-01 腾讯科技(深圳)有限公司 Unsupervised training data generation method, unsupervised training data generation device, unsupervised training data generation medium, and unsupervised training data generation equipment
CN115278299B (en) * 2022-07-27 2024-03-19 腾讯科技(深圳)有限公司 Unsupervised training data generation method, device, medium and equipment

Also Published As

Publication number Publication date
WO2021109952A1 (en) 2021-06-10

Similar Documents

Publication Publication Date Title
CN110996112A (en) Video editing method, device, server and storage medium
CN109326310B (en) Automatic editing method and device and electronic equipment
CN110602554B (en) Cover image determining method, device and equipment
KR101731771B1 (en) Automated selection of keeper images from a burst photo captured set
CN107079135B (en) Video data transmission method, system, equipment and shooting device
US20160225405A1 (en) Variable playback speed template for video editing application
US10021431B2 (en) Mobile computing device having video-in-video real-time broadcasting capability
US9836831B1 (en) Simulating long-exposure images
US20220046291A1 (en) Method and device for generating live streaming video data and method and device for playing live streaming video
US11508154B2 (en) Systems and methods for generating a video summary
CN107436921B (en) Video data processing method, device, equipment and storage medium
CN110612721B (en) Video processing method and terminal equipment
CN105262942B (en) Distributed automatic image and video processing
US10084959B1 (en) Color adjustment of stitched panoramic video
CN107977391B (en) Method, device and system for identifying picture book and electronic equipment
CN111368724B (en) Amusement image generation method and system
CN111654619A (en) Intelligent shooting method and device, server and storage medium
CN108540817B (en) Video data processing method, device, server and computer readable storage medium
US9117275B2 (en) Content processing device, integrated circuit, method, and program
US20180278844A1 (en) Photographing method and photographing device of unmanned aerial vehicle, unmanned aerial vehicle, and ground control device
WO2023056896A1 (en) Definition determination method and apparatus, and device
CN114026874A (en) Video processing method and device, mobile device and readable storage medium
CN111444822B (en) Object recognition method and device, storage medium and electronic device
CN114339423B (en) Short video generation method, device, computing equipment and computer readable storage medium
CN109472230B (en) Automatic athlete shooting recommendation system and method based on pedestrian detection and Internet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410