CN115225944A - Video processing method, video processing device, electronic equipment and computer-readable storage medium

Info

Publication number
CN115225944A
CN115225944A
Authority
CN
China
Prior art keywords
video
data
video data
preset
duration
Prior art date
Legal status
Granted
Application number
CN202211140274.1A
Other languages
Chinese (zh)
Other versions
CN115225944B (en)
Inventor
周承涛
张乐
杨作兴
Current Assignee
Shenzhen MicroBT Electronics Technology Co Ltd
Original Assignee
Shenzhen MicroBT Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen MicroBT Electronics Technology Co Ltd
Priority to CN202211140274.1A
Publication of CN115225944A
Application granted
Publication of CN115225944B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4508: Management of client data or end-user data
    • H04N 21/4524: Management of client data or end-user data involving the geographical location of the client
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456: Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure relates to a video processing method, apparatus, electronic device, and computer-readable storage medium. The video processing method includes: shooting a video to obtain first video data; acquiring the positioning position of the shooting location in real time during the shooting of the video; obtaining, according to map data and the positioning positions, second video data presenting the movement of the shooting location on the map; acquiring a preset video duration, and converting the first video data and the second video data into first video data of the preset video duration and second video data of the preset video duration; and obtaining, from the converted first and second video data, third video data that synchronously presents the content of the first video data and the content of the second video data. In this way, the change of the positioning position on the map and the shot video are presented simultaneously in the same video, the change of the movement position and the video content are played back synchronously and in association, and the exercise experience is improved.

Description

Video processing method, video processing device, electronic equipment and computer-readable storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, using mobile terminal devices to assist with sports and fitness has become mainstream. With the assistance of a mobile terminal device, historical exercise information can be obtained; in particular, for forms of exercise that involve geographical movement, such as walking, running, and cycling, a movement track based on geographical position information can be derived.
However, at present the historical movement track and the video shot during the exercise cannot be obtained synchronously.
Disclosure of Invention
In view of this, the present disclosure provides a video processing method, an apparatus, an electronic device, and a computer-readable storage medium that combine the video shot during exercise with the geographic position information of the exercise, strengthening the association between the two, so that a user can obtain, from a single video, both the video content shot during the exercise and the geographic position information recorded while shooting, thereby improving the user's exercise experience.
The technical scheme of the disclosure is realized as follows:
a video processing method, comprising:
shooting a video to obtain first video data;
acquiring the positioning position of a shooting place in real time in the shooting process of the video;
obtaining, according to the map data and the positioning positions, second video data of the movement of the shooting location on the map;
acquiring a preset video time length, and converting the first video data and the second video data into first video data with the preset video time length and second video data with the preset video time length;
and obtaining third video data which synchronously present the content of the first video data and the content of the second video data according to the first video data with the preset video duration and the second video data with the preset video duration.
Further, the obtaining of the second video data of the moving process of the shooting place in the map according to the map data and the positioning position includes:
sequentially presenting icons representing shooting places at the positioning positions in the map data according to the acquisition time sequence of the positioning positions to obtain a plurality of pieces of video frame data of which the contents comprise map images and the icons;
and obtaining the second video data based on the plurality of pieces of video frame data and the acquisition time of the positioning position.
Further, the obtaining of the second video data of the moving process of the shooting place in the map according to the map data and the positioning position further includes:
obtaining a moving track of the shooting place in the map according to the positioning position and the map data;
and presenting the movement track in the video frame data.
Further, the obtaining the second video data based on the multiple pieces of video frame data and the obtaining time of the positioning position includes:
arranging the plurality of pieces of video frame data according to the sequence of the acquisition time of the positioning positions;
inserting transition frame data between the adjacent video frame data, wherein the content of the transition frame data comprises the icon of a transition position between the adjacent positioning positions in the map image content, the number of the transition frame data is determined according to the number of playing video frames in unit time and the acquisition period of the positioning positions, and the transition position is determined according to the number of video frames in unit time contained in the video and the distance between the adjacent positioning positions;
and arranging the plurality of video frame data and the transition frame data according to a time sequence, and obtaining the second video data according to the number of playing video frames in the unit time.
Further, the obtaining of the second video data of the moving process of the shooting place in the map according to the map data and the positioning position includes:
determining a moving range according to all positioning positions obtained in the shooting process;
obtaining second video data comprising first sub video data and second sub video data under the condition that the moving range exceeds a preset range condition threshold value, wherein the first sub video data comprises map content containing all the positioning positions, and the second sub video data comprises map content in a local range containing the positioning positions, which is presented along with the change of the positioning positions;
obtaining the second video data including only the first sub-video data if the movement range does not exceed the preset range condition threshold.
Further, the converting the first video data and the second video data to the first video data with the preset video duration and the second video data with the preset video duration includes:
determining the extraction quantity of first video frames according to the proportion of the duration of the first video data to the duration of the preset video and the quantity of video frame data contained in the first video data;
extracting video frame data from the first video data at equal intervals according to the first video frame extraction quantity;
arranging video frame data extracted from the first video data according to a time sequence, and obtaining the first video data with the preset video duration according to the number of playing video frames in unit time;
and,
determining the extraction quantity of second video frames according to the proportion of the duration of the second video data to the duration of the preset video and the quantity of video frame data contained in the second video data;
extracting video frame data from the second video data at equal intervals according to the second video frame extraction quantity;
and arranging the video frame data extracted from the second video data according to a time sequence, and obtaining the second video data with the preset video duration according to the number of playing video frames in unit time.
Further, after the extracting the video frame data from the first video data at equal intervals, the video processing method further comprises:
comparing the video frame data within the range of the set frame number before and after the extracted video frame data to obtain the video frame data with the highest definition;
and determining the video frame data extracted from the first video data according to the video frame data with the highest definition.
Further, the comparing the video frame data within the range of the set frame number before and after the extracted video frame data to obtain the video frame data with the highest definition includes:
acquiring video frame data within a preset frame number range before and after the extracted video frame data;
respectively extracting respective edge features from the acquired video frame data to obtain edge feature values of the video frame data;
and determining the video frame data with the maximum edge characteristic value as the video frame data with the highest definition.
Further, the obtaining, according to the first video data with the preset video duration and the second video data with the preset video duration, third video data that synchronously presents the content of the first video data and the content of the second video data includes:
determining the position and the size of the first video data with the preset video duration in the third video data and the position and the size of the second video data with the preset video duration in the third video data according to a preset layout condition;
adjusting the size of the first video data with the preset video duration according to the size of the first video data with the preset video duration in the third video data;
adjusting the size of the second video data with the preset video duration according to the size of the second video data with the preset video duration in the third video data;
combining the first video data with the preset video duration after the size is adjusted and the second video data with the preset video duration after the size is adjusted according to the position of the first video data with the preset video duration in the third video data and the position of the second video data with the preset video duration in the third video data;
and obtaining the third video data based on the combined first video data with the preset video duration and the combined second video data with the preset video duration.
A video processing apparatus comprising:
the first video data acquisition module is configured to execute video shooting to obtain first video data;
a positioning position acquisition module configured to perform acquisition of a positioning position of a shooting place in real time in a shooting process of the video;
the second video data obtaining module is configured to obtain second video data of a moving process of the shooting place in a map according to map data and the positioning position, and the duration of the second video data is equal to that of the first video data;
the time length conversion module is configured to execute obtaining of a preset video time length and convert the first video data and the second video data into first video data of the preset video time length and second video data of the preset video time length, wherein the preset video time length is smaller than the time length of the first video data and the second video data;
and the third video data acquisition module is configured to execute the first video data according to the preset video duration and the second video data according to the preset video duration to obtain third video data which synchronously presents the content of the first video data and the content of the second video data.
An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the video processing method of any of the above.
A computer readable storage medium having at least one instruction which, when executed by a processor of an electronic device, enables the electronic device to implement a video processing method as in any one of the above.
According to the above scheme, the video processing method, apparatus, electronic device, and computer-readable storage medium can acquire the positioning position of the shooting location in real time while the video is being shot, obtain second video data of the movement of the shooting location on the map according to the map data and the positioning positions, convert the second video data and the first video data obtained by shooting into the preset video duration, and finally obtain third video data that presents the second video data and the first video data synchronously. The change of the user's positioning position on the map and the video shot by the user are thus presented simultaneously in the same video, so that the user can obtain, from a single video, both the movement position on the map and the video content associated with it. This realizes synchronized, associated playback of the movement position change and the video content, which helps to improve the user's exercise experience and enthusiasm for exercise.
Drawings
FIG. 1 is a flow diagram illustrating a video processing method in accordance with one illustrative embodiment;
FIG. 2 is a flow diagram illustrating obtaining second video data in accordance with an illustrative embodiment;
FIG. 3 is a flow chart illustrating obtaining second video data in accordance with an illustrative embodiment;
FIG. 4 is a flow diagram illustrating presenting a movement trajectory in accordance with an illustrative embodiment;
FIG. 5 is a flowchart illustrating obtaining sub-video data according to conditions in accordance with an illustrative embodiment;
FIG. 6 is a flow diagram illustrating obtaining first video data for a preset video duration in accordance with one illustrative embodiment;
FIG. 7 is a flow diagram illustrating obtaining second video data for a preset video duration in accordance with an illustrative embodiment;
FIG. 8 is a diagram illustrating first video data and second video data having a preset video duration in accordance with one illustrative embodiment;
FIG. 9 is a flow diagram illustrating selection of video frame data in accordance with an exemplary embodiment;
FIG. 10 is a flow diagram illustrating the determination of the highest sharpness video frame data in accordance with one illustrative embodiment;
FIG. 11 is a flowchart illustrating obtaining third video data in accordance with one illustrative embodiment;
FIG. 12A is a diagram illustrating a first layout structure of third video data in accordance with one illustrative embodiment;
FIG. 12B is a diagram illustrating a second layout structure of third video data, in accordance with an illustrative embodiment;
FIG. 12C is a diagram illustrating a third layout structure of third video data, in accordance with one illustrative embodiment;
FIG. 13 is a particular scene flow diagram illustrating a video processing method in accordance with an illustrative embodiment;
FIG. 14 is a diagram illustrating particular scene steps of a video processing method in accordance with one illustrative embodiment;
FIG. 15 is a diagram illustrating further specific scene steps of a video processing method in accordance with one illustrative embodiment;
FIG. 16 is a logical block diagram illustrating a video processing device in accordance with an exemplary embodiment;
FIG. 17 is a diagram illustrating a structure of an electronic device, according to an exemplary embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clearly understood, the present disclosure is further described in detail below with reference to the accompanying drawings and examples.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
At present, more and more people exercise for fitness. In portable forms of exercise such as hiking, running, and cycling, the exerciser can carry a smartphone or a smart wearable device (such as a smart watch or smart glasses) to record positioning information during the exercise. After the exercise is finished, a movement track is automatically generated from the position information recorded during the exercise, and in the relevant terminal application the exercise path is displayed to the exerciser by presenting this movement track. In addition, the exerciser can shoot video during the exercise with a portable smartphone, action camera, or the like, capturing the scenery at the positions passed during the exercise.
In existing approaches, the path display and the shot video cannot be played back in synchronized association: when a user or exerciser wants the video content corresponding to a particular place passed during the exercise, the position of that place on the map must first be determined from the positioning information recorded during the exercise, and the content shot there must then be found by scrubbing through the video.
In view of this, embodiments of the present disclosure provide a video processing method, an apparatus, an electronic device, and a computer-readable storage medium that present the change of the positioning position on the map during the user's exercise and the video shot by the user simultaneously in the same video. The user thus obtains the movement position on the map and the associated video through a single, synchronized video. This realizes synchronized, associated playback of the movement position change and the video content, reduces the time the user spends retrieving positioning information and video, and helps to improve the user's exercise experience and enthusiasm for exercise.
Fig. 1 is a flowchart illustrating a video processing method according to an exemplary embodiment, and as shown in fig. 1, the video processing method according to the embodiment of the present disclosure mainly includes the following steps 101 to 105.
Step 101, shooting a video to obtain first video data.
In some embodiments, the video is captured by a camera carried by the user during the motion, wherein the camera may be a mobile device with a camera function. Mobile devices include, but are not limited to, smart phones, motion cameras, wearable devices with camera functionality, and the like. The content of the first video data includes environmental content, such as a scene, photographed by a photographing apparatus in an environment where the photographing apparatus is located.
Step 102, acquiring the positioning position of the shooting location in real time during the shooting of the video.
In some embodiments, the location of the shooting location may be obtained by a mobile device that shoots the video, wherein the mobile device that shoots the video has a location module therein, such as a satellite positioning system based navigation location module. For example, a smartphone, a sports camera, and a wearable device used for shooting a video acquire positioning information in real time as a positioning position of a shooting location by using a self-contained positioning module in a video shooting process.
The satellite positioning system includes, but is not limited to, the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), GLONASS, and the Galileo satellite navigation system (GALILEO).
In addition, the positioning position of the shooting location can be obtained through wireless communication network base stations, wireless fidelity (Wi-Fi) networks, and the like.
In some embodiments, if the mobile device shooting the video cannot itself acquire the positioning location, the positioning location of the shooting place can be acquired by another mobile device carried by the user that has this capability. In this case it must be ensured that the two devices, one shooting video and one acquiring positioning locations, are carried by the user and operate simultaneously; after shooting and location acquisition are complete, the video and the positioning information are aligned by time, ensuring that the video content shot at a given moment corresponds to the positioning location at that same moment.
Step 102 thus associates the acquired positioning positions with the video in time: the positioning position and the video content are linked by a common timestamp, i.e., the video content shot at a certain moment is the content shot at the positioning position of that moment.
Step 103, obtaining second video data of the movement of the shooting location on the map according to the map data and the positioning positions.
Since the positioning position of the shooting location may exist as latitude and longitude information, that information must be mapped onto a map in order to present the position visually. With step 103, the positioning positions acquired while moving are mapped onto the map and presented as the video of the second video data, so that the movement of the shooting location's positioning position on the map is presented in video form.
In some embodiments, the duration of the second video data is equal to the duration of the first video data. In some embodiments, the duration of the second video data is not equal to the duration of the first video data.
Fig. 2 is a flow diagram illustrating obtaining second video data in accordance with an illustrative embodiment. As shown in fig. 2, obtaining the second video data of the moving process of the shooting location in the map according to the map data and the positioning position in step 103 includes the following steps 201 to 202.
Step 201, sequentially presenting icons representing the shooting location at the positioning positions in the map data according to the acquisition time sequence of the positioning positions, and obtaining a plurality of pieces of video frame data whose contents comprise map images and the icons.
Each piece of video frame data contains an icon at only one positioning position; that is, each piece of video frame data presents only the icon at the positioning position of the moment corresponding to that frame, rather than icons at the positioning positions of multiple moments simultaneously.
Step 202, obtaining second video data based on the multiple pieces of video frame data and the obtaining time of the positioning position.
In some embodiments, the time length of the second video data is equal to the time length of the first video data, which ensures synchronized playback of the second video data and the first video data. However, because a fixed number of video frames must be played per unit time, and because the positioning positions are acquired periodically, the number of positioning positions may be far smaller than the number of video frames of the first video data; if only the video frame data containing the map image and icon content were played at the required frame rate, the time length of the resulting second video data would be far smaller than that of the first video data. Therefore, in some embodiments, the missing video frames may be supplemented by inserting transition frame data between the video frame data of adjacent positioning positions, as required by the number of video frames per unit time. In some embodiments, the transition frame data inserted between the video frame data of adjacent positioning positions is a duplicate of the earlier of the two frames. For example, let the video frame data of adjacent positioning positions be video frame data A and video frame data B, where A presents positioning position a, B presents positioning position b, and position a was acquired before position b; the transition frame data inserted between A and B may then be duplicate frames of video frame data A.
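A minimal sketch of this duplicate-frame padding, in Python (all names are illustrative assumptions; frames_per_fix is taken to hold one rendered map frame per positioning position):

    # Sketch: pad the map video by repeating each positioning position's
    # frame so that one acquisition period spans fps * period frames
    # (the original frame plus fps * period - 1 duplicate transition frames).
    def pad_with_duplicates(frames_per_fix, period_s, fps):
        """frames_per_fix: one rendered map frame per positioning position.
        Returns a frame list whose playback time matches the capture time."""
        repeat = int(period_s * fps)          # frames needed per position
        padded = []
        for frame in frames_per_fix:
            padded.extend([frame] * repeat)   # 1 original + (repeat - 1) copies
        return padded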
In some embodiments, the time length of the second video data is not equal to that of the first video data. Although they then cannot be played back synchronously as-is, after both are converted to the same preset video duration, the converted first and second video data can again be played synchronously. The conversion may extract video frame data at equal intervals from the pre-conversion video, with the interval set by the ratio between the number of frames before conversion and the number of frames at the preset duration; the extracted frames then form the video of the preset duration. Since the second video data covers all positioning positions of the shooting location over the whole shoot, even if its time length differs from that of the first video data, after the duration conversion the content of the first video data and the positioning positions of the second video data presented at the same moment remain associated.
The above describes inserting duplicated frames as transition frame data between the video frames of adjacent positioning positions; the embodiment of the present disclosure further proposes the alternative shown in fig. 3.
Fig. 3 is a flowchart illustrating a method for obtaining second video data according to an exemplary embodiment, where the obtaining of the second video data in step 202 based on a plurality of video frame data and an acquisition time of a positioning location, as shown in fig. 3, includes the following steps 301 to 303.
Step 301, arranging the plurality of pieces of video frame data according to the sequence of the acquisition time of the positioning position.
Step 302, inserting transition frame data between adjacent video frame data, wherein the content of the transition frame data includes an icon of a transition position between adjacent positioning positions in the map image content, the number of the transition frame data is determined according to the number of playing video frames in unit time and the acquisition period of the positioning positions, and the transition position is determined according to the number of video frames in unit time included in the video and the distance between the adjacent positioning positions.
For example, the adjacent video frame data are video frame data a and video frame data B, respectively, the video frame data a is the video frame data of localization position a, the video frame data B is the video frame data of localization position B, and the localization position a and the localization position B are the adjacent localization positions. The content of the transition frame data inserted between the video frame data a and the video frame data B contains an icon of a transition position between the localization position a and the localization position B presented in the map image.
In some embodiments, the number of transition frame data is determined according to the number of playing video frames in a unit time and the acquisition period of the positioning position. In some embodiments, the number of transition frame data is the product of the acquisition period of the localization position and the number of playing video frames per unit time minus one. For example, if the number of video frames played per second is 24 frames per unit time, and the acquisition period of the positioning position is, for example, 1 second, then the number of transition frame data between the video frame data a and the video frame data B is 1 × 24-1=23 frames. For example, if the number of playing video frames per second is 24 frames per unit time, and the acquisition period of the positioning position is, for example, 3 seconds, then the number of transition frame data between video frame data a and video frame data B is 3 × 24-1=71 frames. For example, if the number of video frames played per second is 24 frames per unit time, and the acquisition period of the positioning position is, for example, 0.5 second, then the number of transition frame data between video frame data a and video frame data B is 0.5 × 24-1=11 frames.
In some embodiments, the transition position in each transition frame data is determined according to the number of played video frames in unit time, the acquisition period of the positioning position, and the distance between adjacent positioning positions. In some embodiments, the transition location in the transition frame data is determined by:
S_i = i × S_0 / (T × N)

where i is the index of the transition frame between two adjacent video frames (e.g., between video frame data A and video frame data B), S_0 is the distance between the adjacent positioning positions, T is the acquisition period of the positioning positions, N is the number of video frames played per unit time, and S_i is the distance between the transition position in the i-th transition frame and the earlier-acquired of the two adjacent positioning positions.
For example, with 24 video frames played per second and a positioning-position acquisition period of 1 second, where positioning position a and positioning position b are adjacent, position a was acquired first, video frame data A presents position a, and video frame data B presents position b: in the 1st transition frame between A and B, S_1 = S_0/24, i.e., the transition position lies at distance S_0/24 from position a; in the 2nd transition frame, S_2 = 2×S_0/24; and so on, up to the 23rd transition frame, where S_23 = 23×S_0/24, i.e., at distance S_0/24 from position b.
For example, with 24 video frames played per second and an acquisition period of 3 seconds, under the same assumptions: in the 1st transition frame, S_1 = S_0/72; in the 2nd, S_2 = 2×S_0/72; and so on, up to the 71st transition frame, where S_71 = 71×S_0/72, i.e., at distance S_0/72 from position b.
For example, with 24 video frames played per second and an acquisition period of 0.5 second, under the same assumptions: in the 1st transition frame, S_1 = S_0/12; in the 2nd, S_2 = 2×S_0/12; and so on, up to the 11th transition frame, where S_11 = 11×S_0/12, i.e., at distance S_0/12 from position b.
The above description is only for illustration and is not intended to limit the technical solutions of the present application.
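A minimal sketch of the transition-position interpolation described above, in Python (assuming straight-line interpolation between latitude/longitude positions; all names are illustrative):

    # Sketch: generate the transition positions between two adjacent
    # positioning positions, following S_i = i * S_0 / (T * N).
    def transition_positions(pos_a, pos_b, period_s, fps):
        """pos_a, pos_b: (lat, lon) of adjacent positioning positions,
        pos_a acquired first. Returns period_s * fps - 1 transition positions."""
        total = int(period_s * fps)              # T * N frames per period
        out = []
        for i in range(1, total):                # transition frames 1 .. T*N - 1
            frac = i / total                     # S_i / S_0 = i / (T * N)
            lat = pos_a[0] + frac * (pos_b[0] - pos_a[0])
            lon = pos_a[1] + frac * (pos_b[1] - pos_a[1])
            out.append((lat, lon))
        return out

    # 1-second acquisition period at 24 frames per second -> 23 transitions
    print(len(transition_positions((31.20, 121.40), (31.21, 121.41), 1, 24)))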
Step 303, arranging the plurality of pieces of video frame data and the plurality of pieces of transition frame data according to a time sequence, and obtaining the second video data according to the number of video frames played in unit time.
In this way, since positioning positions are acquired throughout the entire shoot, the time length of the second video data obtained by the above processing from the positioning positions and map data equals the time length of the first video data obtained by shooting. If the second and first video data are then played synchronously, the content of the first video data and the positioning information of the second video data play in association: the current frame of the first video data shows the content shot at the positioning position shown in the current frame of the second video data.
In embodiments where only the icon is presented in the second video data, the movement track of the shooting location can be determined only by browsing the entire second video data; it cannot be read off a single video frame at a given moment, which may leave the presentation feeling sparse and degrade the user experience. Accordingly, in some embodiments, the movement track of the shooting location may be presented in the second video data based on the positioning positions. Fig. 4 is a flowchart illustrating presenting a movement track according to an exemplary embodiment. As shown in fig. 4, in some embodiments, obtaining the second video data of the movement of the shooting location on the map according to the map data and the positioning positions in step 103 may further include the following steps 401 to 402.
Step 401, obtaining a moving track of the shooting location in the map according to the positioning location and the map data.
In some embodiments, the movement track may be composed of a plurality of line segments connecting adjacent positioning positions. In some embodiments, the movement track may instead be an arc track connecting the positioning positions; compared with a track made of line segments, an arc track transitions smoothly. A track formed from line segments changes direction sharply at each vertex, whereas a user's real movement track, formed as the user continuously adjusts the direction of travel, changes direction smoothly, so the arc track is closer to the real movement track.
Step 402, presenting the movement track in the video frame data.
In some embodiments, the movement trajectory may be presented in each video frame of the second video data.
By presenting the movement track in the video frame data in this way, playing the second video data directly summarizes, within the video content, all positions passed during the entire movement, satisfying the need for an overall overview of the movement.
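A minimal sketch of rendering such a track onto a map frame, in Python with OpenCV (the projection of positioning positions to pixel coordinates is assumed to happen elsewhere; the color and thickness are illustrative):

    import cv2
    import numpy as np

    # Sketch: draw the accumulated movement track as a polyline on a map frame.
    def draw_track(map_frame, pixel_points):
        """pixel_points: positioning positions already projected to (x, y)."""
        pts = np.array(pixel_points, dtype=np.int32).reshape(-1, 1, 2)
        out = map_frame.copy()
        cv2.polylines(out, [pts], isClosed=False, color=(0, 0, 255),
                      thickness=3, lineType=cv2.LINE_AA)
        return out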
Fig. 5 is a flowchart illustrating obtaining sub-video data according to conditions according to an exemplary embodiment, and as shown in fig. 5, obtaining second video data of a moving process of a shooting location in a map according to map data and a positioning position in step 103 may further include the following steps 501 to 503.
Step 501, determining a moving range according to all positioning positions obtained in the shooting process.
In some embodiments, step 501 may further comprise:
determining, from all the positioning positions obtained during the shooting, the two positioning positions with the largest distance between them; and determining the movement range from those two positioning positions.
For example, if the user's movement while shooting approximates a straight line, the two positioning positions with the largest distance between them may be the start position and the end position.
As another example, the movement of the user during the shooting may be in the form of movement within an area, such as a closed path in a playground or a park, in which case the movement range may be determined by comparing the distances between the respective positioning locations.
Step 502, under the condition that the movement range exceeds a preset range condition threshold, obtaining second video data comprising first sub-video data and second sub-video data, wherein the first sub-video data comprises map content containing all the positioning positions, and the second sub-video data comprises map content in a local range containing the positioning position, presented as the positioning position changes.
Step 503, obtaining second video data including only the first sub-video data under the condition that the movement range does not exceed the preset range condition threshold.
In this way, when the movement covers a long distance and a large range, the first sub-video data presents a global overview of the shooting process, i.e., map content that contains all the positioning positions; the second sub-video data presents local detail that follows the movement of the positioning position during the shoot, i.e., map content with detail information within a local range around the positioning position. The local range may be determined by a preset local-range parameter, a set distance centered on the positioning position: for example, with the parameter set to 100 meters, the second sub-video data contains detailed map content within 100 meters of the positioning position, presented as the positioning position changes. The local-range parameter may range from 50 meters to 1000 meters inclusive.
Whether the movement distance is too long and the range too large is determined by the preset range condition threshold, which can be set before the second video data is generated. Alternatively, an option to generate two sub-videos can be offered, so that, according to the user's selection, the second video data either includes both the first and second sub-video data or includes only the first sub-video data. The preset range condition threshold may be set to 1000 meters or more.
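A minimal sketch of this range check, in Python (the haversine distance and the 1000-meter threshold are illustrative assumptions, not a method prescribed by the disclosure):

    import math

    def haversine_m(p, q):
        """Great-circle distance in meters between (lat, lon) pairs."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(a))

    # Sketch: movement range as the largest pairwise distance between positions.
    def needs_two_sub_videos(positions, threshold_m=1000):
        return max(haversine_m(p, q) for p in positions for q in positions) > threshold_m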
Step 104, acquiring a preset video duration, and converting the first video data and the second video data into first video data of the preset video duration and second video data of the preset video duration.
And the preset video time length is less than the time lengths of the first video data and the second video data.
In some cases, there is a demand to shorten the video: for example, the video was shot for more than 1 hour, so the first and second video data are each longer than 1 hour, yet the desired playback duration may range from tens of seconds to a few minutes. To meet this demand, the embodiment of the present disclosure includes a step that converts the time lengths of the first and second video data.
Fig. 6 is a flowchart illustrating a method for obtaining first video data with a preset video duration according to an exemplary embodiment, where as shown in fig. 6, converting the first video data and the second video data to the first video data with the preset video duration and the second video data with the preset video duration in step 104 includes the following steps 601 to 603.
Step 601, determining the extraction number of the first video frames according to the ratio of the duration of the first video data to the duration of the preset video and the number of video frame data contained in the first video data.
For example, if the duration of the first video data is 1 hour (3600 seconds) and the preset video duration is 30 seconds, the ratio of the two is 120, meaning that a video composed of 1 frame extracted from every 120 frames of the first video data is a 30-second video. At 24 frames per second, the first video data contains 3600 × 24 = 86400 frames, so the first video frame extraction quantity is 86400 / 120 = 720 frames.
Step 602, extracting video frame data from the first video data at equal intervals according to the first video frame extraction quantity.
For example, with a first video frame extraction quantity of 720 frames, video frame data is extracted from the 86400 frames of the first video data at equal intervals, i.e., the 1st, 121st, 241st, ... frames.
Step 603, arranging the video frame data extracted from the first video data according to a time sequence, and obtaining the first video data with a preset video duration according to the number of the played video frames in unit time.
For example, 720 frames of video frame data extracted from the first video data are arranged in time sequence, and the first video data with the duration of 30 seconds is obtained according to the playing number of video frames of 24 frames per second.
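A minimal sketch of this equal-interval extraction, in Python (the numbers mirror the example above; all names are illustrative):

    # Sketch: choose frame indices so that playing the kept frames back at
    # the original frame rate yields the preset video duration.
    def extract_indices(n_frames, src_duration_s, preset_duration_s):
        ratio = src_duration_s / preset_duration_s      # e.g. 3600 / 30 = 120
        n_keep = int(n_frames / ratio)                  # e.g. 86400 / 120 = 720
        return [int(i * ratio) for i in range(n_keep)]  # 0, 120, 240, ...

    idx = extract_indices(n_frames=86400, src_duration_s=3600, preset_duration_s=30)
    print(len(idx), idx[:3])  # 720 [0, 120, 240]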
In addition to the above process of acquiring the first video data with the preset video duration, step 104 further includes a process of acquiring the second video data with the preset video duration.
Fig. 7 is a flowchart illustrating a method for obtaining second video data with a preset video duration according to an exemplary embodiment, where as shown in fig. 7, the step 104 of converting the first video data and the second video data into the first video data with the preset video duration and the second video data with the preset video duration includes the following steps 701 to 703.
Step 701, determining the second video frame extraction quantity according to the ratio of the duration of the second video data to the preset video duration and the number of video frame data contained in the second video data.
Step 702, extracting video frame data from the second video data at equal intervals according to the second video frame extraction quantity.
Step 703, arranging the video frame data extracted from the second video data according to a time sequence, and obtaining the second video data of the preset video duration according to the number of video frames played in unit time.
Since the second video data is obtained from the positioning positions acquired while shooting the first video data and has the same time length as the first video data, and the movement of the shooting location on the map in the second video data is synchronized with the content of the first video data, the first and second video data of the preset video duration obtained through steps 601 to 603 and steps 701 to 703 also have equal time lengths, and the movement of the shooting location on the map in the converted second video data remains synchronized with the content of the converted first video data.
Fig. 8 is a diagram illustrating first video data and second video data for obtaining a preset video duration according to an exemplary embodiment. As shown in fig. 8 in conjunction with fig. 6 and 7, the uniform sampling in fig. 8 refers to the process of extracting video frame data from the first video data and the second video data at equal intervals in step 602 and step 702, and the compression in fig. 8 corresponds to the contents in step 603 and step 703.
In some embodiments, when performing compression, an adjustment to the resolution of the video may also be further included.
In video data there may be frames of insufficient definition; if exactly such a frame is extracted, the display quality of the preset-duration video data suffers. Therefore, the embodiment of the present disclosure may further include a process of adjusting the extracted video frame data. Fig. 9 is a flowchart illustrating selecting video frame data according to an exemplary embodiment. As shown in fig. 9, after the video frame data is extracted from the first video data at equal intervals in step 602, the video processing method according to the embodiment of the present disclosure may further include the following steps 901 to 902.
Step 901, comparing the video frame data within the set frame number range before and after the extracted video frame data to obtain the video frame data with the highest definition.
The definition can be determined by extracting edge features from the video frame data to obtain an edge feature value representing definition: the larger the edge feature value, the clearer the frame. The edge feature value can be obtained by methods such as the Sobel operator.
Step 902, determining the video frame data extracted from the first video data according to the video frame data with the highest definition.
With this approach, the first video data of the preset duration and the second video data of the preset duration may end up differing slightly in what they present at a given moment; however, because the set frame number range is small, the synchronized presentation of the two is not noticeably affected.
Fig. 10 is a flowchart illustrating the determination of the video frame data with the highest definition according to an exemplary embodiment, and as shown in fig. 10, the comparing of the video frame data within the set frame number range before and after the extracted video frame data in step 901 to obtain the video frame data with the highest definition further includes the following steps 1001 to 1003.
Step 1001, video frame data within a set frame number range before and after the extracted video frame data is acquired.
Step 1002, extracting respective edge features from the acquired video frame data respectively, and obtaining an edge feature value of each video frame data.
In some embodiments, a sobel algorithm is used to extract respective edge feature values of the respective video frame data.
Step 1003, determining the video frame data with the largest edge feature value as the video frame data with the highest definition.
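A minimal sketch of this sharpness comparison, in Python with OpenCV (scoring each frame by its mean Sobel gradient magnitude; the window size k is an illustrative assumption):

    import cv2
    import numpy as np

    def sharpness(frame):
        """Edge feature value of a frame: mean Sobel gradient magnitude."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        return float(np.mean(np.sqrt(gx * gx + gy * gy)))

    # Sketch: pick the sharpest frame within +/- k frames of an extracted frame.
    def sharpest_in_window(frames, center, k=2):
        lo, hi = max(0, center - k), min(len(frames) - 1, center + k)
        return max(range(lo, hi + 1), key=lambda i: sharpness(frames[i]))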
Step 105, obtaining, according to the first video data of the preset video duration and the second video data of the preset video duration, third video data that synchronously presents the content of the first video data and the content of the second video data.
In some embodiments, step 105 directly splices the first video data of the preset video duration and the second video data of the preset video duration together to obtain the third video data.
Fig. 11 is a flowchart illustrating obtaining third video data according to an exemplary embodiment. As shown in fig. 11, obtaining, in step 105, third video data that synchronously presents the content of the first video data and the content of the second video data from the first video data of the preset video duration and the second video data of the preset video duration includes the following steps 1101 to 1105.
Step 1101, determining the position and size of the first video data with the preset video duration in the third video data and the position and size of the second video data with the preset video duration in the third video data according to the preset layout condition.
Step 1102, adjusting the size of the first video data of the preset video duration according to the size of the first video data of the preset video duration in the third video data.
Step 1103, adjusting the size of the second video data of the preset video duration according to the size of the second video data of the preset video duration in the third video data.
Step 1104, combining the size-adjusted first video data of the preset video duration and the size-adjusted second video data of the preset video duration according to the position of the first video data of the preset video duration in the third video data and the position of the second video data of the preset video duration in the third video data.
Step 1105, obtaining third video data based on the combined first video data with the preset video duration and the second video data with the preset video duration.
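Steps 1101 to 1105 can be sketched as a per-frame composition, assuming each stream's position and size are given as (x, y, w, h) rectangles from the preset layout; the names and canvas size are illustrative assumptions.

```python
import cv2
import numpy as np

def compose_frame(frames: list[np.ndarray],
                  rects: list[tuple[int, int, int, int]],
                  canvas_size: tuple[int, int] = (720, 1280)) -> np.ndarray:
    """Resize each input frame to its (x, y, w, h) rectangle and paste it onto
    a black canvas; later entries are drawn on top, giving picture-in-picture."""
    w, h = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    for frame, (x, y, rw, rh) in zip(frames, rects):
        canvas[y:y + rh, x:x + rw] = cv2.resize(frame, (rw, rh))  # steps 1102 to 1104
    return canvas
```

Running compose_frame over the two duration-converted streams frame by frame and writing the results out corresponds to step 1105.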
Fig. 12A is a diagram illustrating a first layout structure of third video data according to an exemplary embodiment. As shown in fig. 12A, in the first layout structure, the third video data 1200 includes a first video area 1201 and a second video area 1202, where the first video area 1201 is located below the second video area 1202. The first video area 1201 presents the first video data of the preset video duration, and the second video area 1202 presents the second video data of the preset video duration. Based on the embodiment shown in fig. 12A, the positions of the first video area 1201 and the second video area 1202 can also be swapped.
Fig. 12B is a diagram illustrating a second layout structure of third video data according to an exemplary embodiment. As shown in fig. 12B, in the second layout structure, the third video data 1200 includes a first video area 1201 and a second video area 1202. The second video area 1202 occupies the entire area of the third video data 1200; the first video area 1201 is smaller than the second video area 1202 and is arranged picture-in-picture at the lower left corner of the second video area 1202. The first video area 1201 presents the first video data of the preset video duration, and the second video area 1202 presents the second video data of the preset video duration. Based on the embodiment shown in fig. 12B, the first video area 1201 can also be arranged picture-in-picture at other positions of the second video area 1202, for example the upper left, upper right, or lower right corner, and the positions of the first video area 1201 and the second video area 1202 can also be swapped.
Fig. 12C is a diagram illustrating a third layout structure of third video data, according to an example embodiment. As shown in fig. 12C, in the case that the second video data includes first sub video data and second sub video data, the third video data 1200 includes a first video area 1201, a second video area 1202 and a third video area 1203 in the third layout structure. The first video area 1201 presents the first video data of the preset video duration, the second video area 1202 presents the first sub video data, and the third video area 1203 presents the second sub video data. The second video area 1202 occupies the entire area of the third video data 1200; the first video area 1201 and the third video area 1203 are both smaller than the second video area 1202 and are arranged picture-in-picture at its lower left and lower right corners, respectively. Based on the embodiment shown in fig. 12C, the first video area 1201 and/or the third video area 1203 may be arranged picture-in-picture at other positions of the second video area 1202, such as the upper left, upper right, or lower right corner, and the positions of any two or all three of the video areas 1201, 1202 and 1203 can also be swapped.
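Under the same assumptions, the three layouts of figs. 12A to 12C reduce to different rectangle lists for compose_frame above; the coordinates below are illustrative for a 720x1280 canvas.

```python
W, H = 720, 1280
# Fig. 12A: stacked halves (first video area below the second video area).
layout_12a = [(0, H // 2, W, H // 2),         # first video area, lower half
              (0, 0, W, H // 2)]              # second video area, upper half
# Fig. 12B: map fills the canvas, shot video as a lower-left picture-in-picture.
layout_12b = [(0, 0, W, H),                   # second video area (drawn first)
              (16, H - 336, 216, 320)]        # first video area, lower left
# Fig. 12C: full-range map plus two picture-in-picture areas.
layout_12c = [(0, 0, W, H),                   # second video area: first sub video
              (16, H - 336, 216, 320),        # first video area, lower left
              (W - 232, H - 336, 216, 320)]   # third video area, lower right
```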
In the video processing method of the embodiment of the disclosure, the positioning position of the shooting location is acquired in real time while the video is shot; second video data showing the shooting location moving across the map is obtained from the map data and the positioning positions; the second video data and the first video data obtained by shooting are both converted to the preset video duration; and third video data presenting the two synchronously is obtained. The changing positioning position of the user on the map during movement and the video the user shoots are thus presented in the same video, so the user can follow, through a single video, the movement position on the map together with the video content associated with it. This synchronous, associated playback of position changes and video content helps improve the user's exercise experience and enthusiasm for exercise.
Fig. 13 is a flowchart of a specific scene of a video processing method according to an exemplary embodiment, and fig. 14 is a step diagram of the same scene. As shown in figs. 13 and 14, this application scene mainly includes the following steps 1401 to 1419.
Step 1401, shooting a video to obtain first video data, and acquiring a positioning position of a shooting place in real time in the video shooting process.
Step 1402, sequentially presenting an icon representing the shooting location at each positioning position in the map data, in the order in which the positioning positions were acquired, to obtain a plurality of pieces of video frame data whose content includes a map image and the icon.
Step 1403, obtaining the movement track of the shooting location in the map according to the positioning positions and the map data.
Step 1404, presenting the movement track in the video frame data.
Wherein step 1403 and step 1404 are optional steps.
Step 1405, arranging the plurality of pieces of video frame data in the order in which the positioning positions were acquired.
Step 1406, inserting transition frame data between adjacent video frame data.
The content of the transition frame data includes icons at transition positions between adjacent positioning positions in the map image. The number of transition frame data is determined according to the number of video frames played per unit time and the acquisition period of the positioning positions; each transition position is determined according to the number of video frames per unit time in the video and the distance between the adjacent positioning positions. A sketch of this interpolation follows step 1407 below.
Step 1407, arranging the plurality of pieces of video frame data and the plurality of pieces of transition frame data in time order, and obtaining the second video data according to the number of video frames played per unit time.
Wherein the duration of the second video data is equal to the duration of the first video data.
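A rough sketch of the transition-position interpolation behind steps 1405 to 1407, assuming linear interpolation between consecutive fixes expressed in map coordinates; the function name and coordinate convention are assumptions.

```python
def transition_points(p0: tuple[float, float], p1: tuple[float, float],
                      fps: float, period_s: float) -> list[tuple[float, float]]:
    """Icon positions for the transition frames inserted between two adjacent
    fixes: the count follows from the playback frame rate and the acquisition
    period, each position from the straight-line span between the fixes."""
    n = max(int(round(fps * period_s)) - 1, 0)  # transition frames per interval
    return [(p0[0] + (p1[0] - p0[0]) * k / (n + 1),
             p0[1] + (p1[1] - p0[1]) * k / (n + 1))
            for k in range(1, n + 1)]
```

At 30 frames per second with a 1 s acquisition period, for example, 29 transition frames are inserted between each pair of fixes, so the second video data plays at the same rate and duration as the first.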
Step 1408, acquiring the preset video duration.
Step 1409, determining the first video frame extraction number according to the ratio of the duration of the first video data to the preset video duration and the number of video frame data included in the first video data (the arithmetic is sketched below, after step 1414).
Step 1410, extracting video frame data from the first video data at equal intervals according to the first video frame extraction number.
Optionally, after step 1410, the following may also be performed: acquiring the video frame data within the set frame number range before and after each extracted video frame data; extracting edge features from each acquired frame to obtain its edge feature value; determining the frame with the largest edge feature value as the frame with the highest definition; and determining the video frame data extracted from the first video data accordingly.
Step 1411, arranging the video frame data extracted from the first video data in time order, and obtaining the first video data of the preset video duration according to the number of video frames played per unit time.
Step 1412, determining the second video frame extraction number according to the ratio of the duration of the second video data to the preset video duration and the number of video frame data included in the second video data.
Step 1413, extracting video frame data from the second video data at equal intervals according to the second video frame extraction number.
Step 1414, arranging the video frame data extracted from the second video data in time order, and obtaining the second video data of the preset video duration according to the number of video frames played per unit time.
The preset video duration is less than the durations of the first video data and the second video data.
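The frame-count arithmetic of steps 1409 and 1412 amounts to the following sketch; the names are illustrative assumptions.

```python
def extraction_count(total_frames: int, duration_s: float, preset_s: float) -> int:
    """Frames to keep: the total scaled down by the ratio of the source
    duration to the preset video duration."""
    ratio = duration_s / preset_s   # e.g. a 60 s source over a 15 s preset gives 4
    return max(int(total_frames / ratio), 1)
```

A 60 s recording at 30 frames per second (1800 frames) compressed to a 15 s preset thus keeps 450 frames, one of every 4.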
Step 1415, according to the preset layout condition, determining the position and size of the first video data with the preset video duration in the third video data, and the position and size of the second video data with the preset video duration in the third video data.
Step 1416, adjusting the size of the first video data of the preset video duration according to the size of the first video data of the preset video duration in the third video data.
Step 1417, adjusting the size of the second video data of the preset video duration according to the size of the second video data of the preset video duration in the third video data.
Step 1418, combining the first video data with the preset video duration after the size adjustment and the second video data with the preset video duration after the size adjustment according to the position of the first video data with the preset video duration in the third video data and the position of the second video data with the preset video duration in the third video data.
Step 1419, obtaining third video data based on the combined first video data with the preset video duration and the second video data with the preset video duration.
Fig. 15 is a diagram illustrating another specific scene step of a video processing method according to an exemplary embodiment, and as shown in fig. 15, the specific application scene mainly includes the following steps 1501 to 1507.
Step 1501, shooting a video to obtain first video data, acquiring the positioning position of the shooting place in real time in the shooting process of the video, and then executing step 1502.
Step 1502, determining a moving range according to all positioning positions obtained in the shooting process, and then executing step 1503.
Step 1503, judging whether the movement range exceeds a preset range condition threshold (a sketch of this check follows step 1507); if so, performing step 1504, otherwise performing step 1505.
Step 1504, obtaining second video data including first sub video data and second sub video data, and then performing step 1506.
The first sub video data includes map content covering all the positioning positions, and the second sub video data includes map content of a local range around the positioning position, presented as the positioning position changes.
Step 1505, obtaining second video data including only the first sub video data, and then performing step 1506.
Step 1506, obtaining a preset video duration, converting the first video data and the second video data into the first video data with the preset video duration and the second video data with the preset video duration, and then executing step 1507.
The preset video duration is less than the durations of the first video data and the second video data.
Step 1507, according to the first video data with the preset video duration and the second video data with the preset video duration, third video data which synchronously present the content of the first video data with the preset video duration and the content of the second video data with the preset video duration is obtained.
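The branch in steps 1502 and 1503 can be sketched as below, assuming the fixes are (latitude, longitude) pairs and the preset range condition threshold is in metres; the metres-per-degree constants are the usual small-area approximation, not part of the disclosure.

```python
import math

def movement_exceeds(positions: list[tuple[float, float]],
                     threshold_m: float) -> bool:
    """True when the bounding span of all fixes, converted to metres,
    exceeds the preset range condition threshold."""
    lats = [p[0] for p in positions]
    lons = [p[1] for p in positions]
    mid = math.radians(sum(lats) / len(lats))
    dy = (max(lats) - min(lats)) * 111_320                  # m per degree latitude
    dx = (max(lons) - min(lons)) * 111_320 * math.cos(mid)  # shrinks with latitude
    return max(dx, dy) > threshold_m
```

When movement_exceeds(fixes, threshold) is True the flow takes step 1504 and produces both sub videos; otherwise step 1505 produces only the full-range map.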
Fig. 16 is a logical block diagram illustrating a video processing apparatus according to an exemplary embodiment, and as shown in fig. 16, the video processing apparatus includes a first video data obtaining module 1601, a positioning position obtaining module 1602, a second video data obtaining module 1603, a duration conversion module 1604, and a third video data obtaining module 1605.
The first video data obtaining module 1601 is configured to perform shooting of a video, resulting in first video data.
A positioning position obtaining module 1602 configured to perform obtaining a positioning position of a shooting location in real time during shooting of the video.
A second video data obtaining module 1603 configured to perform obtaining second video data of a moving process of the shooting place in the map according to the map data and the positioning position, wherein the duration of the second video data is equal to the duration of the first video data.
The duration conversion module 1604 is configured to perform acquiring a preset video duration, and convert the first video data and the second video data into a first video data of the preset video duration and a second video data of the preset video duration, where the preset video duration is smaller than the duration of the first video data and the second video data.
The third video data obtaining module 1605 is configured to obtain, according to the first video data of the preset video duration and the second video data of the preset video duration, third video data that synchronously presents the content of the first video data of the preset video duration and the content of the second video data of the preset video duration.
In some embodiments, the second video data obtaining module 1603 further comprises:
the video frame data acquisition sub-module is configured to sequentially present the icons representing the shooting places at the positioning positions in the map data according to the acquisition time sequence of the positioning positions to obtain a plurality of pieces of video frame data of which the contents comprise map images and the icons;
a second video data obtaining sub-module configured to obtain the second video data based on the plurality of pieces of video frame data and the acquisition time of the positioning positions.
In some embodiments, the second video data obtaining module 1603 further comprises:
the mobile track acquisition sub-module is configured to execute obtaining of a mobile track of the shooting place in the map according to the positioning position and the map data;
and the movement track presenting sub-module is configured to perform presentation of the movement track in the video frame data.
In some embodiments, the second video data obtaining sub-module further comprises:
a video frame data arrangement submodule configured to perform arrangement of a plurality of pieces of video frame data in order of acquisition time of the positioning positions;
a transition frame data insertion sub-module configured to insert transition frame data between adjacent video frame data, wherein the content of the transition frame data includes icons at transition positions between adjacent positioning positions in the map image, the number of transition frame data is determined according to the number of video frames played per unit time and the acquisition period of the positioning positions, and each transition position is determined according to the number of video frames per unit time in the video and the distance between the adjacent positioning positions;
and the second video data generation submodule is configured to arrange the plurality of pieces of video frame data and the transition frame data according to the time sequence and obtain second video data according to the number of the played video frames in the unit time.
In some embodiments, the second video data obtaining module 1603 comprises:
a moving range determination submodule configured to perform determination of a moving range from all the positioning positions obtained in the photographing process;
a sub video data obtaining sub module configured to perform: under the condition that the moving range exceeds a preset range condition threshold value, obtaining second video data comprising first sub video data and second sub video data, wherein the first sub video data comprises map content containing all positioning positions, and the second sub video data comprises map content in a local range containing the positioning positions and presented along with the change of the positioning positions; and obtaining second video data only comprising the first sub video data under the condition that the moving range does not exceed the preset range condition threshold value.
In some embodiments, the duration conversion module 1604 comprises:
the first video frame extraction quantity determining submodule is configured to determine the first video frame extraction quantity according to the proportion of the duration of the first video data to the preset video duration and the quantity of video frame data contained in the first video data;
a first video frame extraction sub-module configured to perform extraction of video frame data from the first video data at equal intervals according to a first video frame extraction number;
the first preset time duration video acquisition submodule is configured to arrange video frame data extracted from the first video data according to a time sequence, and obtain first video data with preset video time duration according to the number of playing video frames in unit time;
and,
the second video frame extraction quantity determining submodule is configured to determine the second video frame extraction quantity according to the proportion of the duration of the second video data to the preset video duration and the quantity of video frame data contained in the second video data;
a second video frame extraction sub-module configured to perform extraction of video frame data from the second video data at equal intervals according to a second video frame extraction number;
and the second preset time duration video acquisition submodule is configured to arrange the video frame data extracted from the second video data according to a time sequence and obtain the second video data with preset video time duration according to the number of the played video frames in unit time.
In some embodiments, the video processing apparatus further comprises:
the video frame data definition screening module is configured to compare video frame data in a set frame number range before and after the extracted video frame data to obtain video frame data with the highest definition;
and a video frame data determining module configured to determine, according to the video frame data with the highest definition, the video frame data extracted from the first video data.
In some embodiments, the video frame data sharpness screening module comprises:
a to-be-screened video frame data acquisition sub-module configured to acquire the video frame data within the set frame number range before and after the extracted video frame data;
the edge characteristic value acquisition submodule is configured to extract respective edge characteristics from the acquired video frame data respectively to obtain edge characteristic values of the video frame data;
and a highest-definition video frame data determining sub-module configured to determine the video frame data with the largest edge feature value as the video frame data with the highest definition.
In some embodiments, the third video data acquisition module 1605 includes:
a position and size determining submodule configured to perform determining, according to a preset layout condition, a position and size of first video data of a preset video duration in third video data and a position and size of second video data of the preset video duration in the third video data;
the first video data size adjusting sub-module is configured to adjust the size of the first video data of the preset video duration according to the size of the first video data of the preset video duration in the third video data;
the second video data size adjusting submodule is configured to adjust the size of the second video data with the preset video duration according to the size of the second video data with the preset video duration in the third video data;
the combining sub-module is configured to combine the first video data with the preset video duration after the size is adjusted and the second video data with the preset video duration after the size is adjusted according to the position of the first video data with the preset video duration in the third video data and the position of the second video data with the preset video duration in the third video data;
and a third video data acquisition sub-module configured to obtain the third video data based on the combined first video data of the preset video duration and the combined second video data of the preset video duration.
With regard to the video processing apparatus in the above-described embodiment, the specific manner in which each unit performs operations has been described in detail in the embodiment related to the video processing method, and will not be elaborated here.
It should be noted that the foregoing embodiments are illustrated by the division into the functional modules described above. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
Fig. 17 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure. The electronic device 1700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer.
The electronic device 1700 includes a processor 1701 and a memory 1702. The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the video processing methods provided by the various embodiments of the present disclosure.
In some embodiments, the electronic device 1700 can further include peripheral devices. The processor 1701, memory 1702 and peripheral devices may be connected by buses or signal lines. The peripheral device may include at least one of a radio frequency circuit, a touch display screen, a camera assembly, an audio circuit, a positioning assembly, and a power source. The radio frequency circuit is used for receiving and transmitting radio frequency signals, also called electromagnetic signals, and realizes information interaction between the electronic device 1700 and the outside. The display screen is used for displaying a graphical user interface. The camera assembly is used for acquiring images or videos. The audio circuit comprises a microphone and a loudspeaker and is used for collecting audio and playing the audio. The positioning component is used for positioning the current positioning position of the terminal. The power supply is used to power the various components in the electronic device 1700.
In some embodiments, the electronic device 1700 may also include one or more sensors. The one or more sensors include, but are not limited to, acceleration sensors, gyroscope sensors, pressure sensors, fingerprint sensors, optical sensors, and proximity sensors.
Those skilled in the art will appreciate that the above-described architecture is not intended to be limiting of the electronic device 1700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including at least one instruction, which is executable by a processor in a computer device to perform the video processing method in the above embodiments, is also provided.
Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, and the non-transitory computer-readable storage medium may include a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like, for example.
In an exemplary embodiment, a computer program product is also provided, which includes one or more instructions executable by a processor of a computer device to perform the video processing methods provided by the various embodiments described above.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A video processing method, comprising:
shooting a video to obtain first video data;
acquiring the positioning position of a shooting place in real time in the shooting process of the video;
obtaining second video data of the shooting place in the moving process of the map according to the map data and the positioning position;
acquiring preset video time, and converting the first video data and the second video data into first video data with the preset video time and second video data with the preset video time;
and obtaining third video data which synchronously presents the content of the first video data and the content of the second video data according to the first video data with the preset video duration and the second video data with the preset video duration.
2. The video processing method according to claim 1, wherein said obtaining second video data of a moving process of the shooting location in the map according to the map data and the positioning position comprises:
sequentially presenting icons representing shooting places at the positioning positions in the map data according to the acquisition time sequence of the positioning positions to obtain a plurality of pieces of video frame data of which the contents comprise map images and the icons;
and obtaining the second video data based on the plurality of pieces of video frame data and the acquisition time of the positioning position.
3. The video processing method according to claim 2, wherein the obtaining second video data of a moving process of the shooting location in the map according to the map data and the positioning position further comprises:
obtaining a moving track of the shooting place in the map according to the positioning position and the map data;
and presenting the movement track in the video frame data.
4. The video processing method according to claim 2, wherein said deriving the second video data based on the plurality of video frame data and the acquisition time of the positioning location comprises:
arranging the plurality of pieces of video frame data according to the sequence of the acquisition time of the positioning positions;
inserting transition frame data between the adjacent video frame data, wherein the content of the transition frame data comprises the icon of a transition position between the adjacent positioning positions in the map image content, the number of the transition frame data is determined according to the number of playing video frames in unit time and the acquisition period of the positioning positions, and the transition position is determined according to the number of video frames in unit time contained in the video and the distance between the adjacent positioning positions;
and arranging the plurality of video frame data and the transition frame data according to a time sequence, and obtaining the second video data according to the number of playing video frames in the unit time.
5. The video processing method according to claim 1, wherein obtaining second video data of a moving process of the shooting location in the map according to the map data and the positioning position comprises:
determining a moving range according to all positioning positions obtained in the shooting process;
obtaining second video data comprising first sub video data and second sub video data under the condition that the moving range exceeds a preset range condition threshold value, wherein the first sub video data comprises map content containing all the positioning positions, and the second sub video data comprises map content in a local range containing the positioning positions, which is presented along with the change of the positioning positions;
obtaining the second video data including only the first sub-video data if the movement range does not exceed the preset range condition threshold.
6. The video processing method according to claim 1, wherein said converting the first video data and the second video data into the first video data with the preset video duration and the second video data with the preset video duration comprises:
determining the extraction quantity of first video frames according to the proportion of the duration of the first video data to the duration of the preset video and the quantity of video frame data contained in the first video data;
extracting video frame data from the first video data at equal intervals according to the first video frame extraction quantity;
arranging video frame data extracted from the first video data according to a time sequence, and obtaining the first video data with the preset video duration according to the number of playing video frames in unit time;
and,
determining the extraction quantity of second video frames according to the proportion of the duration of the second video data to the duration of the preset video and the quantity of video frame data contained in the second video data;
extracting video frame data from the second video data at equal intervals according to the second video frame extraction quantity;
and arranging the video frame data extracted from the second video data according to a time sequence, and obtaining the second video data with the preset video duration according to the number of playing video frames in unit time.
7. The video processing method of claim 6, wherein after said extracting video frame data from said first video data at equal intervals, said video processing method further comprises:
comparing the video frame data within the range of the set frame number before and after the extracted video frame data to obtain the video frame data with the highest definition;
and determining the video frame data extracted from the first video data according to the video frame data with the highest definition.
8. The video processing method of claim 7, wherein comparing the video frame data within the range of the set number of frames before and after the extracted video frame data to obtain the video frame data with the highest definition comprises:
acquiring video frame data within a preset frame number range before and after the extracted video frame data;
respectively extracting respective edge characteristics from the acquired video frame data to obtain edge characteristic values of the video frame data;
and determining the video frame data with the maximum edge characteristic value as the video frame data with the highest definition.
9. The method according to claim 1, wherein the obtaining third video data that synchronously presents the content of the first video data and the content of the second video data according to the first video data with the preset video duration and the second video data with the preset video duration comprises:
determining the position and the size of the first video data with the preset video duration in the third video data and the position and the size of the second video data with the preset video duration in the third video data according to a preset layout condition;
adjusting the size of the first video data with the preset video duration according to the size of the first video data with the preset video duration in the third video data;
adjusting the size of the second video data with the preset video duration according to the size of the second video data with the preset video duration in the third video data;
combining the first video data with the preset video duration after the size is adjusted and the second video data with the preset video duration after the size is adjusted according to the position of the first video data with the preset video duration in the third video data and the position of the second video data with the preset video duration in the third video data;
and obtaining the third video data based on the combined first video data with the preset video duration and the combined second video data with the preset video duration.
10. A video processing apparatus, comprising:
the first video data acquisition module is configured to execute shooting of a video to obtain first video data;
a positioning position acquisition module configured to perform real-time acquisition of a positioning position of a shooting location in a shooting process of the video;
a second video data obtaining module configured to obtain second video data of a moving process of the shooting place in a map according to map data and the positioning position, wherein the duration of the second video data is equal to that of the first video data;
the time length conversion module is configured to execute obtaining of a preset video time length and convert the first video data and the second video data into first video data of the preset video time length and second video data of the preset video time length, wherein the preset video time length is smaller than the time length of the first video data and the second video data;
the third video data acquisition module is configured to execute the first video data according to the preset video duration and the second video data according to the preset video duration to obtain third video data which synchronously present the content of the first video data and the content of the second video data.
11. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the video processing method of any of claims 1 to 9.
12. A computer-readable storage medium, wherein at least one instruction of the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to implement the video processing method of any of claims 1 to 9.
CN202211140274.1A 2022-09-20 2022-09-20 Video processing method, video processing device, electronic equipment and computer-readable storage medium Active CN115225944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211140274.1A CN115225944B (en) 2022-09-20 2022-09-20 Video processing method, video processing device, electronic equipment and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN115225944A true CN115225944A (en) 2022-10-21
CN115225944B CN115225944B (en) 2022-12-09

Family

ID=83617318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211140274.1A Active CN115225944B (en) 2022-09-20 2022-09-20 Video processing method, video processing device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115225944B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005123750A (en) * 2003-10-14 2005-05-12 Tsukuba Multimedia:Kk Video-map interlocking system and interlocking method thereof
CN102680992A (en) * 2012-06-05 2012-09-19 代松 Method for utilizing video files containing global positioning system (GPS) information to synchronously determine movement track
CN104394343A (en) * 2014-11-18 2015-03-04 国家电网公司 Target map track and history video synchronous playback control method
US20160300386A1 (en) * 2015-04-13 2016-10-13 International Business Machines Corporation Sychronized display of street view map and video stream
CN106534734A (en) * 2015-09-11 2017-03-22 腾讯科技(深圳)有限公司 Method and device for playing video and displaying map, and data processing method and system
WO2021088681A1 (en) * 2019-11-08 2021-05-14 深圳市道通智能航空技术股份有限公司 Long-distance flight-tracking method and device for unmanned aerial vehicle, apparatus, and storage medium
CN112866805A (en) * 2021-04-23 2021-05-28 北京金和网络股份有限公司 Video acceleration processing method and device and electronic equipment
CN113179432A (en) * 2021-04-19 2021-07-27 青岛海信移动通信技术股份有限公司 Display method and display equipment for video acquisition position
CN114449336A (en) * 2022-01-20 2022-05-06 杭州海康威视数字技术股份有限公司 Vehicle track animation playing method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Meizhen et al., "An Electronic Tour Guide System Based on Locatable Video", Bulletin of Surveying and Mapping (《测绘通报》) *

Also Published As

Publication number Publication date
CN115225944B (en) 2022-12-09

Similar Documents

Publication Publication Date Title
US10573351B2 (en) Automatic generation of video and directional audio from spherical content
US8773589B2 (en) Audio/video methods and systems
KR101946019B1 (en) Video processing apparatus for generating paranomic video and method thereof
US11587317B2 (en) Video processing method and terminal device
US8839131B2 (en) Tracking device movement and captured images
US20150139601A1 (en) Method, apparatus, and computer program product for automatic remix and summary creation using crowd-sourced intelligence
KR20190127865A (en) How to Assign Virtual Tools, Servers, Clients, and Storage Media
US10347298B2 (en) Method and apparatus for smart video rendering
US20190253747A1 (en) Systems and methods for integrating and delivering objects of interest in video
JP2020086983A (en) Image processing device, image processing method, and program
CN105453571A (en) Broadcasting providing apparatus, broadcasting providing system, and method of providing broadcasting thereof
JP2014236426A (en) Image processor and imaging system
WO2013076720A1 (en) A system and methods for personalizing an image
CN115225944B (en) Video processing method, video processing device, electronic equipment and computer-readable storage medium
GB2553659A (en) A System for creating an audio-visual recording of an event
CN112584036A (en) Holder control method and device, computer equipment and storage medium
WO2018027067A1 (en) Methods and systems for panoramic video with collaborative live streaming
CN112287771A (en) Method, apparatus, server and medium for detecting video event
WO2022153955A1 (en) Golf digest creation system, movement imaging unit, and digest creation device
TWI535282B (en) Method and electronic device for generating multiple point of view video
JP2020107196A (en) Image processing device or image processing server
TWI628626B (en) Multiple image source processing methods
TWI614683B (en) Method, system for providing location-based information in response to a speed, and non-transitory computer-readable medium
CN116264640A (en) Viewing angle switching method, device and system for free viewing angle video
CN113973169A (en) Photographing method, photographing terminal, server, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant