CN112911363A - Track video generation method, terminal device and computer-readable storage medium - Google Patents

Track video generation method, terminal device and computer-readable storage medium

Info

Publication number
CN112911363A
Authority
CN
China
Prior art keywords
track
information
video
track video
generation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110056004.1A
Other languages
Chinese (zh)
Other versions
CN112911363B (en)
Inventor
何岸
肖永兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DO Technology Co ltd
Original Assignee
DO Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DO Technology Co ltd
Priority to CN202110056004.1A
Publication of CN112911363A
Application granted
Publication of CN112911363B
Active legal-status (Current)
Anticipated expiration legal-status

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41: Structure of client; Structure of client peripherals
    • H04N21/426: Internal components of the client; Characteristics thereof
    • H04N21/42653: Internal components of the client; Characteristics thereof for processing graphics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439: Processing of audio elementary streams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508: Management of client data or end-user data
    • H04N21/4524: Management of client data or end-user data involving the geographical location of the client
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application discloses a track video generation method, a terminal device, and a computer-readable storage medium. The track video generation method includes: in response to an instruction to generate a track video, acquiring track information of a user and audio information to be added to the track video; establishing a correspondence between the track information and the audio information; and generating the track video according to the track information and the correspondence between the track information and the audio information. By establishing the correspondence between the track information and the audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more enjoyable for the user to watch.

Description

Track video generation method, terminal device and computer-readable storage medium
Technical Field
The present application relates to the field of motion monitoring technologies, and in particular, to a track video generation method, a terminal device, and a computer-readable storage medium.
Background
At present, many sports applications automatically record the exercise data of a user while the user is exercising, and the stored exercise data can be shown to the user only after corresponding processing, which helps the user understand his or her own exercise status more intuitively. Therefore, to give the user a more intuitive picture of his or her exercise, the related art extracts the user's movement track from the recorded exercise data and generates a movement track video.
However, the movement track video extracted from the exercise data in the related art is relatively monotonous and contains nothing that offers the user an additional visual or auditory experience, so the user experience is poor.
Disclosure of Invention
The present application provides at least a track video generation method, a terminal device, and a computer-readable storage medium.
A first aspect of the present application provides a track video generation method, where the track video generation method includes:
responding to a generation instruction of the track video, acquiring track information of a user and audio information added to the track video;
establishing a corresponding relation between the track information and the audio information;
and generating the track video according to the track information and the corresponding relation between the track information and the audio information.
In some embodiments, the track video generation method further includes:
responding to the generation instruction of the track video, ending the current motion task, and acquiring all track information of the user, wherein the track information comprises positioning information and motion information;
the step of establishing the corresponding relationship between the track information and the audio information includes:
and establishing a corresponding relation between the positioning information and the audio information, and establishing a corresponding relation between the motion information and the audio information.
In some embodiments, the audio information comprises an audio duration, and the motion information comprises a motion duration;
the step of establishing the corresponding relationship between the track information and the audio information includes:
dividing the positioning information into a plurality of track segments with the same time length according to the proportional relation between the motion time length and the audio time length;
and generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing the corresponding relation between the motion characteristic information and the track segment.
In some embodiments, the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information includes:
taking the audio time length as the video time length of the track video;
and generating the track video according to the video duration and the corresponding relation between the motion characteristic information and the track segment.
In some embodiments, after the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, and popping up a display window for displaying the motion characteristic information on a corresponding track segment.
In some embodiments, the track video generation method further comprises:
responding to a starting instruction of the movement task, and continuously acquiring the positioning information of the user according to a preset time period;
and deleting any positioning point whose distance to each of the two adjacent positioning points is greater than a preset distance.
In some embodiments, after the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, displaying track coordinates in the track video, and setting the center of the track coordinates at the center of the display interface.
In some embodiments, the step of displaying track coordinates in the track video comprises:
acquiring all track coordinates in the track video;
determining the scaling of the track video according to the proportional relation between the maximum distance in all the track coordinates and the display distance of the display interface;
and displaying the track coordinates in the track video according to the scaling.
In some embodiments, the step of displaying track coordinates in the track video comprises:
acquiring a starting track coordinate and/or an end track coordinate in the track coordinates;
and setting the avatar of the user on the starting track coordinate and/or the end track coordinate.
A second aspect of the present application provides a terminal device, which includes a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the track video generation method in the first aspect.
A third aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the track video generation method of the first aspect described above.
In the above scheme, the terminal device responds to a track video generation instruction by acquiring the user's track information and the audio information to be added to the track video, establishes a correspondence between the track information and the audio information, and generates the track video according to the track information and that correspondence. By establishing the correspondence between the track information and the audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more enjoyable for the user to watch.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an embodiment of a track video generation method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating a track video generation method according to another embodiment of the present disclosure;
FIG. 3 is a block diagram of an embodiment of a terminal device provided in the present application;
fig. 4 is a schematic diagram of a framework of another embodiment of a terminal device provided in the present application;
FIG. 5 is a block diagram of an embodiment of a computer-readable storage medium provided herein.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a track video generation method according to an embodiment of the present disclosure. The track video generation method may be executed, for example, by a terminal device, a server, or another processing device, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the track video generation method may be implemented by a processor calling computer-readable instructions stored in a memory. The terminal device of the embodiment of the present disclosure may also be an intelligent wearable device for displaying exercise information, such as a smart band or a smart watch.
Specifically, the track video generation method of the embodiment of the present disclosure may include the following steps:
step S11: and responding to a generation instruction of the track video, acquiring track information of a user and audio information added to the track video.
The user may tap a "start exercise" icon on the user interface of the terminal device's APP, and the APP then sends a start instruction for the exercise task to a wearable sports device, such as a sports band or a sports watch. From the start of the exercise, the terminal device begins to collect the user's track information, which may specifically include positioning information and motion information.
On the one hand, the terminal device APP obtains the user's positioning information during the exercise through a built-in positioning device. The positioning information refers to the user's position, which can be obtained through positioning calculation based on GPS (Global Positioning System), the BeiDou satellite navigation system, or another positioning system.
For example, after the exercise starts, the mobile phone APP may collect the user's positioning information through the phone's GPS module and store it in the APP's local database; the GPS module may sample the user's position once every two seconds and send it to the APP.
Furthermore, while collecting the positioning information, the terminal device APP can also correct the user's motion track. The correction may adopt an existing method, for example deleting any positioning point whose distance to each of its two adjacent positioning points is greater than a preset distance, which helps to improve the accuracy of the final track video.
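As a purely illustrative sketch of such a filter (the patent gives no code, and Kotlin is used here only for illustration), the following snippet assumes time-stamped latitude/longitude points and a hypothetical 100 m threshold, and drops only points that are far from both of their neighbours:

```kotlin
// Sketch only, not code from the patent. `TrackPoint`, `filterOutliers`, and the
// 100 m default threshold are illustrative assumptions.
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin
import kotlin.math.sqrt

data class TrackPoint(val lat: Double, val lon: Double, val timeMs: Long)

// Haversine distance between two points, in metres.
fun distanceMeters(a: TrackPoint, b: TrackPoint): Double {
    val r = 6_371_000.0
    val dLat = Math.toRadians(b.lat - a.lat)
    val dLon = Math.toRadians(b.lon - a.lon)
    val h = sin(dLat / 2).pow(2) +
        cos(Math.toRadians(a.lat)) * cos(Math.toRadians(b.lat)) * sin(dLon / 2).pow(2)
    return 2 * r * asin(sqrt(h))
}

fun filterOutliers(points: List<TrackPoint>, maxGapMeters: Double = 100.0): List<TrackPoint> {
    if (points.size < 3) return points
    val kept = mutableListOf(points.first())
    for (i in 1 until points.size - 1) {
        val farFromPrev = distanceMeters(points[i - 1], points[i]) > maxGapMeters
        val farFromNext = distanceMeters(points[i], points[i + 1]) > maxGapMeters
        // Keep the point unless it is far from both neighbours (an isolated GPS jump).
        if (!(farFromPrev && farFromNext)) kept += points[i]
    }
    kept += points.last()
    return kept
}
```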
On the other hand, the terminal device APP obtains the user's motion information from the wearable sports device. During the exercise, the APP may acquire the motion information at preset intervals, for example every 30 seconds or every minute, or it may acquire all of the motion information at once when the exercise ends.
The motion information refers to information detected by the motion sensors and biosensors during the user's exercise, or information computed from other detected data. Categories of motion information include, but are not limited to: heart rate, calories, exercise time, number of steps, time, distance, pace, average speed, and the like. Motion sensors include, but are not limited to, accelerometers, gyroscopes, geomagnetic sensors, and altimeters; biosensors include bioelectrode sensors, semiconductor biosensors, optical biosensors, and the like.
When the user taps a button on the APP's user interface to finish the exercise and generate a motion track video, the terminal device stops acquiring positioning information from the GPS module, and the wearable sports device stops collecting the user's motion information.
Further, the user can select the music to be added to the motion track video through the APP's user interface, and the terminal device obtains the audio information of the audio file the user has added. The selected music may be music preset on the terminal device or local music downloaded to the phone's memory.
Step S12: and establishing a corresponding relation between the track information and the audio information.
During the user's exercise, the terminal device's position sensor and the wearable device's motion sensors and biosensors generate positioning information and motion information associated with the exercise time. However, the exercise duration and the duration of the audio selected by the user will often differ, so the terminal device needs to establish a correspondence between the positioning information and the audio duration, and between the motion information and the audio duration.
Specifically, the terminal device may associate the positioning information and the motion information with the audio duration according to the proportional relationship between the motion duration and the audio duration.
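For illustration, such a proportional association can be expressed as a simple time mapping; the function name and the millisecond units below are assumptions, not taken from the patent:

```kotlin
// Sketch: map a moment of the workout onto the audio (video) timeline by the ratio
// audioDurationMs / motionDurationMs. Units (milliseconds) are an assumption.
fun motionTimeToAudioTime(elapsedMotionMs: Long, motionDurationMs: Long, audioDurationMs: Long): Long {
    require(motionDurationMs > 0) { "motion duration must be positive" }
    return elapsedMotionMs * audioDurationMs / motionDurationMs
}
```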
Step S13: and generating a track video according to the track information and the corresponding relation between the track information and the audio information.
The terminal device generates the corresponding motion track video according to the correspondence between the positioning information and the audio duration and the correspondence between the motion information and the audio duration.
Specifically, the terminal device can generate the motion track video by using a track playback function built into a map APP. For example, the terminal device uses the AMap (Gaode Map) API to set the GPS coordinates on a MovingPointOverlay (a path set), sets the duration of the track video via setTotalDuration, and then calls the startSmoothMove() method to play the track video. The map API is a set of map-service application interfaces provided free of charge to developers; it offers basic map display, search, positioning, forward/reverse geocoding, route planning, LBS cloud storage and retrieval, and so on, and is suitable for developing map applications for PCs, mobile devices, and servers across multiple operating systems.
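A rough sketch of this playback step is shown below. It assumes the AMap Android SDK utilities named in the text; the package paths, the marker setup, and the exact method signatures depend on the SDK version and should be treated as assumptions rather than a verified listing:

```kotlin
// Sketch of the playback step: move a marker along the recorded track over the chosen
// video duration. Class and method names follow the AMap SDK utilities mentioned above
// (MovingPointOverlay, setTotalDuration, startSmoothMove); verify against the SDK
// version actually used.
import com.amap.api.maps.AMap
import com.amap.api.maps.model.LatLng
import com.amap.api.maps.model.Marker
import com.amap.api.maps.model.MarkerOptions
import com.amap.api.maps.utils.overlay.MovingPointOverlay

fun playTrackVideo(aMap: AMap, trackPoints: List<LatLng>, videoDurationSeconds: Int) {
    val marker: Marker = aMap.addMarker(MarkerOptions().position(trackPoints.first()))
    val overlay = MovingPointOverlay(aMap, marker)
    overlay.setPoints(trackPoints)                  // GPS coordinates of the track
    overlay.setTotalDuration(videoDurationSeconds)  // playback length = audio length
    overlay.startSmoothMove()                       // animate the marker along the track
}
```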
In the embodiment of the present disclosure, the terminal device responds to a track video generation instruction by acquiring the user's track information and the audio information to be added to the track video, establishes a correspondence between the track information and the audio information, and generates the track video according to the track information and that correspondence. By establishing the correspondence between the track information and the audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more enjoyable for the user to watch.
In addition, step S12 and step S13 of the track video generation method in the foregoing embodiment can be implemented in other ways; please refer to fig. 2, which is a schematic flowchart of another embodiment of the track video generation method provided in the present application.
Specifically, the track video generation method of the present embodiment includes the following steps:
step S21: and dividing the positioning information into a plurality of track segments with the same time length according to the proportional relation between the motion time length and the audio time length.
The terminal device divides the motion track, formed by connecting the successive positioning points, into a plurality of track segments of equal duration according to the proportional relationship between the motion duration and the audio duration.
Specifically, the terminal device may divide the audio duration into a plurality of audio segments according to a preset time interval or a time interval entered by the user, and then divide the motion duration into the same number of track segments according to the proportional relationship between the motion duration and the audio duration. Since the positioning information is associated with the exercise time, the terminal device can assign each positioning point to its corresponding track segment.
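One possible way to perform this segmentation, reusing the hypothetical TrackPoint type from the earlier filtering sketch, is the following (illustrative only):

```kotlin
// Sketch: split time-stamped positioning points into `segmentCount` track segments of
// equal duration, measured from the first point's timestamp. Purely illustrative.
fun splitIntoSegments(points: List<TrackPoint>, segmentCount: Int): List<List<TrackPoint>> {
    if (points.isEmpty() || segmentCount <= 0) return emptyList()
    val start = points.first().timeMs
    val total = (points.last().timeMs - start).coerceAtLeast(1)
    val segmentLengthMs = (total / segmentCount).coerceAtLeast(1)
    val segments = List(segmentCount) { mutableListOf<TrackPoint>() }
    for (p in points) {
        val index = ((p.timeMs - start) / segmentLengthMs).toInt().coerceAtMost(segmentCount - 1)
        segments[index] += p
    }
    return segments
}
```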
Step S22: and generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing the corresponding relation between the motion characteristic information and the track segment.
The terminal device generates motion characteristic information from the correspondence between the positioning information and the motion information within each track segment, and establishes the correspondence between the motion characteristic information and the track segments. The motion characteristic information includes motion information related to an exercise target or user-specific physiological data: the target-related information includes, for example, reaching a target step count (10,000 steps), reaching a target distance (5 km), and calorie consumption, while the user-specific physiological data includes a heart rate zone, an exercise intensity zone, a maximum heart rate, and the like.
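For concreteness, a per-segment record tying motion characteristic information to a track segment might look like the following; the particular fields are examples, not a definition from the patent:

```kotlin
// Sketch: tie motion characteristic information to one track segment. The field set is
// illustrative (step count, calories, heart rate), not an exhaustive list from the patent.
data class MotionCharacteristics(
    val steps: Int,            // e.g. progress toward a 10,000-step target
    val calories: Double,      // calorie consumption within the segment
    val avgHeartRate: Int      // user-specific physiological data
)

data class TrackSegment(
    val points: List<TrackPoint>,               // positioning information for this slice
    val characteristics: MotionCharacteristics  // motion information for the same slice
)
```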
Step S23: and taking the audio time length as the video time length of the track video.
The terminal device sets the audio duration as the video duration of the track video, and scales the motion duration to this video duration according to the proportional relationship between the audio duration and the motion duration.
Step S24: and generating a track video according to the video duration and the corresponding relation between the motion characteristic information and the track segment.
After the terminal device generates the track video according to the video duration and the correspondence between the motion characteristic information and the track segments, it loads the track video on the APP's display interface and pops up, on each track segment, a display window showing the corresponding motion characteristic information. The user can thus listen to music while viewing the motion track and the motion characteristic values, which makes exercising more engaging.
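A small sketch of how the content of such a window could be selected during playback (the patent does not fix the UI widget), reusing the hypothetical TrackSegment type from above:

```kotlin
// Sketch: given the current playback position, pick the track segment whose motion
// characteristic information should be shown. Illustrative only.
fun characteristicsAt(
    playbackMs: Long,
    videoDurationMs: Long,
    segments: List<TrackSegment>
): MotionCharacteristics? {
    if (segments.isEmpty() || videoDurationMs <= 0) return null
    val index = (playbackMs * segments.size / videoDurationMs).toInt()
        .coerceIn(0, segments.size - 1)
    return segments[index].characteristics
}
```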
Furthermore, when the terminal device loads the track video on the display interface, it can display the track coordinates in the track video and place the center of the track coordinates at the center of the APP's display interface, so that the user can conveniently view the whole motion track. The terminal device can also identify the starting and/or ending track coordinate from the exercise time, and place the user's avatar on the starting and/or ending track coordinate.
Specifically, the terminal device determines the maximum transverse distance and the maximum longitudinal distance among the track coordinates, and then determines the scaling of the track video on the display interface according to the ratio between these distances and the display distances of the display interface. The terminal device then displays the track coordinates in the track video at that scaling.
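A minimal sketch of this fit-to-screen computation, again using the hypothetical TrackPoint type; treating longitude/latitude spans and screen dimensions as directly comparable after scaling is a simplification:

```kotlin
// Sketch: pick the largest uniform scale at which the whole track still fits the display,
// based on the maximum transverse (longitude) and longitudinal (latitude) spans.
fun computeScale(points: List<TrackPoint>, displayWidth: Double, displayHeight: Double): Double {
    require(points.isNotEmpty()) { "track must contain at least one point" }
    val lonSpan = points.maxOf { it.lon } - points.minOf { it.lon }
    val latSpan = points.maxOf { it.lat } - points.minOf { it.lat }
    // The scale is limited by whichever axis would overflow the display first.
    val scaleX = if (lonSpan > 0) displayWidth / lonSpan else Double.MAX_VALUE
    val scaleY = if (latSpan > 0) displayHeight / latSpan else Double.MAX_VALUE
    return minOf(scaleX, scaleY)
}
```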
It will be understood by those skilled in the art that, in the method of the present application, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Referring to fig. 3, fig. 3 is a schematic diagram of a framework of an embodiment of a terminal device provided in the present application. The terminal device 30 includes an acquisition module 31, a creation module 32, and a generation module 33.
The acquiring module 31 is configured to acquire, in response to a generation instruction of the track video, track information of a user and audio information added to the track video; the establishing module 32 is configured to establish a correspondence between the track information and the audio information; and the generating module 33 is configured to generate the track video according to the track information and the correspondence between the track information and the audio information.
Referring to fig. 4, fig. 4 is a schematic diagram of a framework of another embodiment of a terminal device provided in the present application. The terminal device 40 includes a memory 41 and a processor 42 coupled to each other, and the processor 42 is configured to execute program instructions stored in the memory 41 to implement the steps of any of the above-described embodiments of the track video generation method. In one particular implementation scenario, the terminal device 40 may include, but is not limited to, a microcomputer or a server; in addition, the terminal device 40 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 42 is configured to control itself and the memory 41 to implement the steps of any of the above-described embodiments of the track video generation method. The processor 42 may also be referred to as a CPU (Central Processing Unit). The processor 42 may be an integrated circuit chip having signal processing capabilities. The processor 42 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 42 may be jointly implemented by integrated circuit chips.
Referring to fig. 5, fig. 5 is a block diagram illustrating an embodiment of a computer-readable storage medium provided in the present application. The computer readable storage medium 50 stores program instructions 501 capable of being executed by a processor, the program instructions 501 being for implementing the steps of any of the above-described track video generation method embodiments.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (11)

1. A track video generation method is characterized by comprising the following steps:
responding to a generation instruction of the track video, acquiring track information of a user and audio information added to the track video;
establishing a corresponding relation between the track information and the audio information;
and generating the track video according to the track information and the corresponding relation between the track information and the audio information.
2. The track video generation method of claim 1,
the track video generation method further comprises the following steps:
responding to the generation instruction of the track video, ending the current motion task, and acquiring all track information of the user, wherein the track information comprises positioning information and motion information;
the step of establishing the corresponding relationship between the track information and the audio information includes:
and establishing a corresponding relation between the positioning information and the audio information, and establishing a corresponding relation between the motion information and the audio information.
3. The track video generation method of claim 2,
the audio information comprises audio time length, and the motion information comprises motion time length;
the step of establishing the corresponding relationship between the track information and the audio information includes:
dividing the positioning information into a plurality of track segments with the same time length according to the proportional relation between the motion time length and the audio time length;
and generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing the corresponding relation between the motion characteristic information and the track segment.
4. The track video generation method of claim 3,
the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information comprises the following steps:
taking the audio time length as the video time length of the track video;
and generating the track video according to the video duration and the corresponding relation between the motion characteristic information and the track segment.
5. The track video generation method of claim 4,
after the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, and popping up a display window for displaying the motion characteristic information on a corresponding track segment.
6. The track video generation method of claim 2,
the track video generation method further comprises the following steps:
responding to a starting instruction of the movement task, and continuously acquiring the positioning information of the user according to a preset time period;
and deleting any positioning point whose distance to each of the two adjacent positioning points is greater than a preset distance.
7. The track video generation method of claim 1,
after the step of generating the track video according to the track information and the corresponding relationship between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, displaying track coordinates in the track video, and setting the center of the track coordinates at the center of the display interface.
8. The track video generation method of claim 7,
the step of displaying the track coordinates in the track video includes:
acquiring all track coordinates in the track video;
determining the scaling of the track video according to the proportional relation between the maximum distance in all the track coordinates and the display distance of the display interface;
and displaying the track coordinates in the track video according to the scaling.
9. The track video generation method of claim 7,
the step of displaying the track coordinates in the track video includes:
acquiring a starting track coordinate and/or an end track coordinate in the track coordinates;
and setting the avatar of the user on the starting track coordinate and/or the end track coordinate.
10. A terminal device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the track video generation method of any one of claims 1 to 9.
11. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the track video generation method of any of claims 1 to 9.
CN202110056004.1A 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium Active CN112911363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110056004.1A CN112911363B (en) 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110056004.1A CN112911363B (en) 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112911363A true CN112911363A (en) 2021-06-04
CN112911363B CN112911363B (en) 2023-04-25

Family

ID=76113480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110056004.1A Active CN112911363B (en) 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112911363B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023011356A1 (en) * 2021-07-31 2023-02-09 花瓣云科技有限公司 Video generation method and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160375340A1 (en) * 2015-06-26 2016-12-29 Lawrence Maxwell Monari Sports Entertainment System for Sports Spectators
CN109359203A (en) * 2018-09-21 2019-02-19 北京卡路里信息技术有限公司 The processing method and processing device of motion profile video
CN110071862A (en) * 2019-03-19 2019-07-30 北京卡路里信息技术有限公司 The processing method and processing device of motion profile video

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160375340A1 (en) * 2015-06-26 2016-12-29 Lawrence Maxwell Monari Sports Entertainment System for Sports Spectators
CN109359203A (en) * 2018-09-21 2019-02-19 北京卡路里信息技术有限公司 The processing method and processing device of motion profile video
CN110071862A (en) * 2019-03-19 2019-07-30 北京卡路里信息技术有限公司 The processing method and processing device of motion profile video

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023011356A1 (en) * 2021-07-31 2023-02-09 花瓣云科技有限公司 Video generation method and electronic device

Also Published As

Publication number Publication date
CN112911363B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
US10580388B2 (en) Screen display method, apparatus and mobile terminal
US10176255B2 (en) Mobile terminal, recommendation system, and recommendation method
CN106168673B (en) Sensor information using method and electronic device using the same
WO2016097376A1 (en) Wearables for location triggered actions
US20070213178A1 (en) Mobile communication terminal
US11274932B2 (en) Navigation method, navigation device, and storage medium
CN103017767B (en) Use the method and apparatus of the position of the accuracy measurement terminal of measurement position
CN113794987B (en) Electronic device and method for providing location data
US20160370401A1 (en) Data analysis device, data analysis method and storage medium
US9788164B2 (en) Method and apparatus for determination of kinematic parameters of mobile device user
EP3077902A1 (en) Wearable map and image display
US10267644B2 (en) Map display device, computer readable storage medium, and map display method
CN112788583A (en) Equipment searching method and device, storage medium and electronic equipment
CN112911363B (en) Track video generation method, terminal device and computer readable storage medium
CN112150983B (en) Screen brightness adjusting method and device, storage medium and electronic equipment
US9933403B2 (en) Method for alarming gas and electronic device thereof
CN111751573B (en) Mobile terminal and moving direction determining method thereof
JP4229146B2 (en) NAVIGATION DEVICE AND PROGRAM
Szakacs-Simon et al. Android application developed to extend health monitoring device range and real-time patient tracking
US10257586B1 (en) System and method for timing events utilizing video playback on a mobile device
CN114912065A (en) Method and device for calculating movement distance, wearable device and medium
US20110191056A1 (en) Information service providing system, information service providing device, and method therefor
US20060206260A1 (en) Positioning apparatus and positioning method
JP6722424B2 (en) Point giving device, point giving system, point giving method and program
US10386185B2 (en) Information processing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant