CN112911363B - Track video generation method, terminal device and computer readable storage medium - Google Patents


Info

Publication number
CN112911363B
CN112911363B (application CN202110056004.1A; also published as CN112911363A)
Authority
CN
China
Prior art keywords
track
information
video
motion
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110056004.1A
Other languages
Chinese (zh)
Other versions
CN112911363A (en)
Inventor
何岸
肖永兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DO Technology Co ltd
Original Assignee
DO Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DO Technology Co ltd filed Critical DO Technology Co ltd
Priority to CN202110056004.1A
Publication of CN112911363A
Application granted
Publication of CN112911363B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • H04N21/42653Internal components of the client ; Characteristics thereof for processing graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4524Management of client data or end-user data involving the geographical location of the client
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a track video generation method, a terminal device, and a computer readable storage medium. The track video generation method includes: in response to a generation instruction for a track video, acquiring track information of a user and audio information added to the track video; establishing a correspondence between the track information and the audio information; and generating the track video according to the track information and the correspondence between the track information and the audio information. By establishing a correspondence between track information and audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more interesting for the user to watch.

Description

Track video generation method, terminal device and computer readable storage medium
Technical Field
The present disclosure relates to the field of motion monitoring technologies, and in particular, to a track video generating method, a terminal device, and a computer readable storage medium.
Background
At present, many sports applications automatically store a user's motion data while the user exercises. The stored motion data can help the user understand his or her own exercise intuitively only after it has been processed and presented. Therefore, to give the user a more intuitive view of the exercise, the related art extracts the user's motion trail from the motion data and generates a motion trail video.
However, the motion trail video extracted from the user's motion data in the related art is relatively plain and contains nothing that offers the user a visual or auditory experience, so the user experience is poor.
Disclosure of Invention
The present application provides at least a track video generation method, a terminal device, and a computer readable storage medium.
The first aspect of the present application provides a track video generating method, which includes:
in response to a generation instruction for the track video, acquiring track information of a user and audio information added to the track video;
establishing a corresponding relation between the track information and the audio information;
and generating the track video according to the track information and the corresponding relation between the track information and the audio information.
In some embodiments, the track video generating method further includes:
in response to the generation instruction for the track video, ending the current motion task and acquiring all track information of the user, wherein the track information includes positioning information and motion information;
the step of establishing the correspondence between the track information and the audio information includes:
and establishing a corresponding relation between the positioning information and the audio information, and establishing a corresponding relation between the motion information and the audio information.
In some embodiments, the audio information comprises an audio duration and the motion information comprises a motion duration;
the step of establishing the correspondence between the track information and the audio information includes:
dividing the positioning information into a plurality of track segments with the same duration according to the proportional relation between the motion duration and the audio duration;
and generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing the corresponding relation between the motion characteristic information and the track segment.
In some embodiments, the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information includes:
taking the audio time length as the video time length of the track video;
and generating the track video according to the video duration and the corresponding relation between the motion characteristic information and the track segment.
In some embodiments, after the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, and popping up a display window for displaying the motion characteristic information on a corresponding track segment.
In some embodiments, the track video generating method further comprises:
responding to a starting instruction of a movement task, and continuously acquiring positioning information of the user according to a preset time period;
and deleting any positioning point whose distance from each of its two adjacent positioning points is greater than a preset distance.
In some embodiments, after the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, displaying track coordinates in the track video, and setting the center of the track coordinates in the center of the display interface.
In some embodiments, the step of displaying track coordinates in the track video includes:
acquiring all track coordinates in the track video;
determining the scaling of the track video according to the proportional relation between the maximum distance in all track coordinates and the display distance of the display interface;
and displaying the track coordinates in the track video according to the scaling scale.
In some embodiments, the step of displaying track coordinates in the track video includes:
acquiring an initial track coordinate and/or an end track coordinate in the track coordinates;
and displaying the user's avatar at the initial track coordinate and/or the end track coordinate.
A second aspect of the present application provides a terminal device, including a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory, so as to implement the track video generating method in the first aspect.
A third aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the track video generating method of the first aspect described above.
According to the above scheme, the terminal device, in response to a generation instruction for the track video, acquires track information of the user and audio information added to the track video; establishes a correspondence between the track information and the audio information; and generates the track video according to the track information and that correspondence. By establishing a correspondence between track information and audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more interesting for the user to watch.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a flowchart illustrating an embodiment of a track video generating method provided in the present application;
FIG. 2 is a flowchart illustrating another embodiment of a track video generating method provided herein;
FIG. 3 is a schematic diagram of a frame of an embodiment of a terminal device provided in the present application;
fig. 4 is a schematic frame diagram of another embodiment of a terminal device provided in the present application;
FIG. 5 is a schematic diagram of a framework of one embodiment of a computer readable storage medium provided herein.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of the track video generating method provided in the present application. The track video generating method of the embodiments of the present disclosure may be executed by a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the track video generation method may be implemented by a processor invoking computer readable instructions stored in a memory. For example, the terminal device of the embodiments of the present disclosure may also be a smart wearable device, such as a smart band, a smart watch, or another device for displaying sports information.
Specifically, the track video generation method of the embodiment of the present disclosure may include the following steps:
step S11: and responding to the generation instruction of the track video, acquiring track information of the user and audio information added to the track video.
The user may click a sports start icon on the APP user interface of the terminal device, whereupon the terminal device APP sends a start instruction for a motion task to the wearable sports device, for example a fitness band or a sports watch. From the start of the exercise, the terminal device begins to collect the user's track information, which may specifically include positioning information and motion information.
On the one hand, the terminal device APP obtains positioning information during the user's exercise through a built-in positioning device. The positioning information is the user's position information and can be obtained by positioning calculation based on GPS, the BeiDou satellite navigation system, or another positioning system.
For example, after the exercise starts, a mobile phone APP can collect the user's positioning information through the phone's GPS module and store it in the APP's local database. The GPS module samples the user's position once every two seconds and sends the positioning information to the APP.
Furthermore, while collecting positioning information, the terminal device APP can also correct the user's motion trail using existing methods, for example by deleting any positioning point whose distance from each of its two adjacent positioning points is greater than a preset distance, which helps improve the accuracy of the final track video.
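As an illustration only (the patent gives no code; the haversine helper and the 50 m threshold below are assumptions for the example), the adjacent-point distance filter described above can be sketched in Python:

```python
import math

def haversine_m(p, q):
    """Approximate great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def drop_outliers(points, max_gap_m):
    """Drop any point that is farther than max_gap_m from every one of its
    adjacent points (the previous and the next sample)."""
    kept = []
    for i, p in enumerate(points):
        neighbours = points[max(i - 1, 0):i] + points[i + 1:i + 2]
        if any(haversine_m(p, q) <= max_gap_m for q in neighbours):
            kept.append(p)
    return kept
```

With, say, a 50 m threshold, a lone GPS glitch far off the path is dropped, while normally sampled points, which stay close to at least one neighbour, are kept.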
On the other hand, the terminal device APP acquires the user's motion information from the wearable sports device. During the user's exercise, the terminal device APP either fetches the motion information at a preset interval, for example every 30 seconds or every minute, or fetches all of the motion information at once when the exercise ends.
The motion information is generated from measurements taken by motion sensors and biosensors during the user's exercise, or is computed from other detected quantities. Categories of motion information include, but are not limited to: heart rate, calories, step count, time, distance, pace, average speed, exercise duration, and so on. The motion sensors include, but are not limited to, accelerometers, gyroscopes, geomagnetic sensors, and altimeters; the biosensors include bioelectrode sensors, semiconductor biosensors, optical biosensors, and the like.
When the user clicks the button on the APP user interface for ending the exercise and generating a motion trail video, the terminal device stops acquiring positioning information from the GPS module, and the wearable sports device stops collecting the user's motion information.
Further, the user can select music to add to the motion trail video through the user interface of the terminal device APP, and the terminal device acquires the audio information of the audio file the user has added. The selected music can come from music preset on the terminal device or from local music downloaded to the phone's storage.
Step S12: and establishing a corresponding relation between the track information and the audio information.
During the user's exercise, the position sensor of the terminal device and the motion sensors and biosensors of the wearable device generate positioning information and motion information associated with the exercise time. However, the exercise duration and the duration of the audio selected by the user are likely to differ, so the terminal device needs to establish a correspondence between the positioning information and the audio duration, and between the motion information and the audio duration.
Specifically, the terminal device may associate the positioning information and the motion information with the audio duration according to the proportional relation between the motion duration and the audio duration.
Step S13: and generating track video according to the track information and the corresponding relation between the track information and the audio information.
The terminal device generates the corresponding motion trail video according to the correspondence between the positioning information and the audio duration and the correspondence between the motion information and the audio duration.
Specifically, the terminal device may generate the motion trail video using a motion trail function built into a map APP. For example, using the AMap (Gaode Map) API, the terminal device sets the GPS coordinates on a MovingPointOverlay, sets the track video duration via setTotalDuration, and then calls the startSmoothMove() method to play the track video. The map API is a set of application interfaces based on map services provided to developers free of charge; it offers basic map display, search, positioning, reverse/geocoding, route planning, LBS cloud storage and retrieval, and other functions, and is suitable for developing map applications on PCs, mobile devices, servers, and other devices across multiple operating systems.
In the embodiments of the present disclosure, the terminal device, in response to a generation instruction for the track video, acquires track information of the user and audio information added to the track video; establishes a correspondence between the track information and the audio information; and generates the track video according to the track information and that correspondence. By establishing a correspondence between track information and audio information, the scheme generates and displays a track video in which the audio and the track are synchronized, making the track video more interesting for the user to watch.
In addition, steps S12 and S13 of the above embodiment can be implemented in other ways. With continued reference to fig. 2, fig. 2 is a schematic flowchart of another embodiment of the track video generating method provided in the present application.
Specifically, the track video generation method of the present embodiment includes the steps of:
step S21: and dividing the positioning information into a plurality of track segments with the same duration according to the proportional relation between the motion duration and the audio duration.
According to the proportional relation between the motion duration and the audio duration, the terminal device divides the motion track, formed by connecting the consecutive positioning points, into a plurality of track segments of equal duration.
Specifically, the terminal device may divide the audio duration into a plurality of audio segments according to a preset time interval or a time interval input by the user, and then divide the motion duration into the same number of track segments according to the proportional relation between the motion duration and the audio duration. Since each positioning point is associated with an exercise time, the terminal device can assign the positioning information to the corresponding track segments.
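Step S21 can be sketched as follows; this is an illustrative Python sketch under stated assumptions (a fixed audio-piece interval and a `(timestamp, point)` sample format), not the patent's implementation:

```python
import math

def split_into_segments(samples, motion_s, audio_s, interval_s):
    """Partition time-stamped positioning samples into equal-duration track
    segments. The audio is cut into pieces of interval_s seconds; the motion
    duration is divided into the same number of segments, so each track
    segment corresponds to one audio piece via the ratio motion_s / audio_s.

    samples: list of (t_seconds, point) with 0 <= t_seconds < motion_s.
    """
    n = math.ceil(audio_s / interval_s)   # number of audio pieces
    seg_len = motion_s / n                # matching motion-segment duration
    segments = [[] for _ in range(n)]
    for t, point in samples:
        idx = min(int(t // seg_len), n - 1)
        segments[idx].append((t, point))
    return segments
```

For example, a 10-minute run set to a 3-minute song cut into 60 s pieces yields three track segments of 200 s each.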
Step S22: and generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing the corresponding relation between the motion characteristic information and the track segment.
The terminal device generates motion characteristic information from the correspondence between the positioning information and the motion information within each track segment, and establishes a correspondence between the motion characteristic information and the track segment. The motion characteristic information includes motion information corresponding to an exercise target, such as target step count achievement data (e.g., 10,000 steps), target distance achievement data (e.g., 5 km), or calorie consumption, or user-specific physiological data, such as heart rate zones, exercise intensity zones, or maximum heart rate.
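A minimal sketch of the per-segment aggregation in step S22 follows; the field names and the chosen summary statistics are assumptions for illustration, not from the patent:

```python
def segment_features(segments):
    """For each track segment, summarise its motion samples into the
    characteristic values to be shown in the pop-up window, e.g. the
    maximum heart rate and the calories burned within that segment."""
    features = []
    for seg in segments:
        heart_rates = [info["heart_rate"] for _, info in seg]
        calories = sum(info["calories"] for _, info in seg)
        features.append({
            "max_heart_rate": max(heart_rates) if heart_rates else None,
            "calories": round(calories, 1),
        })
    return features
```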
Step S23: and taking the audio time length as the video time length of the track video.
The terminal device sets the audio duration as the video duration of the track video and scales the motion duration onto the video duration according to the proportional relation between the audio duration and the motion duration.
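The timeline scaling in step S23 can be illustrated with a small helper that maps a video timestamp back to the motion timeline and interpolates the position (an illustrative sketch; linear interpolation between positioning points is an assumption, not stated in the patent):

```python
def position_at_video_time(track, motion_s, audio_s, t_video):
    """track: list of (t_motion, lat, lon) sorted by time. The video length
    equals the audio length, so video time t_video maps back to motion time
    t = t_video * motion_s / audio_s; the position is linearly interpolated
    between the two positioning points surrounding t."""
    t = t_video * motion_s / audio_s
    if t <= track[0][0]:
        return track[0][1:]
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            return (la0 + f * (la1 - la0), lo0 + f * (lo1 - lo0))
    return track[-1][1:]
```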
Step S24: and generating the track video according to the video duration and the corresponding relation between the motion characteristic information and the track segment.
After the terminal device generates the track video according to the video duration and the correspondence between the motion characteristic information and the track segments, it loads the track video on the APP's display interface and pops up a display window showing the corresponding motion characteristic information on each track segment. The user can thus listen to music while viewing the motion track and the motion characteristic values, which makes exercising more engaging.
Further, when loading the track video on the display interface, the terminal device can display the track coordinates in the track video and place the center of the track coordinates at the center of the APP's display interface, so that the user can view the whole motion track at a glance. The terminal device may further obtain, according to the exercise time, the start track coordinate and/or the end track coordinate among the track coordinates, and set the user's avatar at the start track coordinate and/or the end track coordinate.
Specifically, the terminal device determines the maximum lateral distance and the maximum longitudinal distance among the track coordinates, determines the zoom scale of the track video on the display interface from the ratio of these distances to the display dimensions of the display interface, and then displays the track coordinates in the track video at that zoom scale.
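The zoom-scale computation described above can be sketched as follows (illustrative only; the track coordinates are assumed to be already projected into the display's planar units):

```python
def fit_scale(coords, display_w, display_h):
    """Return the zoom scale that fits the track's bounding box
    (maximum lateral span by maximum longitudinal span) into the
    display area, preserving the aspect ratio."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    span_x = max(xs) - min(xs)   # maximum lateral distance
    span_y = max(ys) - min(ys)   # maximum longitudinal distance
    scale_x = display_w / span_x if span_x else float("inf")
    scale_y = display_h / span_y if span_y else float("inf")
    return min(scale_x, scale_y)
```

The track is then drawn centred on the display, with every coordinate offset from the bounding-box centre multiplied by this scale.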
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Referring to fig. 3, fig. 3 is a schematic frame diagram of an embodiment of a terminal device provided in the present application. The terminal device 30 comprises an acquisition module 31, a setup module 32 and a generation module 33.
The acquiring module 31 is configured to acquire track information of a user and audio information added to the track video in response to a generation instruction of the track video; a building module 32, configured to build a correspondence between the track information and the audio information; and the generating module 33 is configured to generate the track video according to the track information and the corresponding relationship between the track information and the audio information.
Referring to fig. 4, fig. 4 is a schematic frame diagram of another embodiment of a terminal device provided in the present application. The terminal device 40 includes a memory 41 and a processor 42 coupled to each other; the processor 42 is configured to execute program instructions stored in the memory 41 to implement the steps of any of the track video generating method embodiments described above. In one specific implementation scenario, the terminal device 40 may include, but is not limited to, a microcomputer or a server; the terminal device 40 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
Specifically, the processor 42 is configured to control itself and the memory 41 to implement the steps of any of the track video generating method embodiments described above. The processor 42 may also be referred to as a CPU (central processing unit). The processor 42 may be an integrated circuit chip with signal processing capability, or a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 42 may be implemented jointly by multiple integrated circuit chips.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of a computer readable storage medium provided in the present application. The computer readable storage medium 50 stores program instructions 501 executable by a processor, the program instructions 501 for implementing the steps of any of the track video generation method embodiments described above.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of various embodiments is intended to highlight differences between the various embodiments, which may be the same or similar to each other by reference, and is not repeated herein for the sake of brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of modules or units is merely a logical functional division, and in actual implementation there may be other division manners; for example, units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices or units, and may be in electrical, mechanical, or other forms.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.

Claims (7)

1. A track video generation method, characterized in that the track video generation method comprises:
responding to the generation instruction of the track video, acquiring track information of a user and audio information added to the track video;
establishing a corresponding relation between the track information and the audio information;
generating the track video according to the track information and the corresponding relation between the track information and the audio information;
further, the track video generating method further includes:
responding to the generation instruction of the track video, ending the current motion task, and acquiring all track information of a user, wherein the track information comprises positioning information and motion information;
the step of establishing the correspondence between the track information and the audio information includes:
establishing a corresponding relation between the positioning information and the audio information, and establishing a corresponding relation between the motion information and the audio information;
further, the audio information includes an audio duration, and the motion information includes a motion duration;
the step of establishing the correspondence between the track information and the audio information includes:
dividing the positioning information into a plurality of track segments with the same duration according to the proportional relation between the motion duration and the audio duration;
generating motion characteristic information according to the corresponding relation between the positioning information and the motion information, and establishing a corresponding relation between the motion characteristic information and the track segments, wherein the motion characteristic information comprises motion information corresponding to a motion target or user-specific physiological data; the motion information corresponding to the motion target comprises target step count achievement data, target distance achievement data and calorie consumption, and the user-specific physiological data comprises a heart rate interval, a motion intensity interval and a maximum heart rate;
further, the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information includes:
taking the audio time length as the video time length of the track video;
generating the track video according to the corresponding relation between the video duration and the motion characteristic information and the track segment;
further, after the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information, the track video generating method further includes:
and loading the track video on a display interface, and popping up, on the corresponding track segment, a display window for displaying the motion characteristic information.
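The segmentation and pairing steps of claim 1 can be sketched as follows. This is a minimal illustration only; the function and variable names (`split_track`, `seg_play`, etc.) are hypothetical and not taken from the patent. The positioning information is divided into segments of equal motion duration, and each segment is then rendered over an equal share of the audio duration, reflecting the proportional relation between motion duration and audio duration.

```python
def split_track(points, motion_duration, audio_duration, n_segments):
    """Divide positioning points into n_segments track segments of equal
    motion duration; each segment is rendered over an equal share of the
    audio duration (motion/audio proportional relation).

    points: list of (t, lat, lon) tuples, t in seconds since motion start.
    Returns the list of segments and the playback time per segment.
    """
    seg_len = motion_duration / n_segments            # motion time per segment
    segments = [[] for _ in range(n_segments)]
    for t, lat, lon in points:
        idx = min(int(t // seg_len), n_segments - 1)  # clamp t == motion_duration
        segments[idx].append((t, lat, lon))
    seg_play = audio_duration / n_segments            # audio/video time per segment
    return segments, seg_play
```

Motion characteristic information (step counts, calories, heart rate intervals) could then be attached per segment index in the same loop.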
2. The track video generation method according to claim 1, wherein
the track video generation method further comprises the following steps:
responding to a starting instruction of a movement task, and continuously acquiring positioning information of the user according to a preset time period;
and deleting any positioning point whose distance from each of its two adjacent positioning points is greater than a preset distance.
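The filtering step of claim 2 can be sketched as below: a point farther than the preset distance from both of its raw neighbors is treated as a positioning drift spike and dropped. The name `drop_outliers` and the use of planar coordinates in meters are assumptions for illustration, not the patent's implementation.

```python
import math

def drop_outliers(points, max_gap_m):
    """Remove positioning points lying farther than max_gap_m from both of
    their adjacent points (typical GPS drift spikes); endpoints are kept.

    points: list of (x, y) planar coordinates in meters.
    """
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        if math.dist(cur, prev) > max_gap_m and math.dist(cur, nxt) > max_gap_m:
            continue                      # drift spike: both gaps too large
        kept.append(cur)
    kept.append(points[-1])
    return kept
```

With real GPS data, `math.dist` would be replaced by a geodesic distance such as the haversine formula.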
3. The track video generation method according to claim 1, wherein
after the step of generating the track video according to the track information and the corresponding relation between the track information and the audio information, the track video generating method further comprises the following steps:
and loading the track video on a display interface, displaying track coordinates in the track video, and setting the center of the track coordinates in the center of the display interface.
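The centering step of claim 3 can be sketched as a translation that moves the center of the track's bounding box to the center of the display interface. The function name `center_offset` and the screen-space coordinate convention are illustrative assumptions.

```python
def center_offset(track_coords, display_w, display_h):
    """Return the (dx, dy) translation that places the center of the
    track's bounding box at the center of the display interface."""
    xs = [x for x, _ in track_coords]
    ys = [y for _, y in track_coords]
    track_cx = (min(xs) + max(xs)) / 2
    track_cy = (min(ys) + max(ys)) / 2
    return display_w / 2 - track_cx, display_h / 2 - track_cy
```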
4. The track video generation method according to claim 3, wherein
the step of displaying the track coordinates in the track video includes:
acquiring all track coordinates in the track video;
determining the scaling of the track video according to the proportional relation between the maximum distance in all track coordinates and the display distance of the display interface;
and displaying the track coordinates in the track video according to the scaling scale.
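The scaling of claim 4 can be sketched as below: the maximum pairwise distance among the track coordinates is mapped onto the display distance, with a margin so the track does not touch the screen edge. The `margin` parameter and the O(n²) pairwise search are illustrative assumptions; a bounding-box span would be a cheaper approximation for large tracks.

```python
import math
from itertools import combinations

def display_scale(track_coords, display_distance, margin=0.9):
    """Scale factor mapping the maximum distance between any two track
    coordinates onto the usable display distance."""
    max_dist = max(math.dist(p, q) for p, q in combinations(track_coords, 2))
    if max_dist == 0:
        return 1.0                        # degenerate track: no scaling needed
    return margin * display_distance / max_dist
```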
5. The track video generation method according to claim 3, wherein
the step of displaying the track coordinates in the track video includes:
acquiring an initial track coordinate and/or an end track coordinate in the track coordinates;
and displaying the user's avatar at the initial track coordinate and/or the end track coordinate.
6. A terminal device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the track video generation method of any one of claims 1 to 5.
7. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the track video generation method of any of claims 1 to 5.
CN202110056004.1A 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium Active CN112911363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110056004.1A CN112911363B (en) 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN112911363A CN112911363A (en) 2021-06-04
CN112911363B true CN112911363B (en) 2023-04-25

Family

ID=76113480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110056004.1A Active CN112911363B (en) 2021-01-15 2021-01-15 Track video generation method, terminal device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112911363B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690272A (en) * 2021-07-31 2023-02-03 花瓣云科技有限公司 Video generation method and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9700781B2 (en) * 2015-06-26 2017-07-11 Lawrence Maxwell Monari Sports entertainment tracking system for mobile sports spectators
CN109359203B (en) * 2018-09-21 2022-09-06 北京卡路里信息技术有限公司 Method and device for processing motion trail video
CN110071862B (en) * 2019-03-19 2022-02-22 北京卡路里信息技术有限公司 Method and device for processing motion trail video

Also Published As

Publication number Publication date
CN112911363A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
US11188190B2 (en) Generating animation overlays in a communication session
US9356792B1 (en) Recording events for social media
EP3805982B1 (en) Gesture recognition method, apparatus and device
US11204959B1 (en) Automated ranking of video clips
CN108337368B (en) Method for updating positioning data and mobile terminal
CN108833262B (en) Session processing method, device, terminal and storage medium
US8327367B2 (en) Information service providing system, information service providing device, and method therefor
EP4167192A1 (en) Image processing method and apparatus for augmented reality, electronic device and storage medium
US11876763B2 (en) Access and routing of interactive messages
CN112788583B (en) Equipment searching method and device, storage medium and electronic equipment
CN108307039B (en) Application information display method and mobile terminal
CN108958634A (en) Express delivery information acquisition method, device, mobile terminal and storage medium
EP4165504A1 (en) Software development kit engagement monitor
CN112911363B (en) Track video generation method, terminal device and computer readable storage medium
CN112181141A (en) AR positioning method, AR positioning device, electronic equipment and storage medium
CN112150983B (en) Screen brightness adjusting method and device, storage medium and electronic equipment
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
CN111751573B (en) Mobile terminal and moving direction determining method thereof
US10223065B2 (en) Locating and presenting key regions of a graphical user interface
US8566060B2 (en) Information service providing system, information service providing device, and method therefor
CN114827651B (en) Information processing method, information processing device, electronic equipment and storage medium
US10257586B1 (en) System and method for timing events utilizing video playback on a mobile device
KR20170081911A (en) An apparatus and a method for performing a function of an electronic device corresponding to a location
CN111638819B (en) Comment display method, device, readable storage medium and system
CN114238859A (en) Data processing system, method, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant