CN108924761B - Information presentation method, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN108924761B
Authority
CN
China
Prior art keywords
position information
information
user
audio signal
difference
Prior art date
Legal status
Active
Application number
CN201810558051.4A
Other languages
Chinese (zh)
Other versions
CN108924761A (en)
Inventor
符博
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201810558051.4A
Publication of CN108924761A
Application granted
Publication of CN108924761B

Classifications

    • H04W 4/029 - Services making use of location information; location-based management or tracking services
    • H04M 1/7243 - User interfaces for mobile/cordless telephones with means for local support of applications that increase functionality, with interactive means for internal management of messages
    • H04M 1/72442 - User interfaces for mobile/cordless telephones with means for local support of applications that increase functionality, for playing music files
    • H04M 1/72454 - User interfaces for mobile/cordless telephones with means for adapting device functionality according to context-related or environment-related conditions
    • H04M 1/72457 - User interfaces for mobile/cordless telephones with means for adapting device functionality according to geographic location

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Environmental & Geological Engineering (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Navigation (AREA)

Abstract

An embodiment of the invention discloses an information presentation method comprising the following steps: acquiring first position information and second position information of a user, wherein the first position information and the second position information have a time correspondence; and determining an audio output effect based on at least a position difference between the first position information and the second position information, wherein the audio output effect is at least capable of characterizing the difference between the first position information and the second position information. Embodiments of the invention also disclose an electronic device and a computer storage medium.

Description

Information presentation method, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to an information presentation method, an electronic device, and a computer-readable storage medium.
Background
With advances in technology and changes in lifestyle, people can obtain all kinds of information closely related to daily life through electronic devices. For example, while running, a user can acquire and view running-related information, such as position, distance, and speed, through an electronic device. However, such information is presented through the display screen of the electronic device, so it is either viewed only after the run is finished or, if presented in real time during the run, it inevitably distracts the user's line of sight and creates danger. In addition, running is usually a solitary sport with no companion, so users easily feel bored and lonely. An information presentation method is therefore needed to solve the problem in the prior art that running information cannot be presented to the user in real time.
Disclosure of Invention
In view of this, embodiments of the present invention are intended to provide an information presenting method, an electronic device, and a computer-readable storage medium, which solve the problem in the prior art that running information cannot be presented to a user in real time.
The technical solutions of the invention are implemented as follows:
the embodiment of the invention provides an information presentation method, which comprises the following steps:
acquiring first position information and second position information of a user; wherein the first position information and the second position information have a time correspondence;
determining an audio output effect based on at least a position difference between the first position information and the second position information; wherein the audio output effect is at least capable of characterizing the difference between the first position information and the second position information.
An embodiment of the present invention further provides an electronic device, where the electronic device includes: an acquisition unit and a processing unit; wherein,
the acquiring unit is used for acquiring first position information and second position information of a user; wherein the first position information and the second position information have a time correspondence;
the processing unit is used for determining an audio output effect based on at least the position difference between the first position information and the second position information; wherein the audio output effect is at least capable of characterizing the difference between the first position information and the second position information.
An embodiment of the present invention further provides an electronic device, including: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of any of the above methods when executing the computer program.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the above methods.
According to the information presentation method, electronic device, and computer storage medium provided by the embodiments of the invention, first position information and second position information of a user are acquired, the first position information and the second position information having a time correspondence; an audio output effect is then determined based on at least the position difference between the first position information and the second position information, the audio output effect being capable of at least characterizing the difference between the first position information and the second position information. In this way, the electronic device can determine the position difference between two different pieces of position information of the user and present that difference to the user in real time through the audio output effect. The running information is thus presented to the user through sound, without interfering with the user's line of sight, which improves both the safety and the interest of the information presentation; in addition, the output sound effect gives the user a sense of companionship, as if someone were running alongside, dispelling boredom and loneliness.
Drawings
Fig. 1 is a schematic flowchart of an information presenting method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another information presentation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of acceleration measured in different directions by an accelerometer of an electronic device according to an embodiment of the invention;
FIG. 4 is a flow chart illustrating a further information presentation method according to an embodiment of the present invention;
FIG. 5 is a system architecture diagram according to an embodiment of the present invention;
fig. 6 is a schematic structural component diagram of an electronic device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
So that the manner in which the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
An embodiment of the present invention provides a method, as shown with reference to fig. 1, comprising the steps of:
step 101, acquiring first position information and second position information of a user.
The first position information and the second position information have a time correspondence.
In other embodiments of the present invention, step 101 of acquiring the first position information and the second position information of the user may be performed by an electronic device; here, the electronic device may be any type of mobile terminal device having an audio playing function; in practical applications, the electronic device includes but is not limited to: smart phones, tablet computers, smart wearable devices (smart devices that can be worn on the user's body), and so on. In practical applications, the electronic device may obtain the user's real-time position while running through the Global Positioning System (GPS) and thereby obtain the user's motion track. In this embodiment, the first position information and the second position information may be position information recorded while the user is running; the first position information differs from the second position information in that they are acquired by the electronic device from two different motion tracks. For example, the user completes a run on one day and the electronic device records all position information of that run as a first motion track; on the next day the user runs the same route again, and the electronic device likewise records all position information of the second run as a second motion track.
In addition, the time correspondence between the first position information and the second position information may mean that, within their respective motion tracks, the first position information and the second position information have the same time offset from the start time (or from the end time). For example, the first position information may be the position reached 1 minute 35 seconds after the start time of the first motion track, and the second position information the position reached 1 minute 35 seconds after the start time of the second motion track; in that case the first position information and the second position information have a time correspondence.
It should be noted that the time correspondence between the first location information and the second location information may be set in other manners, and may be set by a person skilled in the art, and the embodiment of the present invention is not limited specifically.
Step 102, determining an audio output effect based on at least a position difference of the first position information and the second position information.
Wherein the audio output effect is at least capable of characterizing the difference between the first location information and the second location information.
In other embodiments of the present invention, step 102 of determining the audio output effect based on at least the position difference between the first position information and the second position information may be performed by the electronic device. Since the first position information and the second position information of the user are acquired in step 101, the position difference between them can be obtained; specifically, acquiring the position difference between the first position information and the second position information may be implemented in the following two ways:
The first implementation is as follows:
When the first motion track corresponding to the first position information and the second motion track corresponding to the second position information follow the same route (that is, the user's two runs follow the same route), the electronic device determines the straight-line distance between the first position information and the second position information from their latitude and longitude, thereby obtaining the position difference between the first position information and the second position information.
The second implementation is as follows:
When the first motion track corresponding to the first position information and the second motion track corresponding to the second position information follow different routes (that is, the user's two runs follow different routes), the electronic device may synchronize the start time of the first motion track with the start time of the second motion track, use the time correspondence to obtain the distance from the first position information to the start position of the first motion track and the distance from the second position information to the start position of the second motion track, and then determine the relative distance between the first position information and the second position information, thereby obtaining the position difference between them.
In other embodiments of the present invention, determining the audio output effect may include determining parameters of the output audio such as phase and intensity; the audio may be a fixed sound produced during running, such as footstep sounds, breathing sounds, mouth sounds, and the like. In this embodiment, the intensity and phase of the audio can be adjusted according to the position difference so that the audio parameters received by the user's left and right ears differ, allowing the user to localize the direction from which the audio comes; in this embodiment, the output sound effect of the audio may be determined using head-related transfer functions (HRTFs).
Further, that the audio output effect is capable of at least characterizing the difference between the first position information and the second position information means that the parameters of the output audio enable the user to determine the source of the audio signal and thereby perceive the difference between the first position information and the second position information.
According to the information presentation method provided by the embodiment of the invention, first position information and second position information of a user are acquired, the first position information and the second position information having a time correspondence; an audio output effect is then determined based on at least the position difference between the first position information and the second position information, the audio output effect being capable of at least characterizing the difference between the first position information and the second position information. In this way, the electronic device can determine the position difference between two different pieces of position information of the user and present that difference to the user in real time through the audio output effect. The running information is thus presented to the user through sound, without interfering with the user's line of sight, which improves both the safety and the interest of the information presentation; in addition, the output sound effect gives the user a sense of companionship, as if someone were running alongside, dispelling boredom and loneliness.
Based on the foregoing embodiment, an embodiment of the present invention provides an information presentation method in which the first motion track and the second motion track of the user follow the same route (that is, the user's two runs follow the same route); referring to fig. 2, the method includes the following steps:
step 201, the electronic device determines track information and a first starting time of a first motion track within a preset time period.
The first motion track may be the track of the run the user is currently performing, that is, a motion track recorded in real time while the user runs; the preset time period may be the first 1 to 3 minutes after the user starts running; and the first start time may be the time at which the user's run, and hence the first motion track, begins.
Through step 201, the electronic device determines the user's motion track within the first 1 to 3 minutes of the run and the first start time at which the run began.
Step 202, the electronic device determines a matched second motion track based on track information of the first motion track in a preset time period.
In this embodiment, all of the user's motion tracks may be stored in the electronic device; the second motion track may be one of the user's historical motion tracks stored in the electronic device.
In other embodiments of the present invention, the electronic device obtains, through step 201, the user's real-time motion track within the first 1 to 3 minutes of the current run; the electronic device then selects, from all stored historical motion tracks, a historical track whose track information matches that of the first motion track within the preset time period, and takes it as the second motion track. In brief, within 1 to 3 minutes after the user starts running, the electronic device can use the motion track recorded so far to pick, from the historical motion tracks, a track that follows the same route as the current run, thereby obtaining the second motion track.
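One possible way to realize this matching, shown only as a sketch: compare the fixes of the first minutes of the current run with the start of each stored track and accept a track whose fixes stay within a small tolerance. The tolerance, the fix-by-fix alignment, and the reuse of the position_difference_m helper sketched earlier are all assumptions.

```python
def matches_route(current_prefix, stored_track, tolerance_m=30.0):
    """Treat two tracks as the same route if every fix in the first minutes of
    the current run stays within `tolerance_m` of the corresponding fix at the
    start of the stored track. Assumes both tracks are lists of (lat, lon)
    fixes sampled at comparable times; position_difference_m is the haversine
    helper sketched earlier, and the 30 m tolerance is arbitrary."""
    n = min(len(current_prefix), len(stored_track))
    return n > 0 and all(
        position_difference_m(*current_prefix[i], *stored_track[i]) <= tolerance_m
        for i in range(n)
    )

def find_second_track(current_prefix, history):
    """Return the first stored track matching the current route, or None."""
    return next((t for t in history if matches_route(current_prefix, t)), None)
```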
It should be noted that steps 201 to 202 describe the electronic device automatically acquiring a second motion track matching the first motion track; in other embodiments of the present invention, the user may instead select the second motion track manually. Specifically, the user selects, through the electronic device, a second motion track that follows the same route as the current run (i.e., the first motion track).
Step 203, the electronic device synchronizes a first start time of the first motion trajectory with a second start time of the second motion trajectory.
The second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
In other embodiments of the present invention, the second start time is the time at which the second motion track starts, and the first start time, corresponding to the first motion track, is the time at which the first motion track starts. Synchronizing the first start time and the second start time may mean that the electronic device maps the first start time to the second start time and records the resulting correspondence, or that both start times are reset to zero, that is, the first motion track and the second motion track are both counted from time 0.
For example, if the first start time is 17:37:05 and the second start time is 19:57:35, the electronic device may map the two start times to each other to complete time synchronization; the resulting correspondence is that the second start time lags the first start time by 2 hours, 20 minutes and 30 seconds. It should be noted that the first start time and the second start time may be synchronized in other manners, which can be chosen by a person skilled in the art; this embodiment of the present invention is not specifically limited in this respect.
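The offset in this example is simple clock arithmetic; the small sketch below computes it, with the 'HH:MM:SS' string format chosen purely for illustration.

```python
from datetime import datetime

def start_offset_seconds(first_start: str, second_start: str) -> float:
    """Offset between the two recorded start times, used to map a moment of the
    current run onto the matching moment of the historical run. Times are given
    as 'HH:MM:SS' strings purely for illustration."""
    fmt = "%H:%M:%S"
    return (datetime.strptime(second_start, fmt)
            - datetime.strptime(first_start, fmt)).total_seconds()

# Values from the example above:
# start_offset_seconds("17:37:05", "19:57:35") == 8430.0, i.e. 2 h 20 min 30 s.
```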
Step 204, the electronic device acquires first position information of the user, and determines a relative movement time of the first position information in the first movement track.
The first position information may be the user's current position, represented by the latitude and longitude of the place where the user currently is; the first position information is also one of the pieces of position information contained in the first motion track. The relative motion time may be the moment at which the first position information occurs, or the time interval between that moment and the first start time. For example, if timing starts from 0 seconds when the user begins running and the first position is the position reached after running for 5 minutes 30 seconds, then 5 minutes 30 seconds is the relative motion time.
Step 205, the electronic device acquires second position information corresponding to the second motion trajectory based on the relative motion time.
The second position information comprises historical position information of the user, represented by historical latitude and longitude; it can be understood as one of the pieces of position information contained in the second motion track.
In this embodiment, based on the relative motion time determined in step 204, the electronic device obtains the second position information corresponding to that time in the second motion track. Specifically, if the relative motion time is the moment at which the first position information occurs, the electronic device derives the moment at which the second position information occurs from the correspondence between the first start time and the second start time, and then reads the second position information from the second motion track; if the relative motion time is the time interval between the occurrence of the first position information and the first start time, the electronic device takes the position of the second motion track at the same interval after the second start time as the second position information.
Step 206, the electronic device determines a position difference between the first position information and the second position information based on the first position information and the second position information.
In practical applications, when the first motion track corresponding to the first position information and the second motion track corresponding to the second position information follow the same route (that is, the user's two runs follow the same route), the electronic device determines the straight-line distance between the first position information and the second position information from their latitude and longitude, thereby obtaining the position difference between them.
Step 207, the electronic device determines the direction and intensity of the step audio signal based on the position difference between the first position information and the second position information.
In this embodiment, since the first motion track and the second motion track follow the same route, the position difference between the first position information and the second position information can be obtained in step 206 from the latitude and longitude of the first position and of the second position.
In other embodiments of the present invention, the step audio signal may be obtained in one of two ways:
the first method is as follows: the electronic device generates a footstep audio signal based on the historical footstep acoustic frequency information.
Here, the historical footstep sound frequency information may be the step cadence from the user's historical running records. Specifically, while the user runs, acceleration information along the three spatial axes can be obtained through an accelerometer of the electronic device. In the coordinate system shown in fig. 3, the horizontal axis represents time and the vertical axis represents acceleration; the accelerometer of the electronic device can acquire the user's acceleration in the horizontal X direction, in the horizontal Y direction, and in the Z direction perpendicular to the ground. From the accelerometer data in the direction perpendicular to the ground, the user's step cadence while running can be obtained. As shown in fig. 3, the user took 5.5 steps during the recording period.
In other embodiments of the present invention, a sound resembling a footstep sound is generated based on the frequency of the user's footstep cadence.
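As a rough illustration of how the step cadence could be read off the vertical-axis accelerometer data of fig. 3, the sketch below counts upward threshold crossings; the detection algorithm, threshold, refractory window, and function names are assumptions rather than part of this embodiment.

```python
def count_steps(accel_z, threshold=1.5, refractory=10):
    """Count footfalls as upward threshold crossings in the vertical-acceleration
    trace, with a refractory window so one stride is not counted twice.
    `accel_z` is a list of samples in m/s^2 with gravity removed; the threshold
    and window are illustrative assumptions."""
    steps, last = 0, -refractory
    for i in range(1, len(accel_z)):
        if accel_z[i - 1] < threshold <= accel_z[i] and i - last >= refractory:
            steps += 1
            last = i
    return steps

def cadence_hz(accel_z, sample_rate_hz):
    """Steps per second, usable to time the generated footstep audio signal."""
    return count_steps(accel_z) * sample_rate_hz / len(accel_z) if accel_z else 0.0
```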
The second method comprises the following steps: the electronic equipment generates virtual footstep sound frequency information based on the current running frequency of the user; and generating a step audio signal based on the virtual step sound frequency information.
Here, the electronic device may obtain, through the accelerometer, the user's current acceleration along the three spatial axes in real time and, in the same manner and on the same principle, derive the user's current running cadence from the data in the direction perpendicular to the ground and generate the step audio signal.
In other embodiments of the present invention, the electronic device may also automatically generate a virtual footstep audio signal.
Further, the electronic device may determine the direction and intensity of the step audio signal based on the position difference between the first position information and the second position information. Specifically, the user's current position is the position corresponding to the first position information; from the obtained position difference, the spatial relationship between the position corresponding to the second position information and the user's current position can be derived. If the position corresponding to the second position information lies ahead of the user's current position, the direction from which the step audio signal is heard is set to the front; correspondingly, if it lies behind the user's current position, the direction is set to the rear. Preferably, the perceived direction of the step audio signal is controlled by adjusting its phase.
In addition, the closer the position corresponding to the second position information is to the user's current position, the greater the intensity of the step audio signal, that is, the louder the footstep sound; correspondingly, the farther it is from the user's current position, the smaller the intensity, that is, the quieter the footstep sound. Preferably, the electronic device stores a correspondence between distance and step audio signal intensity, and determines the intensity of the step audio signal by looking up that correspondence.
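One hypothetical form such a stored correspondence could take, with linear interpolation between entries; the table values and the linear model are illustrative assumptions.

```python
# A hypothetical stored correspondence between distance (meters) and footstep
# gain; all values are illustrative.
DISTANCE_TO_GAIN = [(0.0, 1.0), (5.0, 0.8), (20.0, 0.5), (50.0, 0.2), (100.0, 0.05)]

def step_gain(distance_m):
    """Closer historical position means a louder footstep sound."""
    points = DISTANCE_TO_GAIN
    if distance_m <= points[0][0]:
        return points[0][1]
    for (d0, g0), (d1, g1) in zip(points, points[1:]):
        if d0 <= distance_m <= d1:
            return g0 + (g1 - g0) * (distance_m - d0) / (d1 - d0)
    return points[-1][1]
```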
Preferably, the parameters of the footstep audio signal output may be determined by HRTFs.
In addition, in other embodiments of the present invention, step 207 can also be implemented through the following steps:
step 2071, the electronic device obtains gesture data of the user.
Wherein the pose data is capable of characterizing at least a facial orientation of the user.
Step 2072, the electronic device determines the strength of the step audio signal based on the position difference, and determines the direction of the step audio signal based on the head pose data and the position difference.
In practical applications, when the user's head turns, the positions of the user's ears change, so the direction from which the user perceives the audio to come is closely related to the positions of the ears; therefore, in this embodiment, the audio output effect also needs to be determined according to the user's posture.
Preferably, the user's posture data can be acquired through sensing devices, such as a gravity sensor, arranged in the earphone; the posture data may include data on the user's head rotation or face orientation.
In step 2072, after obtaining the position difference, the electronic device first determines the intensity of the step audio signal, that is, its loudness, based on the distance between the position corresponding to the second position information and the position corresponding to the first position information: the farther apart the two positions are, the smaller the intensity; the closer they are, the larger the intensity. Second, after acquiring the user's posture data, the electronic device determines the direction of the step audio signal from the user's face orientation and from the spatial bearing of the position corresponding to the second position information relative to the position corresponding to the first position information; here, the perceived direction of the step audio signal may be controlled by adjusting its phase.
For example, if the position corresponding to the second position information lies ahead of the position corresponding to the first position information and the user's face is turned to the left, the footstep sound should appear to the front right of the user; in this case, the step audio signal can be delivered to the right ear slightly earlier than to the left ear, so that the user acoustically perceives the footstep sound as coming from that bearing.
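The right-ear-first adjustment in this example can be expressed as an interaural time difference; the sketch below uses a crude spherical-head approximation in place of full HRTF filtering, and the head radius, angle convention, and function name are assumptions.

```python
import math

HEAD_RADIUS_M = 0.09       # illustrative average head radius
SPEED_OF_SOUND_M_S = 343.0

def interaural_delay_s(source_azimuth_deg, face_azimuth_deg):
    """Rough interaural time difference for a source at `source_azimuth_deg`
    (0 = straight ahead along the track) heard by a user whose face points at
    `face_azimuth_deg`. A positive result means the sound should reach the
    right ear first."""
    rel = math.radians(source_azimuth_deg - face_azimuth_deg)
    return HEAD_RADIUS_M / SPEED_OF_SOUND_M_S * math.sin(rel)

# The example above: historical position straight ahead (0 deg), face turned to
# the left (-90 deg) gives a delay of about +0.26 ms, so the right ear leads.
```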
Preferably, the output parameters of the step audio signal may be determined from the HRTFs.
Step 208, the electronic device determines an audio output effect of the step audio signal based on the direction and the intensity.
After the direction and intensity of the step audio signal are determined, the electronic device outputs the footstep audio through its sound-generating device according to the determined direction and intensity; here, the electronic device can output binaural, three-dimensional footstep sounds through the earphone, so that the user can perceive in real time the position reached in the historical run, adjust the current running state, and better compete with his or her past self.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the information presentation method provided by the embodiment of the invention, first position information and second position information of a user are acquired, the first position information and the second position information having a time correspondence; an audio output effect is then determined based on at least the position difference between the first position information and the second position information, the audio output effect being capable of at least characterizing the difference between the first position information and the second position information. In this way, the electronic device can determine the position difference between two different pieces of position information of the user and present that difference to the user in real time through the audio output effect. The running information is thus presented to the user through sound, without interfering with the user's line of sight, which improves both the safety and the interest of the information presentation; in addition, the output sound effect gives the user a sense of companionship, as if someone were running alongside, dispelling boredom and loneliness.
Based on the foregoing embodiments, an embodiment of the present invention provides an information presentation method in which the first motion track and the second motion track of the user follow different routes (that is, the user's two runs follow different routes); as shown in fig. 4, the method includes the following steps:
step 401, the electronic device obtains a second start time corresponding to the second motion trajectory, and determines a first start time of the first motion trajectory.
The second motion track is a record from the user's historical running tracks, and the second start time may be the time at which the second motion track starts. The first motion track is the track of the user's current run, and the first start time may be the time at which the first motion track starts.
Step 402, the electronic device synchronizes a second start time of the second motion trajectory with the first start time.
The second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
In this embodiment of the present invention, since the first motion track and the second motion track are different, the electronic device can relate the two tracks through time by synchronizing the first start time corresponding to the first motion track with the second start time corresponding to the second motion track.
Step 403, the electronic device obtains first position information of the user, and determines a relative movement time of the first position information in the first movement track.
The first location information may be current location information of the user; here, the first position information may be expressed in terms of a distance from a start position of the first motion trajectory.
For example, the electronic device takes a position 1000 meters from the starting position as the first position, obtains from the first motion track that the user took 1 minute 5 seconds to run from the start to the 1000-meter mark, and uses that 1 minute 5 seconds as the relative motion time.
And step 404, the electronic device acquires second position information corresponding to the second motion track based on the relative motion time.
Wherein the second location information comprises historical location information of the user; here, the second position information may be represented by a distance from the start position of the second motion trajectory.
For example, following the example in step 403, the relative motion time is 1 minute 5 seconds; the electronic device finds that, 1 minute 5 seconds into the second motion track, the user was 1325 meters from the start position of the second motion track, so the second position information is 1325 meters.
Step 405, the electronic device obtains a position difference between the first position information and the second position information according to the first position information and the second position information.
In this embodiment, when the first motion track corresponding to the first position information and the second motion track corresponding to the second position information follow different routes (that is, the user's two runs follow different routes), the electronic device may synchronize the start time of the first motion track with the start time of the second motion track, use the time correspondence to obtain the distance from the first position information to the start position of the first motion track and the distance from the second position information to the start position of the second motion track, and then determine the relative distance between the first position information and the second position information, thereby obtaining the position difference between them.
For example, through steps 403 and 404 it is determined that the first position information is a position 1000 meters from the start of the first motion track and the second position information is a position 1325 meters from the start of the second motion track; the position difference between the second position information and the first position information is therefore 325 meters, with the position corresponding to the second position information lying 325 meters ahead of the position corresponding to the first position information.
In step 406, the electronic device obtains the gesture data of the user.
Wherein the pose data is capable of characterizing at least a facial orientation of the user.
Step 407, the electronic device determines the strength of the step audio signal based on the position difference, and determines the direction of the step audio signal based on the head pose data and the position difference.
Step 408, the electronic device determines an audio output effect of the step audio signal based on the direction and the intensity.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
According to the information presentation method provided by the embodiment of the invention, first position information and second position information of a user are acquired, the first position information and the second position information having a time correspondence; an audio output effect is then determined based on at least the position difference between the first position information and the second position information, the audio output effect being capable of at least characterizing the difference between the first position information and the second position information. In this way, the electronic device can determine the position difference between two different pieces of position information of the user and present that difference to the user in real time through the audio output effect. The running information is thus presented to the user through sound, without interfering with the user's line of sight, which improves both the safety and the interest of the information presentation; in addition, the output sound effect gives the user a sense of companionship, as if someone were running alongside, dispelling boredom and loneliness.
The information presentation method provided in this embodiment of the application may be applied to the system architecture shown in fig. 5. In this architecture, the electronic device acquires the position information of the historical running track and of the current running track through GPS, and acquires the speed and step cadence of the historical and current running tracks through an accelerometer, where the speed and step cadence of the current track are optional (in fig. 5, brackets indicate content that may or may not be acquired). The electronic device then calculates the position difference from the acquired data and finally generates a three-dimensional footstep sound based on that difference, so that the user can perceive the difference between the current position and the historical running position.
In order to implement the method according to the embodiment of the present invention, an embodiment of the present invention further provides an electronic device, as shown in fig. 6, where the electronic device includes: an acquisition unit 61 and a processing unit 62;
an acquisition unit 61 configured to acquire first location information and second location information of a user; the first position information and the second position information have time corresponding relation;
a processing unit 62 for determining an audio output effect based on at least a position difference of the first position information and the second position information; wherein the audio output effect is at least capable of characterizing the difference between the first location information and the second location information.
In other embodiments of the present invention, the electronic device further comprises a determination unit 63 and a synchronization unit 64; wherein,
a determining unit 63, configured to determine track information and a first starting time of the first motion track within a preset time period;
a synchronization unit 64, configured to synchronize a first start time of the first motion trajectory with a second start time in a second motion trajectory; the second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
In other embodiments of the present invention, the obtaining unit 61 is further configured to obtain a second starting time corresponding to the second motion trajectory;
a determining unit 63, further configured to determine a first starting time of the first motion trajectory;
a synchronization unit 64, configured to synchronize a second start time of a second motion trajectory with the first start time; the second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
In other embodiments of the present invention, the obtaining unit 61 is specifically configured to obtain first position information of a user, and determine a relative movement time of the first position information in a first movement trajectory; and acquiring second position information corresponding to the second motion trail based on the relative motion time.
In other embodiments of the present invention, the processing unit 62 is specifically configured to determine the direction and intensity of the step audio signal based on the position difference between the first position information and the second position information; based on the direction and the intensity, an audio output effect of the step audio signal is determined.
In other embodiments of the present invention, the processing unit 62 is further configured to obtain gesture data of the user; wherein the pose data is capable of characterizing at least a facial orientation of the user; determining an intensity of the step audio signal based on the position difference, and determining a direction of the step audio signal based on the head pose data and the position difference; based on the direction and the intensity, an audio output effect of the step audio signal is determined.
In other embodiments of the present invention, the processing unit 62 is further configured to generate a step audio signal based on the historical step sound frequency information; or, based on the current running frequency of the user, generating virtual footstep sound frequency information; generating a step audio signal based on the virtual step sound frequency information.
According to the electronic device provided by the embodiment of the invention, first position information and second position information of a user are acquired, the first position information and the second position information having a time correspondence; an audio output effect is then determined based on at least the position difference between the first position information and the second position information, the audio output effect being capable of at least characterizing the difference between the first position information and the second position information. In this way, the electronic device can determine the position difference between two different pieces of position information of the user and present that difference to the user in real time through the audio output effect. The running information is thus presented to the user through sound, without interfering with the user's line of sight, which improves both the safety and the interest of the information presentation; in addition, the output sound effect gives the user a sense of companionship, as if someone were running alongside, dispelling boredom and loneliness.
Based on the hardware implementation of each unit in the electronic device, in order to implement the information presentation method provided in the embodiment of the present invention, an embodiment of the present invention further provides an electronic device, as shown in fig. 7, where the apparatus 70 includes: a processor 71 and a memory 72 configured to store a computer program capable of running on the processor,
wherein the processor 71 is configured to perform the method steps in the preceding embodiments when running the computer program.
In practice, of course, the various components of the device 70 are coupled together by a bus system 73, as shown in FIG. 7. It will be appreciated that the bus system 73 is used to enable communications among the components. The bus system 73 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 73 in fig. 7.
In an exemplary embodiment, the present invention further provides a computer-readable storage medium, such as the memory 72, comprising a computer program executable by the processor 71 of the electronic device 70 to perform the steps of the foregoing methods. The computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, a compact disc read-only memory (CD-ROM), or another memory.
The technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (9)

1. A method of information presentation, the method comprising:
acquiring first position information and second position information of a user; wherein the first position information and the second position information have a time correspondence;
generating a step audio signal based on the step sound frequency information;
determining an audio output effect of the step audio signal based on at least a difference in position of the first and second position information; wherein the audio output effect is at least indicative of the first positional information being different from the second positional information;
wherein the determining an audio output effect of the step audio signal based on at least a difference in position of the first position information and the second position information comprises: determining a direction and intensity of the step audio signal based on a position difference of the first position information and the second position information; based on the direction and the intensity, an audio output effect of the step audio signal is determined.
2. The method of claim 1, further comprising:
determining track information and a first starting moment of a first motion track in a preset time period;
determining a matched second motion track based on track information of the first motion track in a preset time period;
synchronizing a first starting time of the first motion track with a second starting time in a second motion track;
the second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
3. The method of claim 1, further comprising:
acquiring a second starting moment corresponding to the second motion track, and determining a first starting moment of the first motion track;
synchronizing a second starting time of the second motion trajectory with the first starting time; the second motion track comprises at least one piece of second position information and the motion relative time of each piece of second position information.
4. The method of claim 2 or 3, wherein the obtaining the first location information and the second location information of the user comprises:
acquiring first position information of a user, and determining the corresponding relative movement time of the first position information in the first movement track;
and acquiring second position information corresponding to the second motion trail based on the relative motion time.
5. The method of claim 1, wherein the determining an audio output effect based on at least a position difference between the first position information and the second position information comprises:
acquiring pose data of the user; wherein the pose data is capable of characterizing at least a facial orientation of the user;
determining an intensity of the step audio signal based on the position difference, and determining a direction of the step audio signal based on the pose data and the position difference;
and determining the audio output effect of the step audio signal based on the direction and the intensity.
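A minimal sketch of claim 5, assuming the pose data reduces to a single facial-orientation yaw angle: the bearing of the position difference is re-expressed relative to where the user is facing, so a position difference behind the user is heard behind the head regardless of its world-frame direction.

import math

def head_relative_direction(first_pos, second_pos, face_yaw_deg, max_distance=50.0):
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    world_bearing = math.degrees(math.atan2(dx, dy))     # bearing in the world frame
    # Subtract the facial orientation so 0 deg means "in front of the face",
    # wrapped into the range [-180, 180).
    direction = (world_bearing - face_yaw_deg + 180.0) % 360.0 - 180.0
    intensity = max(0.0, 1.0 - math.hypot(dx, dy) / max_distance)
    return direction, intensity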
6. The method according to any one of claims 1 to 3, wherein the first position information comprises current position information of the user, and the second position information comprises historical position information of the user.
7. The method of claim 1 or 5, wherein before the audio output effect is determined based on at least the position difference between the first position information and the second position information, the method further comprises:
generating the step audio signal based on historical step sound frequency information;
or,
generating virtual step sound frequency information based on a current running frequency of the user, and generating the step audio signal based on the virtual step sound frequency information.
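A minimal sketch of the second branch of claim 7, assuming footsteps are modelled as short decaying noise bursts spaced according to the user's current running frequency (cadence); the 80 ms burst length and the noise model are illustrative assumptions, not the claimed signal generation.

import numpy as np

def virtual_step_signal(cadence_steps_per_min, duration_s=2.0, sample_rate=44100):
    n = int(duration_s * sample_rate)
    signal = np.zeros(n)
    step_interval = 60.0 / cadence_steps_per_min         # seconds between footsteps
    burst = int(0.08 * sample_rate)                       # 80 ms per footstep
    envelope = np.exp(-np.linspace(0.0, 6.0, burst))      # fast exponential decay
    t = 0.0
    while t < duration_s:
        start = int(t * sample_rate)
        end = min(start + burst, n)
        signal[start:end] += envelope[:end - start] * np.random.uniform(-1, 1, end - start)
        t += step_interval
    return signal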
8. An electronic device, the electronic device comprising: an acquisition unit and a processing unit; wherein,
the acquiring unit is configured to acquire first position information and second position information of a user; wherein the first position information and the second position information have a temporal correspondence;
the processing unit is configured to generate a step audio signal based on step sound frequency information;
the processing unit is further configured to determine an audio output effect of the step audio signal based on at least a position difference between the first position information and the second position information; wherein the audio output effect is at least indicative of the first position information being different from the second position information; wherein the determining an audio output effect of the step audio signal based on at least the position difference between the first position information and the second position information comprises: determining a direction and an intensity of the step audio signal based on the position difference between the first position information and the second position information; and determining the audio output effect of the step audio signal based on the direction and the intensity.
9. An electronic device, the electronic device comprising: a processor and a memory configured to store a computer program capable of running on the processor,
wherein the processor is configured to perform the steps of the method of any one of claims 1 to 7 when running the computer program.
CN201810558051.4A 2018-06-01 2018-06-01 Information presentation method, electronic equipment and computer readable storage medium Active CN108924761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810558051.4A CN108924761B (en) 2018-06-01 2018-06-01 Information presentation method, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810558051.4A CN108924761B (en) 2018-06-01 2018-06-01 Information presentation method, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108924761A CN108924761A (en) 2018-11-30
CN108924761B true CN108924761B (en) 2020-11-20

Family

ID=64410711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810558051.4A Active CN108924761B (en) 2018-06-01 2018-06-01 Information presentation method, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108924761B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101898021A (en) * 2009-05-31 2010-12-01 苏州百源软件设计有限公司 Exercising machine having network and game simulator functions
CN103364756A (en) * 2012-04-05 2013-10-23 三星电子(中国)研发中心 Virtual same time-space motion system and method
CN105528187A (en) * 2014-10-24 2016-04-27 虹映科技股份有限公司 moving image system and method
CN105549947A (en) * 2015-12-21 2016-05-04 联想(北京)有限公司 Audio device control method and electronic device
CN107349573A (en) * 2016-05-10 2017-11-17 北京环形山网络科技有限公司 Virtual reality body-building system based on smart mobile phone and Internet of Things

Also Published As

Publication number Publication date
CN108924761A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
US11758346B2 (en) Sound localization for user in motion
CN102855116B (en) Messaging device and information processing method
US10657727B2 (en) Production and packaging of entertainment data for virtual reality
US10444843B2 (en) Systems and methods for converting sensory data to haptic effects
US20190313201A1 (en) Systems and methods for sound externalization over headphones
EP3584539B1 (en) Acoustic navigation method
CN114885274B (en) Spatialization audio system and method for rendering spatialization audio
CN109059929B (en) Navigation method, navigation device, wearable device and storage medium
US20200221245A1 (en) Information processing apparatus, information processing method and program
US20190130644A1 (en) Provision of Virtual Reality Content
CN101668567A (en) Personal training device using multi-dimensional spatial audio
CN117859077A (en) System and method for generating three-dimensional map of indoor space
US20210366450A1 (en) Information processing apparatus, information processing method, and recording medium
WO2016183606A1 (en) Sexual interaction device and method for providing an enhanced computer mediated sexual experience to a user
US11994676B2 (en) Method and system for resolving hemisphere ambiguity in six degree of freedom pose measurements
US10667073B1 (en) Audio navigation to a point of interest
CN109983784B (en) Information processing apparatus, method and storage medium
CN108924761B (en) Information presentation method, electronic equipment and computer readable storage medium
WO2015033446A1 (en) Running assistance system and head mount display device used in same
JP6884854B2 (en) Audio providing device, audio providing method and program
WO2023069988A1 (en) Anchored messages for augmented reality
JP7484290B2 (en) MOBILE BODY POSITION ESTIMATION DEVICE AND MOBILE BODY POSITION ESTIMATION METHOD
CN107632703A (en) Mixed reality audio control method and service equipment based on binocular camera
US10841568B2 (en) Method and device to visualize image during operation of ride in amusement park and method and device to manage corresponding device
JP7038798B2 (en) Position estimation device, position estimation method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant