WO2021128287A1 - Data generation method and device - Google Patents

Data generation method and device

Info

Publication number
WO2021128287A1
Authority
WO
WIPO (PCT)
Prior art keywords
spatial
information
data
spatial object
object information
Prior art date
Application number
PCT/CN2019/129214
Other languages
English (en)
French (fr)
Inventor
袁庭球
张立斌
张慧敏
刘畅
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to JP2022539102A priority Critical patent/JP2023508418A/ja
Priority to KR1020227024388A priority patent/KR20220111715A/ko
Priority to EP19958038.2A priority patent/EP4060522A4/en
Priority to CN201980102878.7A priority patent/CN114787799A/zh
Priority to PCT/CN2019/129214 priority patent/WO2021128287A1/zh
Publication of WO2021128287A1 publication Critical patent/WO2021128287A1/zh
Priority to US17/848,748 priority patent/US20220322009A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3626Details of the output of route guidance instructions
    • G01C21/3629Guidance using speech or audio output, e.g. text-to-speech
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/687Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/907Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/909Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10KSOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/18Methods or devices for transmitting, conducting or directing sound
    • G10K11/26Sound-focusing or directing, e.g. scanning

Definitions

  • the embodiments of the present application relate to the technical field of smart terminals, and in particular, to a data generation method and device.
  • For smart electronic devices with navigation functions, such as mobile phones, their navigation services generally plan a path based on the start point and end point input by the user, and output the path planning result in the form of voice playback.
  • However, the voice playback prompt is not intuitive enough: when users are not familiar with the local environment, they cannot accurately match their own location with the path on the map.
  • In addition, the navigation system is limited by positioning accuracy, and it often takes a period of time after the user has gone the wrong way before a yaw (off-route) prompt is given. Navigation prompts on a planned route are also cumbersome, so users need to frequently open the map to confirm whether they are heading in the right direction. The operation is cumbersome and the user experience is greatly reduced.
  • the embodiments of the present application provide a data generation method and device, which can enable the user to more intuitively and efficiently know the position information of the spatial object when indicating the spatial object information to the user, thereby greatly improving the user experience.
  • the embodiments of the present application provide a data generation method, which can be used in the technical field of smart terminals.
  • the data generating device obtains spatial object information, where the spatial object information is used to obtain the position information of the spatial object relative to the data generating device. The spatial object information may include, for example, navigation data in text form from the data generating device to the navigation destination,
  • or navigation data in the form of an audio stream from the data generating device to the navigation destination. It can also include the object content and absolute coordinates of the spatial objects around the data generating device, or the object content and relative coordinates of the spatial objects around the data generating device.
  • the data generating device generates content information and orientation information according to the spatial object information, the orientation information is used to indicate the orientation of the spatial object pointed to by the spatial object information relative to the data generating device, and the content information is used to describe the spatial object.
  • the orientation information includes at least one of position information and direction information.
  • the spatial objects include but are not limited to navigation destinations, events that occur in the space, and people, animals, or objects in the space.
  • when the spatial object is the navigation destination, the content information is used to describe the path planning from the data generating device to the navigation destination;
  • when the spatial object is an event that occurs in the space, or a person, animal or object in the space, the content information is used to describe the orientation of the spatial object relative to the data generating device and the object content of the spatial object.
  • the data generating device then generates spatial sound data according to the orientation information and the content information.
  • the spatial sound data is used to play the spatial sound
  • the sound source position of the spatial sound corresponds to the azimuth information.
  • when the spatial object is the navigation destination, the spatial sound data is used to play the spatial sound, the sound source position corresponding to the spatial sound is consistent with the orientation of the navigation destination relative to the data generating device, and the sound content corresponding to the spatial sound is consistent with the path plan from the data generating device to the navigation destination.
  • the spatial sound data includes the orientation information and the content information, or the spatial sound data includes at least two mono signals generated according to the orientation information and the content information, and the at least two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the two mono signals, so as to generate the spatial sound.
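As a reading aid, the two forms that the spatial sound data may take can be pictured with a minimal data-structure sketch; the type and field names below (Orientation, SpatialSoundData, azimuth_deg, and so on) are illustrative assumptions, not terms taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Orientation:
    azimuth_deg: float                  # direction of the spatial object relative to the device
    distance_m: Optional[float] = None  # optional distance component of the position information

@dataclass
class SpatialSoundData:
    # Form 1: carry orientation and content; the playback device renders them itself.
    orientation: Optional[Orientation] = None
    content_text: Optional[str] = None
    # Form 2: carry at least two already-rendered mono signals (lists of samples),
    # one per acoustic-electric transducer module, to be played simultaneously.
    mono_signals: List[List[float]] = field(default_factory=list)
```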
  • the data generating device generating the orientation information according to the spatial object information includes: generating the orientation information according to at least one of the position or posture of the data generating device and the spatial object information.
  • the orientation information includes position information and direction information.
  • the data generating device may generate the orientation information of the spatial object relative to the data generating device based on the spatial position of the data generating device and the spatial position of the spatial object around the data generating device.
  • the data generating device may measure the posture of the data generating device through a gyroscope, inertial sensor or other elements.
  • when the data generating device is a car, the posture is the orientation of the front of the car; when the data generating device is a mobile phone or a navigator, the posture is the orientation of the screen of the mobile phone or the navigator; when the data generating device is a two-channel headset, the posture is the face orientation of the user wearing the two-channel headset;
  • when the data generating device is a separate device composed of a car and a mobile phone located in the car, the posture is the orientation of the front of the car, or the orientation of the screen of the mobile phone located in the car; when the data generating device is a separate device composed of a car and a two-channel headset located in the car, the posture is the orientation of the front of the car, or the face orientation of the user in the car wearing the two-channel headset.
  • In this way, the orientation information of the spatial object relative to the data generating device is generated according to the posture and the spatial object information, so that the sound source position of the spatial sound heard by the end user is consistent with the orientation of the spatial object relative to the data generating device, which improves the accuracy of the user's intuitive experience.
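To illustrate how the orientation information could be derived from the device's position and posture together with the spatial object's coordinates, here is a minimal Python sketch; the flat x/y frame, the function name and the sign conventions are assumptions for illustration rather than anything specified by the patent:

```python
import math

def relative_azimuth(device_xy, device_heading_deg, object_xy):
    """Bearing of the spatial object relative to the device's facing direction.

    0 degrees = straight ahead, positive = to the right (clockwise).
    Assumes a flat local x/y frame in metres; a real system would first convert
    latitude/longitude coordinates into such a frame.
    """
    dx = object_xy[0] - device_xy[0]
    dy = object_xy[1] - device_xy[1]
    absolute_bearing = math.degrees(math.atan2(dx, dy))   # 0 degrees = +y ("north")
    return (absolute_bearing - device_heading_deg + 180.0) % 360.0 - 180.0

# Example: an object 10 m to the north-east while the user faces north:
# relative_azimuth((0, 0), 0.0, (10, 10)) -> 45.0, i.e. front-right.
```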
  • the data generation device acquiring the spatial object information includes: receiving the spatial object information; or, collecting the spatial object information through a sensor.
  • the receiving methods include at least one of cellular communication, wireless local area network, worldwide interoperability for microwave access, Bluetooth communication technology, ZigBee communication technology, optical communication, satellite communication, infrared communication, transmission line communication, a hardware interface, wired reception on a hardware circuit board, obtaining information from a software module, or reading information from a storage device.
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
  • the data generation device can generate spatial sound data according to the received spatial object information, or generate spatial sound data according to the spatial object information collected by the sensor; that is, the data presentation method provided by this solution can be applied in a variety of application scenarios, which extends the application scenarios of this solution and improves its flexibility.
  • the data generating device receiving the spatial object information includes receiving the spatial object information in at least one of the following three ways. First, the data generating device receives audio stream data generated by an application program, and determines the audio stream data as the received spatial object information.
  • the audio stream data can be navigation data in the form of an audio stream, and voice recognition is performed on the navigation data in the form of an audio stream to obtain the orientation information and the content information. Second, the data generating device receives interface data generated by an application program, and determines the interface data as the received spatial object information.
  • the interface data may be navigation data in text form, and the data generating device uses the field value of the content field and the field value of the location field included in the navigation data in text form to generate the content information and the orientation information.
  • Third, the data generating device receives map data stored on the network side or the terminal side, and determines the map data as the received spatial object information; the map data includes the object content and coordinates of the spatial objects around the data generating device.
  • the aforementioned coordinates can be absolute coordinates or relative coordinates.
  • the data generating device generates the orientation information according to the coordinates of the spatial objects around the data generating device, and generates the content information according to the orientation information and the object content of the spatial objects around the data generating device.
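When the map data carries absolute (latitude/longitude) coordinates, the orientation information can be obtained by converting those coordinates into a bearing and distance relative to the device. A rough sketch using the standard great-circle formulas (the function name and units are assumptions):

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees clockwise from north) and great-circle distance (metres)
    from the device at (lat1, lon1) to a surrounding spatial object at (lat2, lon2)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    return bearing, distance
```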
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
  • the data generating device generating the content information and the orientation information according to the spatial object information includes: when the data generating device determines that the spatial object information is spatial object information that meets a preset condition, generating the content information and the orientation information according to the spatial object information. Specifically, the data generating device determines whether the spatial position indicated by the orientation information is located within a preset spatial position area, or the data generating device determines whether the spatial position indicated by the orientation information is located in a preset spatial direction, or the data generating device determines whether the object content indicated by the content information is preset object content. In a case where the judgment result of any one or more of the foregoing is yes, the data generating device determines that the spatial object information satisfies the preset condition.
  • before generating the spatial sound data, it is judged whether the spatial object information meets the preset condition, and the spatial sound data is generated based on the spatial object information only when the judgment result is that the preset condition is satisfied; that is, the spatial object information is filtered, which not only avoids the waste of computing resources caused by spatial object information that does not meet the preset condition, but also avoids excessive disturbance to the user, and improves the user stickiness of the solution.
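A minimal sketch of such a preset-condition filter follows; treating the preset spatial position area as a simple distance threshold and matching the preset object content by keyword are simplifying assumptions made only for illustration:

```python
def meets_preset_condition(orientation, content,
                           preset_radius_m=None, preset_direction_deg=None,
                           preset_contents=None, direction_tolerance_deg=30.0):
    """Return True if the spatial object information satisfies any preset condition.

    orientation: dict with 'azimuth_deg' and optional 'distance_m' keys.
    content:     text describing the spatial object.
    """
    if preset_radius_m is not None and orientation.get("distance_m") is not None:
        if orientation["distance_m"] <= preset_radius_m:       # e.g. "within 50 m"
            return True
    if preset_direction_deg is not None:
        diff = abs((orientation["azimuth_deg"] - preset_direction_deg + 180.0) % 360.0 - 180.0)
        if diff <= direction_tolerance_deg:                     # e.g. "roughly ahead"
            return True
    if preset_contents is not None:
        if any(keyword in content for keyword in preset_contents):
            return True
    return False
```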
  • the method further includes: in a case where the data generating apparatus determines that the spatial object information is spatial object information that meets the preset condition, generating volume increase indication information, where the volume increase indication information is used to indicate increasing the volume of the spatial sound corresponding to the spatial object information that meets the preset condition, and the volume increase indication information may carry the volume value by which the volume needs to be increased.
  • in this way, the playback volume of the spatial sound for a spatial object with preset object content can be increased to attract the user's attention and prevent the user from missing the spatial object with the preset object content, which is beneficial to improving the safety of the navigation process and also prevents the user from missing a spatial object of interest, which increases the user stickiness of this solution.
  • the method further includes: in a case where the data generating device determines that the spatial object information is spatial object information that meets the preset condition, generating volume reduction indication information, where the volume reduction indication information is used to indicate reducing the volume of the spatial sound corresponding to the spatial object information that meets the preset condition, and the volume reduction indication information may carry the volume value by which the volume needs to be reduced.
  • the spatial object information that satisfies the preset condition is the spatial object information including the preset spatial location area, the preset spatial direction, or the preset object content.
  • the preset spatial location area refers to the spatial location area relative to the spatial location of the data generation device or the audio playback device;
  • the preset spatial direction may be a direction relative to the data generation device or the audio playback device. For example, when the audio playback device is a dual-channel headset,
  • the preset spatial direction is the face orientation of the user wearing the dual-channel headset;
  • in the case of a mobile phone or a navigator, the preset spatial direction is the direction of movement of the mobile phone or the navigator;
  • in the case of a car, the preset spatial direction is the orientation of the front of the car. The preset spatial direction can also be an absolute spatial direction. The preset object content can be input by the user in advance, or can be determined autonomously by the data generating device.
  • when the audio playback device is a dual-channel headset, the preset spatial direction may be the face orientation of the user wearing the dual-channel headset, and the face orientation of the user wearing the dual-channel headset can be measured by the gyroscope, inertial sensor or other components configured in the dual-channel headset.
  • the data generating device generating the spatial sound data according to the orientation information and the content information includes: the data generating device performs a rendering operation on the audio stream data corresponding to the content information according to the orientation information, the content information, and the posture of the data generating device, to generate the spatial sound data; or, the data generating device performs a rendering operation on the audio stream data corresponding to the content information according to the orientation information, the content information, and the posture of the audio playback device, to generate the spatial sound data.
  • the spatial sound data includes at least two mono signals generated according to the position information and the content information.
  • the rendering specifically involves incorporating the spatial orientation information into the audio stream data through a specific algorithm or data processing operation, and finally generating at least two mono signals; the at least two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the two mono signals, so as to generate the spatial sound.
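One very simple way to realize such a rendering operation is to derive an interaural time difference (ITD) and level difference (ILD) from the azimuth and apply them to the mono content stream, yielding two mono signals. The sketch below uses a crude Woodworth-style ITD model purely as an assumed stand-in for whatever rendering algorithm an implementation actually uses (for example HRTF filtering):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, rough average head radius

def render_binaural(mono, sample_rate, azimuth_deg):
    """Render a mono stream into two mono signals (left, right) whose perceived
    source direction roughly matches azimuth_deg (0 = front, +90 = right)."""
    az = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (abs(az) + np.sin(abs(az)))  # Woodworth ITD
    delay = int(round(itd * sample_rate))
    near_gain = 1.0
    far_gain = 10.0 ** (-abs(np.sin(az)) * 6.0 / 20.0)   # up to ~6 dB quieter far ear
    delayed = np.concatenate([np.zeros(delay), mono])    # far ear: delayed, quieter
    padded = np.concatenate([mono, np.zeros(delay)])     # near ear: on time, full level
    if azimuth_deg >= 0:    # source on the right: left ear is the far ear
        return far_gain * delayed, near_gain * padded
    return near_gain * padded, far_gain * delayed
```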
  • an embodiment of the present application provides a data generation device, which includes an acquisition module and a generation module.
  • the acquisition module is used to obtain the spatial object information
  • the spatial object information is used to obtain the position information of the spatial object relative to the data generating device
  • the generating module is used to generate content information and orientation information according to the spatial object information.
  • the orientation information is used to indicate the orientation, relative to the data generating device, of the spatial object pointed to by the spatial object information.
  • the content information is used to describe the spatial object. The generating module is also used to generate spatial sound data based on the orientation information and the content information.
  • the spatial sound data is used to play the spatial sound,
  • and the sound source position of the spatial sound corresponds to the orientation information.
  • the spatial sound data includes the orientation information and the content information,
  • or the spatial sound data includes at least two mono signals generated according to the orientation information and the content information,
  • and the at least two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the two mono signals, so as to generate the spatial sound.
  • the generating module is specifically configured to generate orientation information according to at least one of the position or posture of the data generating device and the spatial object information.
  • the acquisition module is specifically used to receive spatial object information, or to collect spatial object information through a sensor.
  • the acquisition module is specifically used to receive the spatial object information in at least one of the following three ways: receiving audio stream data generated by an application program, or receiving interface data generated by an application program, or receiving map data stored on the network side or the terminal side.
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
  • the generating module is specifically configured to generate content information and orientation information according to the spatial object information when it is determined that the spatial object information is the spatial object information that satisfies a preset condition.
  • the generating module is further configured to generate volume increase indication information when it is determined that the spatial object information is spatial object information meeting the preset condition, and the volume increase indication information is used to indicate increasing the volume of the spatial sound corresponding to the spatial object information that satisfies the preset condition.
  • the generating module is further configured to generate volume reduction indication information when it is determined that the spatial object information is spatial object information that satisfies the preset condition, and the volume reduction indication information is used to indicate reducing the volume of the spatial sound corresponding to the spatial object information that satisfies the preset condition.
  • the spatial object information that satisfies the preset condition is the spatial object information including the preset spatial location area, the preset spatial direction, or the preset object content.
  • the generating module is specifically configured to perform a rendering operation on the audio stream data corresponding to the content information according to the orientation information, the content information, and the posture of the audio playback device, so as to generate the spatial sound data; the spatial sound data includes at least two mono signals generated according to the orientation information and the content information.
  • the data generating device includes at least one of a headset, a mobile phone, a portable computer, a navigator, or a car.
  • the data generating device may be an integrated device that works independently, or a separate system composed of multiple different devices that work together.
  • the preset spatial direction is the face orientation of the user wearing the two-channel headset, and the audio playback device is used to play spatial sound.
  • an embodiment of the present application provides a data generation device, including a memory and a processor, where the memory stores computer program instructions, and the processor runs the computer program instructions to execute the data generation method described in the first aspect.
  • the data generating device further includes a transceiver for receiving spatial object information.
  • the data generating device further includes a sensor for collecting spatial object information.
  • the data generating device includes at least one of a headset, a mobile phone, a portable computer, a navigator, or a car.
  • the processor may also be used to execute the steps executed by the data generating apparatus in each possible implementation manner of the first aspect.
  • for details, please refer to the first aspect, which will not be repeated here.
  • the embodiments of the present application provide a computer-readable storage medium in which a computer program is stored, and when it is run on a computer, the computer executes the data generation method described in the first aspect.
  • the embodiments of the present application provide a computer program that, when running on a computer, causes the computer to execute the data generation method described in the first aspect.
  • the embodiments of the present application provide a chip system including a processor, which is used to support a server or a data generating device in implementing the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods.
  • the chip system further includes a memory, and the memory is used to store necessary program instructions and data for the server or the communication device.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of a data generation method provided in Embodiment 1 of this application;
  • FIG. 2 is a schematic diagram of the data generation method provided in the second embodiment of the application.
  • FIG. 3 is a schematic diagram of a data generation method provided in Embodiment 3 of this application.
  • FIG. 4 is a schematic diagram of still another implementation manner of the data generation method provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of yet another implementation manner of the data generation method provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of still another implementation manner of the data generation method provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of yet another implementation manner of the data generation method provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of an implementation manner of a data generation method provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of another implementation manner of the data generation method provided by an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of a data generating device provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of another structure of a data generating device provided by an embodiment of the application.
  • the embodiment of the application provides a data generation method and related equipment, which play a spatial sound corresponding to the orientation information in the navigation data, so that the user can determine the correct forward direction according to the sound source position of the spatial sound heard. This presentation method is more intuitive and improves the efficiency of the navigation process.
  • when the spatial object is another type of object, this likewise provides a more intuitive and efficient data presentation method.
  • the embodiments of the present application can be applied to various scenarios for playing spatial object information, where the spatial object information includes description information for a spatial object, and a spatial object refers to an object located in a three-dimensional (3-dimension, 3D) space.
  • objects in three-dimensional space have corresponding spatial positions, and spatial objects can include solid objects and non-solid objects in three-dimensional space.
  • spatial objects include, but are not limited to, navigation destinations, events that occur in the space, people in the space, animals in the space, or static objects in the space.
  • the application scenarios of the embodiments of the present application include but are not limited to walking navigation, car navigation, using map data stored on the network side or terminal side to introduce surrounding space objects, or using sensors to sense and introduce surrounding space objects (the actual scenarios are not limited to these).
  • The following describes the aforementioned four typical application scenarios one by one.
  • the following first takes a pedestrian navigation scenario as an example to introduce three implementation modes of the data generation method provided in the embodiment of the present application.
  • FIG. 1 is a schematic diagram of the data generation method provided in the first embodiment of this application.
  • the data generation method is executed by the data generation device 10 in FIG. 1.
  • Fig. 1 shows the headset form of the data generating device 10. It should be understood that the use of a headset as the data generating device 10 is only an example, and the data generating device 10 may also include other types of equipment. The specific implementation process in this application scenario will be described in detail below.
  • the headset may be provided with a navigation application program.
  • the headset obtains the navigation starting point and the navigation destination, and sends the navigation starting point and the navigation destination to the navigation server 11 through the aforementioned navigation application.
  • the navigation server 11 uses the map data stored therein to determine the navigation data between the navigation start point and the navigation destination.
  • the headset receives the navigation data sent by the navigation server 11 through the navigation application.
  • After acquiring the navigation data, the headset generates content information and orientation information according to the navigation data, where the orientation information is the orientation information of the navigation destination relative to the headset.
  • the headset can also acquire the posture of the headset, and generate orientation information based on the posture of the headset and navigation data.
  • the earphone generates spatial sound data after generating position information and content information.
  • the earphone can also determine whether the object content indicated by the content information is the preset object content. In the case that the object content indicated by the content information is the preset object content, the earphone generates volume increase indication information, and the headset can generate the spatial sound data according to the orientation information, the content information, and the volume increase indication information.
  • the spatial sound data includes the orientation information and the content information, or the spatial sound data includes two mono signals generated according to the orientation information and the content information, and the two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the two mono signals to generate the spatial sound.
  • in the case where the object content indicated by the content information is not the preset object content, the headset generates volume reduction instruction information, and the headset may generate the spatial sound data according to the orientation information, the content information, and the volume reduction instruction information.
  • the playback volume of the spatial sound may also be adjusted according to the spatial sound data. Specifically, if the headset generates volume increase instruction information, the playback volume of the spatial sound is increased, and if the headset generates volume reduction instruction information, the playback volume of the spatial sound is reduced.
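The volume increase or decrease indication can then be applied as a simple gain on the rendered signals; the 6 dB step used as a default below is an arbitrary assumption for cases where the indication carries no explicit volume value:

```python
def apply_volume_indication(left, right, indication=None, step_db=6.0):
    """Scale the two rendered mono signals (e.g. numpy arrays) according to a
    volume indication: 'increase', 'decrease', or None for no change."""
    if indication == "increase":
        gain = 10.0 ** (step_db / 20.0)
    elif indication == "decrease":
        gain = 10.0 ** (-step_db / 20.0)
    else:
        gain = 1.0
    return left * gain, right * gain
```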
  • the headset can also obtain the real-time posture of the headset before generating the two mono signals based on the spatial sound data, and then re-render the content information in the form of an audio stream according to the real-time posture of the headset. The two mono signals obtained by the re-rendering are transmitted to the two acoustic-electric transducer modules of the earphone, and the spatial sound is played through the acoustic-electric transducer modules, so that the sound source position of the spatial sound heard by the user is consistent with the real-time orientation of the spatial object relative to the earphone.
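Keeping the perceived sound source consistent with the real-time posture essentially means subtracting the current head yaw from the object's world-frame bearing before re-rendering. A standalone sketch with assumed angle conventions:

```python
def head_relative_azimuth(object_azimuth_world_deg, head_yaw_deg):
    """Azimuth of the spatial object relative to the listener's current face orientation.

    Feeding this value into the renderer keeps the virtual sound source anchored
    in the world as the head turns."""
    return (object_azimuth_world_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# Example: a destination at world bearing 90 degrees (due east); once the user turns
# to face east (head_yaw_deg = 90), the re-rendered source sits at 0 degrees, straight ahead.
```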
  • the spatial sound data is generated according to the orientation information and content information of the navigation destination.
  • the spatial sound data is used to play the spatial sound.
  • the position of the sound source of the spatial sound is consistent with the position of the navigation destination relative to the headset.
  • the playback content of the sound is consistent with the path plan from the data generating device to the navigation destination, so that the user can determine the correct direction according to the sound source position of the spatial sound heard. The information presentation method is more intuitive, eliminating the need to open the map frequently to confirm whether the user is heading in the right direction; the operation is simple, and the convenience, safety and user experience of navigation are improved.
  • the data generating device 10 shown in FIG. 1 as a headset form is only an example. In practical applications, the data generating device 10 may also be a car, a mobile phone, a portable computer, a navigator or other portable terminal equipment.
  • FIG. 2 is a schematic diagram of the data generation method provided in the second embodiment of this application.
  • the data generation method is executed by the data generation device 20 in FIG. 2.
  • the data generating device 20 in FIG. 2 is a mobile phone
  • the audio playback device 21 is a headset
  • the mobile phone and the headset are independent devices. It should be understood that FIG. 2 is only an example, and the specific implementation process in this application scenario will be described in detail below.
  • the mobile phone (that is, an example of the data generating device 20) receives the navigation data sent by the navigation server 22. After receiving the navigation data, the mobile phone generates content information and orientation information according to the spatial object information, and then generates spatial acoustic data.
  • the specific implementation manner is similar to the specific implementation manner in which the data generation device 10 in the embodiment corresponding to FIG. 1 executes the foregoing steps, and will not be repeated here.
  • the spatial sound data in this embodiment includes the orientation information and the content information. The spatial sound data is sent to the earphone (that is, an example of the audio playback device 21), and the earphone performs a rendering operation to generate two mono signals; the two mono signals are transmitted to the acoustic-electric transducer modules, and the spatial sound is played through the two acoustic-electric transducer modules of the headset.
  • the headset can also adjust the playback volume of the spatial sound according to the spatial sound data. Specifically, if volume increase instruction information is generated, the playback volume of the spatial sound is increased, and if volume decrease instruction information is generated, the playback volume of the spatial sound is decreased.
  • the posture of the earphone can also be obtained, and the content information in the form of an audio stream is re-rendered according to the posture of the earphone. The two mono signals obtained by the re-rendering operation are transmitted to the two acoustic-electric transducer modules of the headset, and the spatial sound is played through the acoustic-electric transducer modules, so that the sound source position of the spatial sound heard by the user is consistent with the orientation information of the spatial object.
  • the specific representation of the data generating device 20 in FIG. 2 as a mobile phone is only an example. In practical applications, the data generating device 20 may also be represented in the shape of a portable computer, a navigator or other portable terminal equipment.
  • FIG. 3 is a schematic diagram of an implementation of the data generation method provided by an embodiment of the application.
  • the data generation device 30 is a mobile phone
  • the audio playback device 31 is a headset.
  • the mobile phone and the headset are independent devices. It should be understood that Figure 3 is only an example. The specific implementation process in this application scenario will be described in detail below.
  • the mobile phone (that is, an example of the data generating device 30) receives the navigation data sent by the navigation server 32. After receiving the spatial object information, the mobile phone generates content information and orientation information according to the spatial object information, and then performs a rendering operation based on the content information and orientation information to generate spatial acoustic data.
  • the spatial sound data in this embodiment refers to at least two mono signals.
  • the specific implementation manner is similar to the specific implementation manner in which the data generation device 10 in the embodiment corresponding to FIG. 1 executes the foregoing steps, and will not be repeated here.
  • the mobile phone sends two monophonic signals to the earphone (that is, an example of the audio playback device 31), and the earphone inputs the two monophonic signals to the acoustic-electric transducer module to play spatial sound.
  • the headset can acquire the attitude of the headset and send the attitude of the headset to the mobile phone.
  • the mobile phone re-renders the content information in the form of an audio stream according to the attitude of the headset, and sends the two mono signals obtained by the re-rendering operation to the earphone.
  • the earphone inputs two monophonic signals to the acoustic-electric transducer module to play the spatial sound.
  • the specific representation of the data generating device 30 in FIG. 3 as a mobile phone form is only an example. In practical applications, the data generating device 30 may also be represented in the form of a portable computer, a navigator or other portable terminal equipment.
  • the specific implementations of the data generating device, audio playback device, and navigation server in the car navigation scenario can all refer to the above-mentioned embodiments, and will not be repeated here.
  • the specific display form of the data generating device can be a car or an audio playback device in addition to the example in the embodiment corresponding to FIG.
  • the specific display form of the data generating device may be a car in addition to the example in the embodiment corresponding to FIG. 1.
  • the posture may specifically be the posture of the front of the car, the posture of the wheels of the car, or the posture of other components.
  • FIG. 4 is a schematic diagram of a scenario of a data generation method provided by an embodiment of the application.
  • Figure 4 shows a car navigation scene in which a person is driving in a car with headphones.
  • the data generation device is a mobile phone
  • the audio playback device is a headset
  • the data generation device and the audio playback device are independent devices as an example.
  • the specific implementations of the data generating device 40, the audio playing device 41, and the navigation server 42 in FIG. 4 are similar to those of the data generating device 20, the audio playing device 21, and the navigation server 22 in the corresponding embodiment in FIG. 2, and will not be repeated here.
  • the example in FIG. 4 is only to facilitate the understanding of the solution, and is not used to limit the solution.
  • FIG. 5 is a schematic diagram of an implementation manner of the data generation method provided by an embodiment of the application, and the data generation method is executed by the data generation device 50 in FIG. 5. FIG. 5 shows the headset form of the data generating device 50. It should be understood that FIG. 5 is only an example. The following takes the data generating device 50 as a headset as an example to describe the specific implementation process in this application scenario in detail.
  • the headset obtains the absolute coordinates (also called latitude and longitude coordinates) corresponding to the spatial position of the headset, and sends the absolute coordinates corresponding to the spatial position of the headset to the data server 51.
  • the headset receives the spatial object information sent by the data server 51, and the spatial object information includes the object content of the spatial objects around the headset and the spatial positions of the spatial objects around the headset.
  • the spatial position of the spatial object around the headset may be the absolute coordinates of the spatial object or the relative coordinates of the spatial object relative to the headset.
  • after the headset receives the spatial object information, it generates the orientation information according to the spatial positions of the spatial objects around the headset.
  • the headset may also acquire the posture, and generate the position information of the spatial object relative to the data generating device 50 according to the posture and the spatial object information.
  • the headset generates content information based on the orientation information and the object content of the spatial objects around the headset.
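Composing the content information from the object content and the orientation information could look like the following sketch; the sector boundaries and the wording of the description are purely illustrative assumptions:

```python
def make_content_info(object_name, azimuth_deg, distance_m):
    """Compose a spoken description such as "Coffee shop, about 30 metres on your right"."""
    if -30.0 <= azimuth_deg <= 30.0:
        side = "ahead"
    elif 30.0 < azimuth_deg <= 150.0:
        side = "on your right" if azimuth_deg <= 90.0 else "behind you on the right"
    elif -150.0 <= azimuth_deg < -30.0:
        side = "on your left" if azimuth_deg >= -90.0 else "behind you on the left"
    else:
        side = "behind you"
    return f"{object_name}, about {round(distance_m)} metres {side}"
```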
  • the earphone needs to generate spatial acoustic data after acquiring the position information and content information.
  • the specific implementation is similar to the manner in which the data generating device 10 in the embodiment corresponding to FIG. 1 executes the foregoing steps, and will not be repeated here.
  • the earphone can also determine whether the spatial position indicated by the orientation information is located in the preset spatial position area, or whether the spatial position indicated by the orientation information is located in the preset spatial direction, or whether the object content indicated by the content information is the preset object content; if the result of any one or more of the foregoing judgments is yes, it is determined that the spatial object information satisfies the preset condition.
  • when the spatial object information satisfies the preset condition, the earphone generates the spatial sound data according to the orientation information and the content information. In the case that the spatial object information does not meet the preset condition, the headset no longer generates spatial sound data according to the spatial object information, and can process the next piece of spatial object information.
  • in this way, the headset does not play the spatial object information that does not meet the preset condition, that is, preliminary screening of the spatial object information is performed to reduce interference to users and increase the user stickiness of the solution.
  • alternatively, when the spatial object information satisfies the preset condition, the headset generates volume increase indication information, and generates the spatial sound data according to the orientation information, the content information, and the volume increase indication information; in the case that the spatial object information does not meet the preset condition, the headset no longer generates spatial sound data according to the spatial object information, and can process the next piece of spatial object information.
  • alternatively, when the spatial object information meets the preset condition, the headset generates volume increase indication information, and generates the spatial sound data according to the orientation information, the content information, and the volume increase indication information; in the case that the spatial object information does not meet the preset condition, the headset generates the spatial sound data according to the orientation information and the content information.
  • alternatively, when the spatial object information meets the preset condition, the headset generates volume increase indication information, and generates the spatial sound data according to the orientation information, the content information, and the volume increase indication information;
  • in the case that the spatial object information does not meet the preset condition, the headset generates volume reduction instruction information, and generates the spatial sound data according to the orientation information, the content information, and the volume reduction instruction information.
  • after generating the spatial sound data, the earphone generates and plays the spatial sound according to the spatial sound data.
  • for details, refer to the process in which the data generating device 10 in the embodiment corresponding to FIG. 1 performs the foregoing steps, which will not be repeated here.
  • the specific representation of the data generating device 50 in FIG. 5 in the form of a headset is only an example. In practical applications, the data generating device 50 may also be represented in the form of a mobile phone, a portable computer, a navigator, or a car.
  • the data generating device is a mobile phone
  • the audio playback device is a headset
  • the mobile phone and the headset are mutually independent devices as an example for description.
  • the mobile phone sends the absolute coordinates corresponding to the spatial position of the mobile phone to the data server, receives the spatial object information sent by the data server, and generates content information and orientation information according to the spatial object information.
  • the mobile phone can also acquire the posture of the headset, and generate orientation information based on the posture of the headset and the spatial object information. Specifically, the headset measures the attitude of the headset, and sends the attitude of the headset to the mobile phone.
  • after the mobile phone obtains the content information and the orientation information, it generates the spatial sound data.
  • the spatial sound data includes the orientation information and the content information.
  • the specific implementation is similar to the manner in which the data generating device 50 in the embodiment corresponding to FIG. 5 performs the foregoing steps, and will not be repeated here.
  • after the mobile phone generates the spatial sound data, it sends the spatial sound data to the headset; the headset performs a rendering operation according to the spatial sound data to obtain at least two mono signals, and transmits the at least two mono signals to the acoustic-electric transducer modules to play the spatial sound through the acoustic-electric transducer modules.
  • the data generation device is a mobile phone
  • the audio playback device is a headset
  • the mobile phone and the headset are mutually independent devices as an example for description.
  • the mobile phone sends the absolute coordinates corresponding to the spatial position of the mobile phone to the data server, receives the spatial object information sent by the data server, and generates content information and orientation information according to the spatial object information.
  • the mobile phone then performs a rendering operation based on the content information and the orientation information to generate the spatial sound data.
  • the spatial sound data in this embodiment refers to at least two mono signals generated according to the orientation information and the content information; the at least two mono signals are sent to the earphone, and the earphone transmits the at least two mono signals to the acoustic-electric transducer modules to play the spatial sound.
  • the data generating device can also be in the form of a mobile phone, a portable computer, or a navigator.
  • Terminal-side electronic devices equipped with sensors can also be represented by other terminal-side devices such as earphones, portable computers, navigators, or smart home appliances.
  • this implementation method is similar to the aforementioned implementation method that uses map data stored on the network side or the terminal side to introduce surrounding space objects. The difference is that, in this implementation,
  • the data server 51 is replaced with electronic equipment on the terminal side, and the aforementioned electronic equipment on the terminal side is different from the terminal equipment integrated with the data generating device and the audio playing device.
  • the specific execution steps of the data generation device and the audio playback device are similar to the specific execution steps of the data generation device 50 and the audio playback device in the embodiment corresponding to FIG. 5, and the specific execution steps of the electronic equipment on the terminal side are similar to the specific execution steps of the data server 51 in the embodiment corresponding to FIG. 5, and will not be repeated here.
  • FIG. 6 is a schematic diagram of an implementation manner of the data generation method provided by an embodiment of the application.
  • the data generation method is executed by the data generation device 60 in FIG. 6.
  • Figure 6 shows the headset form of the data generation device 60. It should be understood that Figure 6 is only an example. The following takes the case where the data generation device 60 is a headset equipped with a photosensitive sensor as an example to describe the specific implementation process in detail.
  • the headset collects spatial object information through a photosensitive sensor.
  • at least two photosensitive sensors are deployed on the headset, and the headset uses the data collected by the photosensitive sensors to locate the spatial position of the light source (that is, the spatial object around the headset in FIG. 6) to generate the relative coordinates of the light source relative to the headset, and determines the type of light source based on the data collected by the photosensitive sensors.
  • at least two image sensors can also be deployed on the headset. The headset uses a binocular vision algorithm to locate the spatial objects around the headset to generate the relative coordinates of the spatial objects around the headset with respect to the headset; after obtaining images of the spatial objects around the headset through the image sensors, the images of the spatial objects can be recognized to obtain the object content of the spatial objects.
  • the example here is only to prove the feasibility of the solution and is not intended to limit this solution.
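For reference, the depth of an object seen by two image sensors follows the standard binocular relation Z = f·B/d (focal length times baseline over disparity). A small sketch with assumed pixel-space parameters, not tied to any particular sensor in the patent:

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth of a spatial object from two image sensors (binocular vision)."""
    if disparity_px <= 0:
        raise ValueError("the object must appear at different positions in the two views")
    return focal_length_px * baseline_m / disparity_px

def lateral_offset(x_px, cx_px, focal_length_px, depth_m):
    """Horizontal offset of the object relative to the left camera's optical axis."""
    return (x_px - cx_px) / focal_length_px * depth_m

# Example: focal length 800 px, 6 cm baseline, 12 px disparity -> about 4 m away.
```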
  • the headset generates orientation information according to the relative coordinates of the spatial objects around the headset with respect to the headset.
  • the headset can also acquire the posture of the headset, and generate the orientation information of the spatial object relative to the headset according to the posture of the headset and the spatial object information.
  • the headset generates content information based on the orientation information and the type of spatial objects around the headset.
  • after obtaining the orientation information and the content information, the earphone generates the spatial sound data, and plays the spatial sound according to the spatial sound data.
  • for the specific implementation of the earphone performing the foregoing steps, refer to the description of the manner in which the data generating device 50 in the embodiment corresponding to FIG. 5 performs the foregoing steps, which will not be repeated here.
  • the data generating device 60 shown in FIG. 6 as a headset form is only an example. In practical applications, the data generating device 60 may also be represented in the form of a mobile phone, a portable computer, a navigator, or a car.
  • FIG. 7 is a schematic diagram of an implementation of the data generation method provided by the embodiment of the application.
  • the data generation device 70 is specifically represented as a mobile phone, and the audio playback device 71 is specifically represented as a headset.
  • The following takes the case where the mobile phone is configured with a sound sensor as an example to describe the specific implementation process in this application scenario in detail. It should be understood that FIG. 7 is only an example.
  • the mobile phone (that is, an example of the data generating device 70) collects spatial object information corresponding to the spatial objects around the mobile phone through a sound sensor.
  • a time delay estimation positioning method can be used to locate the spatial objects in FIG. 7 based on the data collected by the sound sensors, to obtain the relative coordinates of the spatial objects around the mobile phone with respect to the mobile phone.
  • different types of spatial objects emit sounds with different frequencies, so the object content of a spatial object can be determined based on the frequency of the sound collected from it.
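A time delay estimation positioning method of the kind mentioned above can be sketched with a plain cross-correlation between the signals of two sound sensors; the two-sensor geometry and the function names are assumptions for illustration only:

```python
import numpy as np

def tdoa_seconds(mic_a, mic_b, sample_rate):
    """Time difference of arrival between two sound-sensor signals via cross-correlation;
    a positive value means the sound reached mic_a later than mic_b."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)
    return lag / sample_rate

def tdoa_azimuth_deg(tdoa_s, mic_spacing_m, speed_of_sound=343.0):
    """Bearing of the sound source relative to the broadside of a two-sensor array."""
    s = np.clip(tdoa_s * speed_of_sound / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```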
  • the mobile phone generates orientation information according to the relative coordinates of the space objects around the mobile phone with respect to the mobile phone.
  • the mobile phone can also acquire the posture of the headset, and generate the orientation information of the spatial object relative to the mobile phone according to the posture of the headset and the spatial object information.
  • the mobile phone generates content information based on the location information and the object content of the spatial objects around the mobile phone.
  • after the mobile phone obtains the orientation information and the content information, it generates the spatial sound data.
  • the spatial acoustic data includes location information and content information.
  • the mobile phone after generating the spatial sound data, sends the spatial sound data to the earphone (that is, an example of the audio playback device 71), and the earphone plays the spatial sound through the acoustic-electric transducer module according to the spatial sound data.
  • for the earphone playing the spatial sound according to the spatial sound data, reference may be made to the specific implementation manner of the data generating device 20 and the audio playing device 21 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • the electronic device integrated with the data generation device and the sensor is a mobile phone, and the audio playback device is a headset as an example for description.
  • the mobile phone collects spatial object information corresponding to the spatial objects around the mobile phone through a sensor, and generates content information and orientation information according to the spatial object information.
• The specific implementation of the foregoing steps is similar to the specific implementation of the data generating device 70 in the embodiment corresponding to FIG. 7, and is not repeated here.
  • the data generating device performs a rendering operation according to the content information and the orientation information to obtain spatial sound data.
• The spatial sound data includes at least two mono signals, and the at least two mono signals are sent to the audio playback device; the audio playback device transmits the at least two mono signals to the acoustic-electric transducer module to play the spatial sound.
• The representation of the data generation device as a mobile phone is only an example; in practical applications, the data generation device may also take the form of a portable computer, a navigator, or a car.
• In another implementation, the terminal device integrating the data generation device and the audio playback device is an earphone, and the terminal-side electronic device equipped with a sensor is a car; this case is used as an example for description.
• In this case, the data generating device can be regarded as an independent headset, or as a split device composed of the car and a dual-channel headset located in the car.
  • the headset obtains the spatial object information around the sensor through the sensor on the car, including the relative coordinates of the spatial object relative to the sensor and the object content of the spatial object around the sensor.
  • the car collects data corresponding to the surrounding space objects through the sensor, and the headset receives the data collected by the sensor sent by the car, and generates space object information based on the data collected by the sensor.
  • the car collects data corresponding to the surrounding space objects through the sensor, and generates space object information based on the data collected by the sensor, and the headset receives the space object information sent by the car.
  • the headset generates orientation information based on the relative coordinates of the spatial objects around the sensor relative to the sensor.
  • the headset can also acquire the posture of the headset, and generate the orientation information of the spatial object relative to the headset according to the posture of the headset and the spatial object information.
  • the headset generates content information based on the orientation information and the type of spatial objects around the headset. After the headset generates content information and orientation information, it generates spatial sound data, and then plays the spatial sound.
• The specific implementation manner of the earphone performing the foregoing steps is similar to the specific implementation manner of the data generating device 60 in the embodiment corresponding to FIG. 6, and is not repeated here.
• In practical applications, the earphone can also be replaced with a mobile phone, a portable computer, or a navigator, and the terminal-side electronic device equipped with a sensor can also be an earphone, a portable computer, a navigator, a smart home appliance, or another terminal-side device.
• In another implementation, the data generating device is a mobile phone, the audio playback device is a headset, the terminal-side electronic device equipped with a sensor is a car, and the mobile phone and the headset are mutually independent devices; this case is used as an example for description.
  • the mobile phone obtains the spatial object information around the sensor through the sensor on the car, generates position information and content information according to the spatial object information, and then generates spatial sound data.
  • the spatial sound data includes the position information and content information.
• The specific manner in which the mobile phone performs the foregoing steps is similar to the fourth implementation of the scene in which sensors are used to sense and introduce surrounding spatial objects, and is not repeated here.
• After the mobile phone generates the spatial sound data, it can send the spatial sound data to the headset; the headset performs a rendering operation based on the spatial sound data to generate at least two mono signals, transmits the at least two mono signals to the acoustic-electric transducer module, and plays the spatial sound through the acoustic-electric transducer module.
• In yet another implementation, the data generating device is a mobile phone, the audio playback device is a headset, the terminal-side electronic device equipped with a sensor is a car, and the mobile phone and the headset are mutually independent devices.
  • the mobile phone obtains the spatial object information around the sensor through the sensor in the car, and generates the position information and content information according to the spatial object information.
• After the mobile phone generates the orientation information and the content information, it performs a rendering operation to obtain spatial sound data.
  • the spatial acoustic data includes at least two mono signals, and the at least two mono signals are sent to the headset.
• The headset transmits the at least two mono signals to the acoustic-electric transducer module to play the spatial sound.
• In this case, the data generation device can be regarded as an independent mobile phone, or as a split device composed of the car and a mobile phone located in the car.
  • the mobile phone can be replaced with a portable computer or a navigator.
• The terminal-side electronic equipment equipped with sensors can also be represented by other terminal-side devices such as earphones, portable computers, navigators, or smart home appliances.
  • the embodiment of the present invention provides a data generation method.
  • the foregoing method is executed by a data generation device (including but not limited to the data generation device in the corresponding embodiment of FIG. 1 to FIG. 7).
  • the data generation device receives the spatial object information.
  • FIG. 8 is a schematic flowchart of a data generation method provided in an embodiment of the present application.
  • the data generation method provided in an embodiment of the present application may include:
  • Step 801 The data generating device obtains spatial object information.
• The manner in which the data generating apparatus obtains the spatial object information may include: the data generating apparatus receives the spatial object information, or the data generating apparatus collects the spatial object information through a sensor.
• The data generation device can generate spatial sound data according to the received spatial object information, or generate spatial sound data according to the spatial object information collected by the sensor; that is, the data presentation method provided by this solution can be applied in a variety of application scenarios, which extends the application scenarios of this solution and improves its flexibility.
• The spatial object information refers to the description information of the spatial object, which is used to obtain the orientation of the spatial object relative to the data generating device and includes at least description information of the position of the spatial object; the spatial object refers to an object located in three-dimensional space.
• The spatial object information may include, for example, navigation data in text form from the data generating device to the navigation destination, navigation data in the form of an audio stream from the data generating device to the navigation destination, the object content and absolute coordinates of the spatial objects around the data generating device, or the object content and relative coordinates of the spatial objects around the data generating device.
• The receiving methods referred to in the embodiments of this application include, but are not limited to, cellular communication, wireless local area network (Wireless Fidelity, WiFi), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth communication technology, ZigBee communication technology, optical communication, satellite communication, infrared communication, transmission line communication, reception via a hardware interface or a wire on the hardware circuit board, obtaining information from a software module, or reading information from a storage device.
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
• The data generation device receiving spatial object information may include: a navigation application is installed in the data generation device and obtains the navigation starting point and the navigation destination; the navigation starting point and the navigation destination are sent to the navigation server through the navigation application, and the spatial object information in text form sent by the navigation server is received. That is, the data generating device receives the interface data through the navigation application, where the spatial object information in text form carries navigation data from the navigation starting point to the navigation destination.
• What the data generating device sends to the navigation server can be the names of the navigation starting point and the navigation destination, the longitude and latitude coordinates of the navigation starting point and the navigation destination, or other information used to indicate the navigation starting point and the navigation destination.
• The spatial object may be a navigation destination with a spatial location, a traffic sign, a surveillance device, or another navigation-related spatial object.
  • the navigation data from the navigation start point to the navigation destination can be divided into at least one road segment.
  • the textual space object information can include description information for at least one road segment.
• The description information for each road segment includes multiple fields and the field value of each field. The foregoing multiple fields may include a content field; as an example, the content field may be a road section description (instruction) field.
• The foregoing multiple fields may also include a location field; the location field may specifically be a distance field and a turn field, and may also include other fields. Alternatively, the foregoing multiple fields may include only the content field and no location field, which is not limited here.
  • Table 1 is an example of displaying spatial object information in the form of a table, please refer to Table 1 below.
  • Table 1 shows the description information of a section of the road section in the spatial object information.
  • the example in Table 1 is only to facilitate the understanding of the solution, and is not used to limit the solution.
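• Since Table 1 itself is not reproduced here, the following hypothetical record (field names follow the instruction, distance, and turn fields discussed above, and are not taken from any standardized interface) merely illustrates what the description information of one road segment might look like:

```python
# Hypothetical description information for one road segment; the values are illustrative.
road_segment = {
    "instruction": "Turn right after 100 meters",  # content field (road section description)
    "distance": 100,                               # distance field, in meters
    "turn": "right",                               # turn field
}
```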
• The data generating device may also be equipped with a positioning system, such as a global positioning system (GPS), and the data generating device obtains the spatial position of the data generating device through the positioning system and determines it as the navigation starting point.
• Alternatively, the data generating device receives the navigation starting point input by the user. More specifically, when the data generating device is a device with a display interface, such as a headset with a projection function, a mobile phone, a portable computer, a navigator, or a car, the data generating device may receive the navigation starting point entered by the user through the display interface of the navigation application.
• When the data generating device is a headset without a projection function or other equipment without a display interface, the data generating device may also be equipped with a microphone, and the data generating device receives the navigation starting point in the form of voice input by the user through the microphone.
  • the data generating device receives the navigation destination input by the user. For a specific implementation manner, refer to the manner in which the data generating device receives the navigation starting point input by the user, which will not be repeated here.
  • the navigation application in the data generating device may convert the spatial object information in text form into an audio stream form
  • the data generating device obtains the spatial object information in the form of audio stream from the navigation application program, that is, the data generating device receives the spatial object information in the form of audio stream through the navigation application program (also called audio stream data)
  • the audio stream data may be pulse code modulation (PCM) audio stream data, or may also be audio stream data in other formats.
  • the operating system of the data generating device can obtain the spatial object information in the form of audio streams output by the navigation application through the function AudioPolicyManagerBase::getOutput (that is, a function of the audio policy implementation layer).
• The data generation device receiving the spatial object information may include: the data generation device obtains the first coordinates corresponding to the spatial position of the data generation device, sends the first coordinates to the data server, and receives the spatial object information sent by the data server.
  • the aforementioned spatial object information includes the object content and second coordinates of the spatial objects around the data generating device.
  • the first coordinate may be the latitude and longitude coordinates (also called absolute coordinates) of the data generating device
• The second coordinates may be the absolute coordinates corresponding to the spatial position of the spatial object, or the relative coordinates between the spatial position of the spatial object and the spatial position of the data generating device.
  • the aforementioned spatial object information refers to map data corresponding to the spatial object, that is, the data generating device receives the map data stored on the network side.
  • the spatial object included in the spatial object information may be a library, a pizzeria, a construction site, or other entity objects located in a three-dimensional space as shown in FIG. 5.
  • the data generating device is equipped with a positioning system, and the data generating device can obtain the first coordinates corresponding to the spatial position of the data generating device through the positioning system, and send the first coordinates corresponding to the spatial position of the data generating device to the data server .
  • the data server may pre-store map data around the data generating device.
  • the aforementioned map data includes the object content of the spatial object around the data generating device and the absolute coordinates corresponding to the spatial position of the spatial object around the data generating device.
• After receiving the first coordinates, the data server acquires the object content and the second coordinates of the spatial objects around the first coordinates, generates the spatial object information, and sends the spatial object information, which includes the object content and the second coordinates of the spatial objects around the first coordinates, to the data generating device; correspondingly, the data generating device receives the aforementioned spatial object information sent by the data server.
• In one case, after receiving the first coordinates corresponding to the spatial position of the data generating device, the data server obtains the object content and the absolute coordinates of the spatial objects around the first coordinates and generates the spatial object information; in this case, the second coordinates are absolute coordinates.
• In another case, the data server may use the first coordinates as the origin and, according to the received first coordinates and the acquired absolute coordinates of the spatial objects around the first coordinates, generate the relative coordinates of those spatial objects with respect to the data generating device; in this case, the second coordinates carried in the spatial object information are relative coordinates.
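• A minimal sketch of such a conversion, assuming an equirectangular approximation for nearby objects; the description does not prescribe any particular conversion:

```python
import math

EARTH_RADIUS_M = 6371000.0

def absolute_to_relative(first_coord, object_coord):
    """Convert the absolute (latitude, longitude) of a spatial object into
    planar (east, north) coordinates in meters relative to the first
    coordinates, using an equirectangular approximation."""
    lat0, lon0 = map(math.radians, first_coord)
    lat1, lon1 = map(math.radians, object_coord)
    east = (lon1 - lon0) * math.cos((lat0 + lat1) / 2) * EARTH_RADIUS_M
    north = (lat1 - lat0) * EARTH_RADIUS_M
    return east, north
```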
• In another implementation, the data generation device receiving the spatial object information may include: the data generation device obtains the first coordinates corresponding to the spatial position of the data generation device, sends the first coordinates to the electronic device on the terminal side, and receives the spatial object information sent by the electronic device on the terminal side, where the spatial object information includes the object content and the second coordinates of the spatial objects around the data generating device.
• For the specific execution steps of the data generation device, refer to the description of the first three implementations of the scene using map data stored on the network side or the terminal side in the above embodiment; the only difference is that the data server in the above embodiment is replaced with the electronic device on the terminal side, which is not repeated here.
• The data generation device collecting spatial object information through the sensor may include: the data generation device sends a signal collection instruction to the sensor through an internal interface to instruct the sensor to collect data, receives the data collected by the sensor, and generates the spatial object information according to the collected data.
• Specifically, the data generating device locates the spatial objects around it according to the collected data to generate the relative coordinates of those spatial objects, and determines the object content of the spatial objects around it according to the data collected by the sensor; for the specific implementation, refer to the description in the embodiments corresponding to FIG. 6 and FIG. 7 above.
• In another implementation, the data generation device collecting spatial object information through the sensor may include: in one case, the terminal-side electronic device equipped with the sensor collects data corresponding to the surrounding spatial objects through the sensor, the data generating device receives the collected data sent by that terminal-side electronic device, and generates the spatial object information according to the collected data; in another case, the terminal-side electronic device equipped with the sensor collects data corresponding to the surrounding spatial objects through the sensor and generates the spatial object information based on the collected data, and the data generating device receives the spatial object information sent by that terminal-side electronic device.
• Further, the data generating device may send a sensor data acquisition request to the terminal-side electronic device equipped with the sensor; in response to the request, that device collects the data corresponding to the surrounding spatial objects through the sensor and then sends the collected data or the spatial object information to the data generating device.
• Alternatively, the terminal-side electronic device equipped with the sensor can actively send the collected data or the spatial object information to the data generating device; the sending may be performed in real time, at preset intervals, at a fixed time point, or in another manner, which is not limited here.
• Further, for the case in which the data generation device receives the spatial object information, the data generating device may always remain in the receiving state for spatial object information, so that the received spatial object information can be converted into spatial sound data in time and the spatial sound can be played in time.
• Alternatively, the data generating device may be provided with a switch control for receiving the user's turn-on and turn-off operations. When the user inputs the turn-on operation through the switch control, the data generation device enters the receiving state for spatial object information; when the user inputs the turn-off operation, the data generating device closes the receiving function for spatial object information and no longer receives spatial object information.
• When the data generating device is an electronic device that can provide a display interface to the user, the switch control can be displayed to the user through the display interface, so as to receive the turn-on or turn-off operation input by the user through the switch control. Alternatively, a physical switch button may be provided on the outside of the data generating device, so that the turn-on and turn-off operations can be input through the switch button.
  • Step 802 The data generating device generates content information and orientation information according to the spatial object information.
  • the data generating device may generate content information and orientation information according to the spatial object information.
  • the content information is used to determine the playback content of the spatial sound
  • the content information includes orientation information.
• When the spatial object is the navigation destination, the content information is used to describe the path planning from the data generating device to the navigation destination; when the spatial object is an event occurring in the space, or a person, an animal, or an object existing in the space, the content information is used to describe the direction of the spatial object relative to the data generating device and the object content of the spatial object.
  • the content information may be "Walk forward 100 meters and then turn right" or "There is a coffee shop 50 meters in front of the left", etc.
• The orientation information may include position information and direction information, which are used to indicate the orientation of the spatial object relative to the terminal device.
  • the position information may or may not carry the height information. Specifically, it may be expressed as Cartesian coordinates, or may be the position information in other formats.
  • the terminal device may be a data generating device, an audio playback device, or a terminal-side electronic device equipped with a sensor. Further, the aforementioned terminal-side electronic device equipped with sensors may be the same device as the data generation device, or the same device as the audio playback device, or may be an independent device independent of the data generation device and the audio playback device.
  • the spatial object information may include navigation data in text form, and the navigation data in text form includes field values of content fields.
  • step 802 may include: the data generating device generates orientation information and content information according to the field value of the content field included in the spatial object information.
• Alternatively, step 802 may include: the data generating device generates the content information according to the field value of the content field, and generates the orientation information according to the field value of the location field, where the orientation information is the relative coordinates of the spatial object.
• As an example, if the turn field indicates that the spatial object is on the right of the terminal device but gives no specific distance to the right, a default value may be used, for example 10 meters, so the x-axis value is 10; the distance field gives a y-axis value of 100; and since there is no height information, the z-axis value may be set to 0, yielding the orientation information (10, 100, 0) of the spatial object. The content information obtained from the instruction field is "Turn right after 100 meters". It should be understood that the x-axis value can also be -10 or another value.
  • the foregoing examples are only to facilitate understanding of this solution, and are not used to limit this solution.
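• A minimal sketch of this mapping, following the worked example above; the default lateral offset of 10 meters and the function name are illustrative:

```python
DEFAULT_LATERAL_OFFSET_M = 10  # default used when the turn field carries no distance

def fields_to_orientation(instruction, distance_m, turn):
    """Map road-segment field values to (x, y, z) orientation information and
    content information, following the worked example above."""
    if turn == "right":
        x = DEFAULT_LATERAL_OFFSET_M
    elif turn == "left":
        x = -DEFAULT_LATERAL_OFFSET_M
    else:
        x = 0
    y = distance_m  # the distance field gives the forward offset
    z = 0           # no height information, so the z-axis value is 0
    content_info = instruction
    return (x, y, z), content_info

# Example: yields ((10, 100, 0), "Turn right after 100 meters")
print(fields_to_orientation("Turn right after 100 meters", 100, "right"))
```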
• In another case, the spatial object information includes navigation data in the form of an audio stream, the navigation data in audio stream form includes the content information, and the content information includes the orientation information.
• Step 802 may include: the data generating device performs voice recognition on the navigation data in audio stream form to obtain the orientation information and the content information.
  • the position information is the relative coordinates of the space object.
• As an example, if the navigation data in audio stream form is "Walk forward 100 meters and then turn right", the keyword "right" is extracted.
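• A minimal sketch of such keyword extraction on the recognized text; the regular expression and keyword list are assumptions:

```python
import re

def extract_orientation_keywords(recognized_text):
    """Extract a distance value and a turn keyword from the text produced by
    voice recognition of the audio-stream navigation data."""
    distance_match = re.search(r"(\d+)\s*meters?", recognized_text)
    distance_m = int(distance_match.group(1)) if distance_match else None
    turn = None
    for keyword in ("right", "left"):
        if keyword in recognized_text.lower():
            turn = keyword
            break
    return distance_m, turn

# Example: yields (100, 'right')
print(extract_orientation_keywords("Walk forward 100 meters and then turn right"))
```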
• In another case, step 802 may include: the data generating device generates the orientation information according to the spatial position of the data generating device and the spatial object information. Specifically, the data generating device uses the first coordinates corresponding to its spatial position (that is, the absolute coordinates of the spatial position of the data generating device) as the coordinate origin, uses the gyroscope or positioning system in the data generating device to determine the forward direction of the user, takes the forward direction of the user as the positive direction of the y-axis to establish a coordinate system, and generates the orientation information according to the absolute coordinates of the surrounding spatial objects (that is, one form of the second coordinates) included in the spatial object information.
• As an example, if the object content of the spatial object is a bookstore, the orientation information obtained from the absolute coordinates of the spatial position of the data generating device and the absolute coordinates of the bookstore is (0, 50, 0), and the content information is that there is a bookstore 50 meters ahead.
  • the spatial object information includes the object content and relative coordinates of the spatial object.
• In this case, the data generating device can extract the relative coordinates from the spatial object information to obtain the orientation information, and generate the content information based on the orientation information and the object content included in the spatial object information. That is, step 802 may include: the data generating device generates the orientation information according to the relative coordinates of the spatial object included in the spatial object information, and generates the content information based on the orientation information and the object content included in the spatial object information.
  • the data generating apparatus generates orientation information according to the posture of the terminal device and the spatial object information.
• The posture may be, for example, 30 degrees to the right, 20 degrees to the left, 15 degrees upward, or another posture.
• When the data generating device is a car, the posture is the orientation of the front of the car; when the data generating device is a mobile phone or a navigator, the posture is the orientation of the screen of the mobile phone or the navigator; when the data generating device is a dual-channel headset, the posture is the face orientation of the user wearing the dual-channel headset; when the data generating device is a split device composed of a car and a mobile phone located in the car, the posture is the orientation of the front of the car or the orientation of the screen of the mobile phone located in the car; and when the data generating device is a split device composed of a car and a dual-channel headset located in the car, the posture is the orientation of the front of the car or the face orientation of the user wearing the dual-channel headset.
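• As an illustration only (the described method does not prescribe any particular computation), orientation information could be adjusted by a posture reduced to a horizontal yaw angle as follows; the sign convention is an assumption:

```python
import math

def rotate_by_posture(orientation, yaw_degrees):
    """Rotate (x, y, z) orientation information about the vertical axis by the
    yaw angle measured for the terminal device (headset, phone screen, or
    vehicle front); height is unaffected by yaw."""
    x, y, z = orientation
    yaw = math.radians(yaw_degrees)
    # Planar rotation of the horizontal components by the yaw angle.
    x_rot = x * math.cos(yaw) - y * math.sin(yaw)
    y_rot = x * math.sin(yaw) + y * math.cos(yaw)
    return (x_rot, y_rot, z)
```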
• This applies to the first and fourth implementations of the scene using map data stored on the network side or the terminal side, and to the first and fourth implementations of the scene using sensors to sense and introduce surrounding spatial objects. Since the data generating device and the audio processing device are configured in the same terminal device, the data generating device measures the posture of the terminal device and then generates the orientation information based on the posture and the spatial object information, so that the sound source position of the spatial sound heard by the end user is consistent with the orientation of the spatial object relative to the terminal device, improving the accuracy of the spatial sound.
• This also applies to the second and third implementations of the pedestrian navigation scene and the car navigation scene, the second and third implementations of the scene using map data stored on the network side or the terminal side, and the second, third, fifth, and sixth implementations of the scene using sensors to sense and introduce surrounding spatial objects.
  • the data generating device and the audio processing device are two independent devices, the data generating device receives the posture sent by the audio playback device, and generates orientation information according to the posture and spatial object information. More specifically, the audio playback device may send the posture of the audio playback device to the data generation device in real time, or it may send the posture of the audio playback device to the data generation device every preset duration.
• The aforementioned preset duration may be 2 seconds, 5 seconds, 10 seconds, or another duration.
• When the data generating device or the audio playback device is a portable device such as a headset, a mobile phone, a portable computer, or a navigator, it is equipped with a gyroscope or another element with a posture measurement function, and uses that element to obtain the posture of the terminal device; here the posture refers to the posture of the headset, mobile phone, portable computer, or navigator.
• When the data generating device or the audio playing device is a car, the posture can be measured through a gyroscope, the steering wheel, or other components configured in the car; the posture of the car may refer to the posture of the front of the car, the posture of the wheels, the posture of the body, or another posture. The car can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement-park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, or the like, which is not particularly limited in the embodiments of this application.
• Optionally, the embodiment may include step 803, in which the data generating device judges whether the spatial object information meets a preset condition; if it is satisfied, step 804 is performed; if it is not satisfied, step 805 is performed.
• The data generation device determines whether the spatial object information meets a preset condition, where spatial object information that meets the preset condition is spatial object information including a preset spatial location area, a preset spatial direction, or preset object content.
• The preset spatial location area refers to a spatial location area preset relative to the spatial location of the data generating device or the audio processing device. As an example, it may be an area within a certain number of meters of the spatial location of the data generating device or the audio processing device; as another example, it may be an area with a radius of 10 meters centered on the spatial position of the data generating device or the audio processing device, which is not limited here.
  • the preset spatial direction may be a position direction relative to the spatial position of the data generating device or the audio processing device.
• When the data generating device is a dual-channel headset, the preset spatial direction may be the face orientation of the user wearing the dual-channel headset, which can be measured by the gyroscope, inertial sensor, or other components configured in the dual-channel headset; since a user generally looks toward what he wants to see, setting the face orientation of the user wearing the dual-channel headset as the preset spatial direction helps improve the accuracy of determining the spatial objects of interest. When the data generating device is a mobile phone or a navigator, the preset spatial direction can be the direction of movement of the mobile phone or navigator; when the data generating device is a car, the preset spatial direction may be the direction of the front of the car.
• Alternatively, the preset spatial direction may be the front, rear, left, right, or another direction of the data generating device or audio processing device, or it may be an absolute spatial direction; as an example, the preset spatial direction can be east, west, south, north, or another direction, which is not limited here.
• The preset object content may be input by the user in advance; as an example, the user may input coffee shops, bookstores, or other types of spatial objects in advance as objects of interest. It may also be determined independently by the data generation device; as an example, object content with a high risk factor, such as construction sites, can be set as the preset object content. It should be understood that the examples here are only intended to facilitate understanding of this solution. The specific preset spatial location area, preset spatial direction, and/or the specific meaning of the preset object content can be determined by those skilled in the art according to the actual product form, and are not limited here. Specifically, the manner in which the data generating device performs the judgment is as follows.
• Step 803 may include: the data generation device determines whether the object content indicated by the content information is the preset object content, and if the judgment result is yes, the data generation device determines that the spatial object information meets the preset condition.
• Alternatively, step 803 includes one or more of the following: the data generating device determines whether the spatial position indicated by the orientation information is located in the preset spatial location area, or the data generating device determines whether the spatial position indicated by the orientation information is located in the preset spatial direction, or the data generating device determines whether the object content indicated by the content information is the preset object content. If the judgment result of any one or more of the foregoing is yes, the data generating device determines that the spatial object information satisfies the preset condition.
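• A minimal sketch of such a judgment, assuming illustrative defaults (a 10-meter radius, a simplified ±30-degree direction tolerance, and example preset object contents); none of these values are mandated by the description:

```python
import math

def meets_preset_condition(orientation, content,
                           preset_radius_m=10.0,
                           preset_direction_deg=None,
                           preset_contents=("construction site", "coffee shop")):
    """Return True when the spatial object information meets any preset
    condition: inside the preset spatial location area, lying in the preset
    spatial direction, or matching the preset object content."""
    x, y, _ = orientation
    in_area = math.hypot(x, y) <= preset_radius_m
    in_direction = False
    if preset_direction_deg is not None:
        # Bearing measured from the forward (y) axis toward the right (x) axis;
        # the fixed tolerance and the lack of 360-degree wrap-around are simplifications.
        bearing = math.degrees(math.atan2(x, y))
        in_direction = abs(bearing - preset_direction_deg) <= 30.0
    matches_content = any(p in content.lower() for p in preset_contents)
    return in_area or in_direction or matches_content
```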
  • the embodiment of the present invention may include step 804 in which the data generating apparatus generates volume increase indication information.
• Specifically, after determining that the spatial object information meets the preset condition, the data generating device generates volume increase indication information.
  • the volume increase indication information is used to indicate to increase the volume of the spatial sound corresponding to the spatial object information.
• The volume increase indication information may carry the volume value that needs to be increased, in which case the value is positive, for example 3 dB, 8 dB, 10 dB, 15 dB, or another value; alternatively, the volume increase indication information may not carry the volume value that needs to be increased, which is not limited here.
  • the embodiment of the present invention may include step 805, in which the data generating apparatus generates volume reduction indication information.
• Specifically, after determining that the spatial object information does not meet the preset condition, the data generating device generates volume reduction indication information.
• The volume reduction indication information is used to indicate that the volume of the spatial sound corresponding to the spatial object information should be decreased. The volume reduction indication information may carry the volume value that needs to be decreased, in which case the carried value may be a negative value, for example -3 dB, -8 dB, -10 dB, -15 dB, or another value; alternatively, the volume reduction indication information may not carry the volume value that needs to be decreased, which is not limited here.
  • Step 806 The data generating device generates spatial acoustic data.
  • the data generating device generates spatial acoustic data after obtaining the orientation information and content information.
  • the spatial sound data is used to indicate the generation of spatial sound.
• Specifically, the spatial sound data includes the orientation information and the content information, or the spatial sound data includes at least two mono signals generated according to the orientation information and the content information, where the at least two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the mono signals to generate the spatial sound.
• When the spatial sound data includes the orientation information and the content information, it may also include volume information and the like.
  • spatial sound data is specifically represented as a spatial audio object, and various information included in the spatial audio object is specifically represented as an array field.
• As an example, the ETSI TS 103 223 standard specifies object-based audio immersive sound metadata and a bitstream; it supports describing the position coordinates of the listening user, the position coordinates of the sound source, and the positional relationship between the sound source and the user. It should be noted that the ETSI TS 103 223 standard is only one reference standard for spatial sound data.
• In this standard, the orientation information in the spatial sound data is specifically represented as a position field, the content information is specifically represented as a content (contentkind) field, and the volume information can be specifically represented as a volume gain (gain) field, among others; they are not exhaustively listed here.
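• As an illustration only (a simplified object, not an exact encoding of the ETSI TS 103 223 bitstream), the fields named above could be assembled as follows:

```python
def build_spatial_audio_object(orientation, content_info, gain_db=0.0):
    """Assemble a simplified spatial audio object carrying the position,
    content (contentkind), and volume gain (gain) fields described above;
    fields not set here would keep their default values."""
    return {
        "position": {"x": orientation[0], "y": orientation[1], "z": orientation[2]},
        "contentkind": content_info,
        "gain": gain_db,  # positive to increase the volume, negative to decrease it
    }

# Example: a bookstore 50 meters ahead, with the volume raised by 8 dB.
obj = build_spatial_audio_object((0, 50, 0), "There is a bookstore 50 meters ahead", gain_db=8.0)
```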
• It should be noted that steps 803 to 805 are optional. If steps 803 to 805 are not executed; or if steps 803 and 804 are configured but step 805 is not, and the result of step 803 is that the spatial object information does not satisfy the preset condition; or if steps 803 and 805 are configured but step 804 is not, and the result of step 803 is that the spatial object information satisfies the preset condition, then step 806 may include: the data generating device generates the spatial sound data according to the orientation information and the content information.
• Specifically, in the ETSI TS 103 223 standard, the data generating device can determine the orientation information as the field value of the position field, determine the content information as the field value of the content field, and set the other fields in the spatial audio object to default values to obtain the spatial audio object.
• If volume increase indication information is generated, step 806 includes: the data generating device generates the spatial sound data according to the orientation information, the content information, and the volume increase indication information. Specifically, in the ETSI TS 103 223 standard, after obtaining the orientation information in coordinate form and the content information, the data generating device can determine the orientation information as the field value of the position field and the content information as the field value of the content field. If the volume increase indication information carries the volume value that needs to be increased, that value can be determined as the field value of the volume gain field; if it does not carry the volume value, the field value of the volume gain field can be increased by a preset value, for example 3 dB, 8 dB, 10 dB, 15 dB, or another value.
• In this embodiment of the application, when the spatial object information is spatial object information that meets the preset condition, volume increase indication information is generated to indicate that the volume of the spatial sound corresponding to the spatial object information should be increased. Increasing the playback volume attracts the user's attention and prevents the user from missing spatial objects of the preset object content, which helps improve the safety of the navigation process; it also prevents the user from missing spatial objects of interest, which increases user stickiness of this solution.
• If volume reduction indication information is generated, step 806 includes: the data generating device generates the spatial sound data according to the orientation information, the content information, and the volume reduction indication information. Specifically, in the ETSI TS 103 223 standard, after the orientation information in coordinate form and the content information are obtained, the orientation information can be determined as the field value of the position field and the content information as the field value of the content field. If the volume reduction indication information carries the volume value that needs to be reduced, the field value of the volume gain field can be determined according to that value, and the field value takes a negative value; if it does not carry the volume value, the field value of the volume gain field can be reduced by a preset value, for example -3 dB, -8 dB, -10 dB, -15 dB, or another value.
• The foregoing can be used in the third implementation of the scene using map data stored on the network side or the terminal side and in the third and sixth implementations of the scene using sensors to sense and introduce surrounding spatial objects; it can also be used in the first implementation of the pedestrian navigation scene and the car navigation scene, the first and fourth implementations of the scene using map data stored on the network side or the terminal side, and the first and fourth implementations of the scene using sensors to sense and introduce surrounding spatial objects.
• Step 806 may further include: the data generating device performs a rendering operation according to the content information and the orientation information to generate at least two mono signals, where the at least two mono signals are used to be played simultaneously by the acoustic-electric transducer modules corresponding to the mono signals to generate the spatial sound.
• The rendering specifically means integrating the spatial orientation information into the audio stream data through a specific algorithm or data processing operation, finally generating at least two mono signals that are played simultaneously by the corresponding acoustic-electric transducer modules to generate the spatial sound.
• Specifically, the data generation device or the audio playback device may be pre-configured with a rendering function library. After the spatial sound data is obtained, the left-ear rendering function and the right-ear rendering function corresponding to the orientation information of the spatial sound data are obtained, together with the audio stream data corresponding to the content information of the spatial sound data. The audio stream data corresponding to the content information is rendered with the left-ear rendering function to obtain the left channel signal, and rendered with the right-ear rendering function to obtain the right channel signal, where the left channel signal and the right channel signal are two mono signals. More specifically, if the spatial object information includes navigation data in the form of an audio stream, the content information in audio stream form can be extracted from the spatial object information; if the spatial object information includes navigation data in text form, the content information in the spatial sound data needs to be converted into content information in audio stream form first.
• In one case, the left-ear rendering function and the right-ear rendering function are both head-related impulse response (HRIR) functions. The PCM data corresponding to the content information of the spatial sound data is obtained, together with the left-ear HRIR function and the right-ear HRIR function corresponding to the orientation information of the spatial sound data; the PCM data is convolved with the left-ear HRIR function and the right-ear HRIR function, respectively, to obtain the left channel signal and the right channel signal, which can then be played through the left and right acoustic-electric transducer modules of the audio playback device.
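• A minimal sketch of the HRIR-based rendering described above, assuming the PCM data and the two HRIRs selected for the orientation information are already available as arrays:

```python
import numpy as np

def render_with_hrir(pcm_mono, hrir_left, hrir_right):
    """Render mono PCM audio into a left and a right channel signal by
    convolving it with the left-ear and right-ear HRIRs selected for the
    orientation information of the spatial sound data."""
    left = np.convolve(pcm_mono, hrir_left)
    right = np.convolve(pcm_mono, hrir_right)
    return left, right  # the two mono signals fed to the two acoustic-electric transducer modules
```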
• In another case, the left-ear rendering function and the right-ear rendering function are both head-related transfer functions (HRTF). The PCM data corresponding to the content information of the spatial sound data is obtained, together with the left-ear HRTF and the right-ear HRTF corresponding to the orientation information of the spatial sound data; the PCM data is transformed into the frequency domain to obtain transformed audio stream data, the transformed audio stream data is multiplied by the left-ear HRTF and the right-ear HRTF respectively, and the multiplied signals are transformed back into the time domain to obtain the left channel signal and the right channel signal, which can then be played through the left and right acoustic-electric transducer modules of the audio playback device.
• The foregoing takes the audio playback device being a headset with left and right acoustic-electric transducer modules as an example, only to demonstrate the feasibility of this solution; other cases can be handled by analogy, and the spatial sound generation method is not limited here.
  • the embodiment of the present invention may include step 807, the data generating device or the audio playing device plays the spatial sound according to the spatial sound data.
  • the data generating device may play the spatial sound according to the spatial sound data.
• The spatial sound is a sound whose perceived sound source position corresponds to the orientation information and whose playback content is the content information.
• In one case, the spatial sound data includes the content information and the orientation information, and the data generation device and the audio playback device are separate independent devices. After the data generation device generates the spatial sound data including the content information and the orientation information, it sends that spatial sound data to the audio playback device.
  • the audio playback device performs a rendering operation according to the content information and the orientation information to generate at least two mono signals, transmits the at least two mono signals to the acoustic-electric transducer module, and plays the spatial sound through the acoustic-electric transducer module.
• After the audio playback device generates the at least two mono signals according to the spatial sound data, it can also acquire the posture of the audio playback device in real time, obtain the transformed spatial orientation information according to the posture of the audio playback device and the spatial sound data, re-render the content information in audio stream form, transmit the at least two mono signals obtained by the re-rendering operation to the acoustic-electric transducer module, and play the spatial sound through the acoustic-electric transducer module. The re-rendering means incorporating the transformed spatial orientation information into the audio stream data through a specific algorithm or data processing operation, finally generating at least two mono signals.
  • the number of the mono signals included in the at least two mono signals is the same as the number of the acoustic-electric transducer modules included in the audio playback device.
• In another case, the data generation device and the audio playback device are integrated in the same device. If the spatial sound data includes the content information and the orientation information, the data generation device needs to first generate at least two mono signals according to the content information and the orientation information, transmit the at least two mono signals to the acoustic-electric transducer module through the internal interface, and play the spatial sound through the acoustic-electric transducer module.
  • the internal interface can be specifically represented as a wire on a hardware circuit board.
• Further, the data generation device can directly use the gyroscope, the car steering wheel, or another posture measurement component to obtain the posture of the audio playback device, re-render the content information in audio stream form according to that posture, and, after obtaining the at least two mono signals from the re-rendering operation, transmit them to the audio playback device through the internal interface to play the spatial sound.
• In another case, the spatial sound data includes at least two mono signals generated according to the content information and the orientation information; this applies, for example, to the third implementation of the scene using map data stored on the network side or the terminal side.
  • the data generating device sends at least two mono signals to the audio playback device, and the audio playback device inputs the at least two mono signals to the acoustic-electric transducer module to play spatial sound.
  • the audio playback device may obtain the posture of the audio playback device and send the posture of the audio playback device to the data generation device.
• The data generation device re-renders the content information in audio stream form according to the posture of the audio playback device and sends the at least two mono signals obtained through the re-rendering operation to the audio playback device; the audio playback device inputs them to the acoustic-electric transducer module to play the spatial sound.
• This applies to the first and fourth implementations of the scene using map data stored on the network side or the terminal side, and to the first and fourth implementations of the scene using sensors to sense and introduce surrounding spatial objects.
• When the spatial sound data includes the at least two mono signals, the data generating device transmits them to the acoustic-electric transducer module through the internal interface and plays the spatial sound through the acoustic-electric transducer module.
• Optionally, the playback volume of the left channel signal and the right channel signal may also be adjusted according to the volume information of the spatial sound data. Specifically, if the data generating device generates the volume increase indication information, the playback volume of the left channel signal and the right channel signal is increased; if the data generating device generates the volume reduction indication information, the playback volume of the left channel signal and the right channel signal is decreased.
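• A minimal sketch of applying such a volume indication, assuming the channel signals are numeric (e.g. NumPy) arrays and the indication carries a dB value (positive for an increase, negative for a decrease):

```python
def apply_volume_indication(left, right, gain_db):
    """Scale the left and right channel signals by the gain carried in the
    volume increase or decrease indication information."""
    factor = 10.0 ** (gain_db / 20.0)  # convert the dB adjustment to a linear factor
    return left * factor, right * factor
```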
• In this embodiment of the application, when navigation data is played, the spatial sound data is generated according to the orientation information and content information of the navigation destination, so what is played according to the spatial sound data is a spatial sound whose sound source playback position is consistent with the orientation information of the navigation destination; that is, the user can determine the correct forward direction from the playback position of the sound source of the spatial sound heard. This playback method is more intuitive, and the user does not need to open the map frequently to confirm whether the forward direction is correct, so the operation is simple and the efficiency of the navigation process is improved. In addition, when the spatial object is another type of object, this provides a more intuitive and efficient data presentation method.
  • FIG. 9 is a schematic flowchart of a data generation method provided in an embodiment of the present application.
  • the data generation method provided in an embodiment of the present application may include:
  • Step 901 The data generating device obtains spatial object information.
  • Step 902 The data generating device generates content information and orientation information according to the spatial object information.
• Optionally, the embodiment may include step 903, in which the data generating device judges whether the spatial object information meets a preset condition; if it is satisfied, step 904 is performed; if it is not satisfied, the process ends.
• Specifically, the data generation device judges whether the spatial object information meets the preset condition; if it is satisfied, step 904 can be performed; if it is not satisfied, spatial sound data is no longer generated according to that spatial object information, and step 901 can be re-entered to process the next piece of spatial object information.
  • the embodiment of the present invention may include step 904 in which the data generating apparatus generates volume increase indication information.
  • the specific implementation manner of steps 901 to 904 performed by the data generating apparatus is similar to the specific implementation manner of steps 801 to 804 in the embodiment corresponding to FIG. 8, and details are not described herein.
  • Step 905 The data generating device generates spatial acoustic data.
• It should be noted that steps 903 and 904 are optional. If steps 903 and 904 are not executed, or if step 903 is executed but step 904 is not and the result of step 903 is that the spatial object information satisfies the preset condition, then step 905 includes: the data generating device generates the spatial sound data according to the orientation information and the content information.
• If steps 903 and 904 are executed, step 905 includes: the data generating device generates the spatial sound data according to the orientation information, the content information, and the volume increase indication information.
  • before the spatial sound data is generated, whether the spatial object information meets the preset condition is judged, and the spatial sound data is generated based on the spatial object information only when the judgment result is that the preset condition is met.
  • that is, the spatial object information is screened, which not only avoids the waste of computing resources on spatial object information that does not meet the preset condition, but also avoids excessive interruption to the user and increases the user stickiness of the solution (a sketch of this screening follows below).
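A hedged sketch of this screening step is given below. The three checks (preset spatial location area, preset spatial direction, preset object content) follow the description; the concrete data structure, thresholds, and axis convention (x to the right, y forward, in metres) are assumptions for illustration only.

```python
import math
from dataclasses import dataclass

@dataclass
class SpatialObjectInfo:
    x: float       # metres to the right of the device
    y: float       # metres in front of the device
    content: str   # object content, e.g. "coffee shop" or "construction site"

def meets_preset_condition(info: SpatialObjectInfo,
                           preset_contents: set[str],
                           area_radius_m: float = 10.0,
                           preset_direction_deg: float = 0.0,
                           direction_tolerance_deg: float = 30.0) -> bool:
    """True if the object lies in the preset location area, lies along the
    preset direction, or matches the preset object content."""
    in_area = math.hypot(info.x, info.y) <= area_radius_m
    bearing = math.degrees(math.atan2(info.x, info.y))  # 0 deg = straight ahead
    delta = abs((bearing - preset_direction_deg + 180.0) % 360.0 - 180.0)
    in_direction = delta <= direction_tolerance_deg
    has_preset_content = info.content in preset_contents
    return in_area or in_direction or has_preset_content
```

Only spatial object information for which this check succeeds would go on to spatial sound generation; the rest is skipped and the next piece of spatial object information is processed.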
  • Optionally, the embodiment may include step 906: the audio playback device or the data generating device plays the spatial sound according to the spatial sound data.
  • FIG. 10 is a schematic diagram of a structure of a data generating device provided by an embodiment of the application.
  • the data generating device 100 includes: an obtaining module 1001 and a generating module 1002.
  • the obtaining module 1001 is used to obtain spatial object information, where the spatial object information is used to obtain the orientation information of the spatial object relative to the data generating device.
  • for a specific implementation, refer to the description of step 801 in the embodiment corresponding to FIG. 8.
  • the generating module 1002 is used to generate content information and orientation information based on the spatial object information, where the orientation information is used to indicate the orientation of the spatial object pointed to by the spatial object information relative to the data generating device, and the content information is used to describe the spatial object.
  • the generating module 1002 is also used to generate spatial sound data according to the orientation information and content information.
  • the spatial sound data is used to play the spatial sound.
  • the sound source position of the spatial sound corresponds to the orientation information.
  • for a specific implementation, refer to the description of step 802 in the embodiment corresponding to FIG. 8 and the description of step 905 in the embodiment corresponding to FIG. 9; for details, refer to the descriptions in the foregoing method embodiments of this application, which are not repeated here.
  • the generating module 1002 when navigation data is played, the generating module 1002 generates spatial sound data according to the orientation information and content information of the navigation destination.
  • the generated spatial sound data indicates that a spatial sound is played, and the playback position of the sound source corresponding to the spatial sound is consistent with the orientation information of the navigation destination, that is, the user can determine the correct forward direction from the playback position of the sound source of the spatial sound heard.
  • the presentation is more intuitive, and there is no need to open the map frequently to confirm whether one's forward direction is correct; the operation is simple, and the efficiency of the navigation process is improved. In addition, when the spatial object is another type of object, this provides a more intuitive and efficient data presentation manner.
  • the spatial sound data includes the orientation information and the content information; or, the spatial sound data includes at least two mono signals generated according to the orientation information and the content information, where the at least two mono signals are played simultaneously by the acousto-electric transducer modules corresponding to the mono signals to produce the spatial sound (a rendering sketch is given below).
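A rough sketch of how one audio stream can be rendered into two mono signals, assuming an HRIR (head-related impulse response) pair looked up by source azimuth, as the embodiments describe for headset playback; the hrir_database layout below is a placeholder, not an API defined here.

```python
import numpy as np

def render_binaural(mono_pcm: np.ndarray, azimuth_deg: float,
                    hrir_database: dict[int, tuple[np.ndarray, np.ndarray]]
                    ) -> tuple[np.ndarray, np.ndarray]:
    """Convolve the content audio with the left/right HRIRs for the source
    direction, yielding the two mono signals that the two acousto-electric
    transducer modules play simultaneously to produce the spatial sound."""
    hrir_left, hrir_right = hrir_database[int(round(azimuth_deg)) % 360]
    left_channel = np.convolve(mono_pcm, hrir_left)
    right_channel = np.convolve(mono_pcm, hrir_right)
    return left_channel, right_channel
```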
  • the generating module 1002 is specifically configured to generate orientation information according to at least one of the position or posture of the data generating device and the spatial object information.
  • the generating module 1002 generates the orientation information according to the posture and the spatial object information, so that the sound source position of the spatial sound heard by the end user is consistent with the orientation of the spatial object relative to the terminal device, which improves the accuracy of the spatial sound (see the sketch below).
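To make the posture correction concrete, the sketch below computes the source azimuth in the device frame from the object's coordinates and the device yaw; the coordinate convention (x to the right, y forward, yaw clockwise-positive) is an assumption for illustration.

```python
import math

def azimuth_relative_to_device(obj_x: float, obj_y: float,
                               device_yaw_deg: float) -> float:
    """Bearing of the spatial object as seen from the device, so that the
    rendered sound source stays aligned with the object's real direction
    even when the device (or the user's head) turns."""
    bearing_ref = math.degrees(math.atan2(obj_x, obj_y))  # bearing in the reference frame
    relative = bearing_ref - device_yaw_deg                # subtract the device posture
    return (relative + 180.0) % 360.0 - 180.0              # wrap to (-180, 180]

# e.g. an object straight to the right (90 deg) heard after the user has
# turned 90 deg to the right is rendered straight ahead (0 deg).
```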
  • the obtaining module 1001 is specifically configured to receive the spatial object information; for a specific implementation, refer to the description of step 801 in the embodiment corresponding to FIG. 8, or to the descriptions of the walking navigation scenario, the in-vehicle navigation scenario, and the scenario of introducing surrounding spatial objects by using map data stored on the network side or the terminal side.
  • alternatively, the obtaining module 1001 collects the spatial object information through a sensor; for a specific implementation, refer to the description of step 801 in the embodiment corresponding to FIG. 8, or to the description of the scenario of sensing and introducing surrounding spatial objects through sensors.
  • the generation module 1002 can apply the data presentation mode provided by this solution in a variety of application scenarios, which expands the application scenarios of this solution and improves the implementation flexibility of this solution.
  • the obtaining module 1001 is specifically configured to receive the spatial object information in at least one of the following three ways: receiving audio stream data generated by an application program; receiving interface data generated by an application program; or receiving map data stored on the network side or the terminal side. For a specific implementation, refer to the description of step 801 in the embodiment corresponding to FIG. 8 (a parsing sketch for the interface-data case is given below).
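For the interface-data case, a sketch of turning one text-form navigation segment into orientation and content information might look as follows; the distance/turn/instruction fields mirror the example table in the description, and the default 10 m lateral offset is the same assumption used there.

```python
def navigation_segment_to_spatial_info(segment: dict) -> tuple[tuple[float, float, float], str]:
    """Build (orientation, content) from a text-form navigation segment such as
    {"distance": 100, "turn": "right", "instruction": "turn right after 100 m"}."""
    lateral = {"right": 10.0, "left": -10.0}.get(segment.get("turn", ""), 0.0)
    forward = float(segment.get("distance", 0.0))
    orientation = (lateral, forward, 0.0)   # (x, y, z); no height information, so z = 0
    content = segment["instruction"]        # text that will be played as the spatial sound
    return orientation, content

# navigation_segment_to_spatial_info(
#     {"distance": 100, "turn": "right", "instruction": "turn right after 100 m"})
# -> ((10.0, 100.0, 0.0), "turn right after 100 m")
```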
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
  • the generating module 1002 is specifically used to generate the content information and the orientation information according to the spatial object information when it is determined that the spatial object information meets the preset condition; for a specific implementation, refer to the description of steps 903 to 905 in the embodiment corresponding to FIG. 9.
  • the generating module 1002 judges whether the spatial object information meets the preset condition before generating the spatial sound data, and generates the spatial sound data based on the spatial object information only when the judgment result is that the preset condition is met; that is, the spatial object information is screened, which not only avoids the waste of computing resources on spatial object information that does not meet the preset condition, but also avoids excessive interruption to the user and improves the user stickiness of the solution.
  • the generating module 1002 is also used to generate volume increase indication information when it is determined that the spatial object information meets the preset condition, where the volume increase indication information is used to indicate that the volume of the spatial sound corresponding to the spatial object information meeting the preset condition is to be increased.
  • the generating module 1002 can increase the playback volume for a spatial object with preset object content to attract the user's attention and prevent the user from missing that spatial object, which helps improve the safety of the navigation process; it can also prevent the user from missing spatial objects of interest, increasing the user stickiness of the solution.
  • the generating module 1002 is also used to generate volume decrease indication information when it is determined that the spatial object information meets the preset condition, where the volume decrease indication information is used to indicate that the volume of the spatial sound corresponding to that spatial object information is to be decreased; for a specific implementation, refer to the description of steps 803 and 805 in the embodiment corresponding to FIG. 8.
  • the spatial object information that satisfies the preset condition is the spatial object information including the preset spatial location area, the preset spatial direction, or the preset object content.
  • the generating module 1002 is specifically configured to perform a rendering operation on the audio stream data corresponding to the content information according to the orientation information, the content information, and the posture of the audio playback device, to generate the spatial sound data, where the spatial sound data includes at least two mono signals generated according to the orientation information and the content information (a sketch of this step follows below).
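A compact sketch of this rendering step, folding the playback-device posture into the direction before the convolution; the headset yaw source and the hrir_database lookup are illustrative assumptions, not interfaces defined by the embodiments.

```python
import math
import numpy as np

def render_with_playback_posture(mono_pcm: np.ndarray, obj_x: float, obj_y: float,
                                 headset_yaw_deg: float,
                                 hrir_database: dict[int, tuple[np.ndarray, np.ndarray]]
                                 ) -> tuple[np.ndarray, np.ndarray]:
    """Fold the audio playback device's posture into the rendering so the
    perceived source direction stays fixed in space as the wearer turns."""
    azimuth = math.degrees(math.atan2(obj_x, obj_y)) - headset_yaw_deg
    hrir_left, hrir_right = hrir_database[int(round(azimuth)) % 360]
    return np.convolve(mono_pcm, hrir_left), np.convolve(mono_pcm, hrir_right)
```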
  • the data generating device 100 includes at least one of a headset, a mobile phone, a portable computer, a navigator, or a car.
  • when the audio playback device is a two-channel (binaural) headset, the preset spatial direction is the face orientation of the user wearing the headset, and the audio playback device is used to play the spatial sound.
  • the data generating device 100 may specifically be the terminal device in the embodiment corresponding to FIG. 1, FIG. 5, or FIG. 6, or the data generating device in FIG. 2 to FIG. 4 and FIG. 7, which is not limited here. It should be noted that the information exchange and execution processes among the modules/units in the data generating device 100 are based on the same concept as the method embodiments corresponding to FIG. 1 to FIG. 9; for details, refer to the descriptions in the foregoing method embodiments of this application, which are not repeated here.
  • the data generating device 100 may be one device or two different devices, where the step of generating the content information and the orientation information is performed by one device, and the step of generating the spatial sound data based on the content information and the orientation information is performed by the other device.
  • FIG. 11 is a schematic structural diagram of a data generation device provided by an embodiment of this application.
  • the data generation device 1100 may specifically be a virtual reality (VR) device, a mobile phone, a tablet, a laptop computer, a smart wearable device, a monitoring data processing device, a radar data processing device, or the like, which is not limited here.
  • the data generating device 1100 may be deployed with the data generating device 100 described in the embodiment corresponding to FIG. 10 for implementing the functions of the data generating device in the corresponding embodiment of FIG. 1 to FIG. 9.
  • the data generating device 1100 includes a receiver 1101, a transmitter 1102, a processor 1103, and a memory 1104 (the number of processors 1103 in the data generating device 1100 may be one or more; one processor is taken as an example in FIG. 11), where the processor 1103 may include an application processor 11031 and a communication processor 11032.
  • the receiver 1101, the transmitter 1102, the processor 1103, and the memory 1104 may be connected by a bus or other means.
  • the memory 1104 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1103. A part of the memory 1104 may also include a non-volatile random access memory (NVRAM).
  • the memory 1104 stores operation instructions executable by the processor, executable modules, or data structures, or a subset thereof, or an extended set thereof.
  • the operating instructions may include various operating instructions for implementing various operations.
  • the processor 1103 controls the operation of the data generating device.
  • the various components of the data generating device are coupled together through a bus system, where the bus system may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus.
  • for clarity of description, the various buses are all referred to as the bus system in the figure.
  • the method disclosed in the foregoing embodiment of the present application may be applied to the processor 1103 or implemented by the processor 1103.
  • the processor 1103 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method can be completed by an integrated logic circuit of hardware in the processor 1103 or instructions in the form of software.
  • the aforementioned processor 1103 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 1103 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 1104, and the processor 1103 reads the information in the memory 1104, and completes the steps of the foregoing method in combination with its hardware.
  • the receiver 1101 can be used to receive input digital or character information, and to generate signal inputs related to related settings and function control of the data generating device.
  • the transmitter 1102 can be used to output digital or character information through the first interface.
  • the transmitter 1102 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group.
  • the transmitter 1102 may also include a display device such as a display screen.
  • the processor 1103 is configured to execute the data generation method executed by the data generation device in the embodiment corresponding to FIG. 1 to FIG. 9.
  • the application processor 11031 is used to obtain spatial object information, where the spatial object information is used to obtain the orientation information of the spatial object relative to the data generating device; and to generate content information and orientation information according to the spatial object information, where the orientation information is used to indicate the orientation of the spatial object pointed to by the spatial object information relative to the data generating device, and the content information is used to describe the spatial object.
  • spatial sound data is generated according to the orientation information and the content information, and the spatial sound data is used to play a spatial sound signal whose sound source position corresponds to the orientation information.
  • the application processor 11031 is specifically configured to generate the orientation information according to at least one of the position or posture of the data generating apparatus and the spatial object information.
  • the application processor 11031 is specifically configured to receive spatial object information, or to collect spatial object information through a sensor.
  • the application processor 11031 is specifically configured to receive spatial object information in at least one of the following three ways: receiving audio stream data generated by an application program, or receiving interface data generated by an application program, Or, receive the map data stored on the network side or the terminal side.
  • the sensor includes at least one of a photosensitive sensor, a sound sensor, an image sensor, an infrared sensor, a thermal sensor, a pressure sensor, or an inertial sensor.
  • the application processor 11031 is specifically configured to generate content information and orientation information according to the spatial object information when it is determined that the spatial object information is spatial object information meeting preset conditions.
  • the application processor 11031 is further configured to generate volume increase indication information when it is determined that the spatial object information meets the preset condition, where the volume increase indication information is used to indicate that the volume of the spatial sound corresponding to that spatial object information is to be increased.
  • the spatial object information that satisfies the preset condition is the spatial object information including the preset spatial location area, the preset spatial direction, or the preset object content.
  • when the audio playback device is a two-channel (binaural) headset, the preset spatial direction is the face orientation of the user wearing the headset, and the audio playback device is used to play the spatial sound.
  • the data generating device 1100 may be one device or two different devices, wherein the step of generating content information and orientation information is performed by one device, and the step of generating spatial acoustic data according to the content information and orientation information is performed by another device.
  • the embodiments of the present application also provide a computer-readable storage medium in which a computer program is stored; when the program runs on a computer, the computer is caused to execute the steps performed by the data generating device in the methods described in the foregoing embodiments shown in FIG. 1 to FIG. 9.
  • the embodiments of the present application also provide a computer program product, which when running on a computer, causes the computer to execute the steps performed by the data generating device in the method described in the foregoing embodiments shown in FIGS. 1 to 9.
  • the embodiments of the present application also provide a chip system, which includes a processor configured to support the network device in implementing the functions involved in the foregoing aspects, for example, sending or processing the data and/or information involved in the foregoing methods.
  • the chip system further includes a memory, and the memory is used to store necessary program instructions and data of the network device.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Library & Information Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephone Function (AREA)
  • Stereophonic System (AREA)

Abstract

A data generation method and apparatus, applied to the field of intelligent terminal technologies, providing a more intuitive and efficient manner of presenting spatial objects. A data generating apparatus first obtains spatial object information, then generates content information and orientation information according to the spatial object information, the orientation information being the orientation, relative to the data generating apparatus, of the spatial object pointed to by the spatial object information, and then generates spatial sound data according to the orientation information and the content information, the spatial sound data being used to play a spatial sound whose sound source position corresponds to the orientation information.

Description

一种数据生成方法及装置 技术领域
本申请实施例涉及智能终端技术领域,尤其涉及一种数据生成方法及装置。
背景技术
随着移动互联网技术的发展,具有导航功能的智能电子设备,例如手机,给人们的生活带来了很大的便利。智能电子设备在提供导航服务时,一般基于用户输入的起始点和终点进行路径规划,并通过语音播放的形式输出路径规划结果。
但由于语音播放的提示方式不够直观,导致当用户不熟悉当地环境时,无法准确地将自己所处的位置与地图上的路径匹配好,而导航系统受到定位精度的限制,往往要用户走错一段时间后才能感知并进行偏航提示,尤其在使用规划的路径比较繁琐的导航数据时,用户需要频繁打开地图来确认自己的前进方向是否正确,操作繁琐,用户体验大大降低。
除了导航场景以外,在其他应用场景中也存在信息指示不直观的技术问题。
发明内容
针对上述技术问题,本申请实施例提供了一种数据生成方法及装置,在向用户指示空间对象信息时能够使用户更加直观高效地知晓空间对象的方位信息,较大程度提升了用户体验。
第一方面,本申请实施例提供了一种数据生成方法,可用于智能终端技术领域。数据生成装置获取空间对象信息,其中,空间对象信息用于获取空间对象相对于所述数据生成装置的方位信息,所述空间对象信息例如可以包括从数据生成装置到导航目的地的文本形式的导航数据,也可以包括从数据生成装置到导航目的地的音频流形式的导航数据,也可以包括数据生成装置周围的空间对象的对象内容和绝对坐标,还可以包括数据生成装置周围的空间对象的对象内容和相对坐标;数据生成装置根据空间对象信息生成内容信息和方位信息,方位信息用于指示空间对象信息指向的空间对象相对于数据生成装置的方位,内容信息用于描述空间对象,其中,所述方位信息包括位置信息与方向信息中的至少一项,所述空间对象包括但不限于导航目的地、空间内发生的事件、空间内存在的人、动物或者物体,在空间对象为导航目的地时,内容信息用于描述从数据生成装置到导航目的地的路径规划,在空间对象为空间内发生的事件、空间内存在的人、动物或者物体时,内容信息用于描述空间对象相对于数据生成装置的方向和空间对象的对象内容;根据方位信息和内容信息,生成空间声数据,空间声数据用于播放空间声,空间声的声源位置与方位信息对应。本申请实施例应用于导航场景时,会根据导航目的地相对于数据生成装置的方位信息和内容信息生成空间声数据,所述空间声数据用于播放空间声,且所述空间声对应的声源方位与导航目的地相对于数据生成装置的方位一致,所述空间声对应的声音内容与从数据生成装置到导航目的地的路径规划一致。通过使用户根据听到的空间声的声源方位确定空间对象的方位,更为直观地向用户指示了空间对象信息,提升了用户体验。
在一种可能的实现方式中,空间声数据包括方位信息和内容信息,或者,空间声数据 包括根据方位信息和内容信息生成的至少两个单声信号,至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
在一种可能的实现方式中,数据生成装置根据空间对象信息生成方位信息,包括:根据数据生成装置的位置或姿态中的至少一项,以及空间对象信息,生成方位信息。其中,所述方位信息包括位置信息和方向信息。具体的,在空间对象信息包括数据生成装置周围的空间对象的对象内容和空间位置的情况下,数据生成装置可以根据数据生成装置的空间位置和数据生成装置周围的空间对象的空间位置,生成空间对象相对于数据生成装置的方向信息。数据生成装置可以通过陀螺仪、惯性传感器或其他元件测得数据生成装置的姿态,例如:当所述数据生成装置为汽车时,所述姿态为车头的朝向;当所述数据生成装置为手机或导航仪时,所述姿态为手机或导航仪上屏幕的朝向;当所述数据生成装置为双声道耳机时,所述姿态为佩戴所述双声道耳机的用户的面部朝向;当所述数据生成装置为汽车和位于所述汽车内的手机共同构成的分立装置时,所述姿态为所述汽车的车头的朝向,或者为位于所述汽车内的手机的屏幕的朝向;当所述数据生成装置为汽车和位于所述汽车内的双声道耳机共同构成的分立装置时,所述姿态为所述汽车的车头的朝向,或者为佩戴所述双声道耳机的车内用户的面部朝向。根据数据生成装置的姿态和所述数据生成装置周围的空间对象的空间位置,生成空间对象相对于数据生成装置的方位信息。本申请实施例中,根据姿态和空间对象信息生成方位信息,以使得最终用户听到的空间声的声源位置与空间对象相对于数据生成装置的方位信息一致,以提高用户直观感受的准确性。
在一种可能的实现方式中,数据生成装置获取空间对象信息,包括:接收空间对象信息;或者,通过传感器采集空间对象信息。其中,接收方式包括通过蜂窝通信、无线局域网、全球互通微波访问、蓝牙通信技术、紫蜂通信技术、光通信、卫星通信、红外线通信、传输线通信、硬件接口、硬件电路板上的走线接收、从软件模块获取信息或者从存储器件读取信息中的至少一项。传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。数据生成装置可以根据接收到的空间对象信息生成空间声数据,也可以根据通过传感器采集到的空间对象信息生成空间声数据,也即在多种应用场景中都可以适用本方案提供的数据呈现方式,扩展了本方案的应用场景,提高了本方案的实现灵活性。
在一种可能的实现方式中,数据生成装置接收空间对象信息,包括通过以下三种方式中的至少一种接收空间对象信息:数据生成装置接收应用程序生成的音频流数据,并将所述音频流数据确定为接收到的空间对象信息,音频流数据可以为音频流形式的导航数据,对音频流形式的导航数据进行语音识别,得到方位信息和内容信息;或者,数据生成装置接收应用程序生成的接口数据,并将所述接口数据确定为接收到的空间对象信息,接口数据可以为文本形式的导航数据,则数据生成装置根据文本形式的导航数据中包括的内容字段的字段值和位置字段的字段值,生成方位信息和内容信息;或者,数据生成装置接收网络侧或终端侧存储的地图数据,并将所述接口数据确定为接收到的空间对象信息,地图数据包括数据生成装置周围空间对象的对象内容和坐标,前述坐标可以为绝对坐标,也可以为相对坐标,则数据生成装置根据数据生成装置周围空间对象的坐标生成方位信息,根据方位信息和数据生成装置周围空间对象的对象内容生成内容信息。
在一种可能的实现方式中,传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
在一种可能的实现方式中,数据生成装置根据空间对象信息生成内容信息和方位信息,包括:在数据生成装置确定空间对象信息为满足预设条件的空间对象信息的情况下,根据空间对象信息生成内容信息和方位信息。具体的,数据生成装置判断方位信息所指示的空间位置是否位于预设空间位置区域内,或者,数据生成装置判断方位信息所指示的空间位置是否位于预设空间方向上,或者,数据生成装置判断内容信息所指示的对象内容是否为预设对象内容。在前述任一项或多项的判断结果为是的情况下,则数据生成装置确定空间对象信息满足预设条件。本申请实施例中,在生成空间声数据之前,会判断空间对象信息是否满足预设条件,仅在判断结果为满足预设条件的情况下,才会基于空间对象信息生成空间声数据,也即会对空间对象信息进行筛选,既避免了不满足预设条件的空间对象信息造成的计算机资源的浪费,也避免对用户的过度打扰,提高本方案的用户粘度。
在一种可能的实现方式中,方法还包括:在数据生成装置确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,音量增大指示信息用于指示增大与满足预设条件的空间对象信息相对应的空间声的音量,音量增大指示信息中可以携带有需要增大的音量数值。本申请实施例中,对于预设对象内容的空间对象可以增大播放音量,以吸引用户的注意力,避免用户错过预设对象内容的空间对象,有利于提高导航过程的安全性,也可以避免用户错过感兴趣的空间对象,提高本方案的用户粘度。
在一种可能的实现方式中,方法还包括:在数据生成装置确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量减小指示信息,音量减小指示信息用于指示减小与满足预设条件的空间对象信息相对应的空间声的音量,音量减小指示信息中可以携带有需要增大的音量数值。
在一种可能的实现方式中,满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。其中,预设空间位置区域指的是相对于数据生成装置或音频播放装置的空间位置的空间位置区域;预设空间方向可以是相对于数据生成装置或音频播放装置的相对方向,例如在音频播放装置为双声道耳机的情况下,预设空间方向为佩戴双声道耳机的用户的面部朝向,在数据生成装置为手机或导航仪时,预设空间方向为手机或导航仪上移动的方向,在数据生成装置为汽车的情况下,预设空间方向为车头朝向,预设空间方向也可以是绝对的空间位置方向;预设对象内容可以为用户预先输入的,也可以为数据生成装置自主确定的。
在一种可能的实现方式中,在音频播放装置为双声道耳机的情况下,所述预设空间方向为佩戴双声道耳机的用户的面部朝向,佩戴双声道耳机的用户的面部朝向可以根据双声道耳机中配置的陀螺仪、惯性传感器或其他元件测得。
在一种可能的实现方式中,数据生成装置根据方位信息和内容信息,生成空间声数据包括:数据生成装置根据方位信息、内容信息和数据生成装置的姿态,对与内容信息对应的音频流数据执行渲染操作,以生成空间声数据;或者,数据生成装置根据方位信息、内容信息和音频播放装置的姿态,对与内容信息对应的音频流数据执行渲染操作,以生成空间声数据。空间声数据包括根据方位信息和内容信息生成的至少两个单声信号。所述渲染 具体为通过特定的算法或者数据处理操作在音频流数据中融入空间方位信息,最终生成至少两个单声信号,所述至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
第二方面,本申请实施例提供了一种数据生成装置,装置包括获取模块和生成模块。获取模块,用于获取空间对象信息,空间对象信息用于获取空间对象相对于数据生成装置的方位信息,生成模块,用于根据空间对象信息生成内容信息和方位信息,方位信息用于指示空间对象信息指向的空间对象相对于数据生成装置的方位,内容信息用于描述空间对象,生成模块,还用于根据方位信息和内容信息,生成空间声数据,空间声数据用于播放空间声,空间声的声源位置与方位信息对应。
在一种实现方式中,空间声数据包括方位信息和内容信息,或者,空间声数据包括根据方位信息和内容信息生成的至少两个单声信号,至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
在一种可能的实现方式中,生成模块,具体用于根据数据生成装置的位置或姿态中的至少一项,以及空间对象信息,生成方位信息。
在一种可能的实现方式中,获取模块,具体用于接收空间对象信息,或者,通过传感器采集空间对象信息。
在一种可能的实现方式中,获取模块,具体用于通过以下三种方式中的至少一种接收空间对象信息:接收应用程序生成的音频流数据,或者,接收应用程序生成的接口数据,或者,接收网络侧或终端侧存储的地图数据。
在一种可能的实现方式中,传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
在一种可能的实现方式中,生成模块,具体用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,根据空间对象信息生成内容信息和方位信息。
在一种可能的实现方式中,生成模块,还用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,音量增大指示信息用于指示增大与满足预设条件的空间对象信息相对应的空间声的音量。
在一种可能的实现方式中,生成模块,还用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量减小指示信息,音量减小指示信息用于指示减小与满足预设条件的空间对象信息相对应的空间声的音量。
在一种可能的实现方式中,满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。
在一种可能的实现方式中,生成模块,具体用于根据方位信息、内容信息和音频播放装置的姿态,对与内容信息对应的音频流数据执行渲染操作,以生成空间声数据,空间声数据包括根据方位信息和内容信息生成的至少两个单声信号。
在一种可能的实现方式中,数据生成装置包括耳机、手机、便携式电脑、导航仪或者汽车中的至少一项。数据生成装置既可以是由一个独立工作的设备构成的集成的装置,也可以是多个协同工作的不同设备共同构成的分立的装置。
在一种可能的实现方式中,在音频播放装置为双声道耳机的情况下,预设空间方向为 佩戴双声道耳机的用户的面部朝向,音频播放装置用于播放空间声。
对于本申请实施例第二方面提供的数据生成装置的组成模块执行第二方面以及第二方面的各种可能实现方式的具体实现步骤,均可以参考第一方面以及第一方面中各种可能的实现方式中的描述,此处不再一一赘述。
第三方面,本申请实施例提供了一种数据生成装置,包括存储器和处理器,存储器存储计算机程序指令,处理器运行计算机程序指令以执行上述第一方面所述的数据处理生成方法。
在一种可能的实现方式中,数据生成装置还包括收发器,用于接收空间对象信息。
在一种可能的实现方式中,数据生成装置还包括传感器,用于采集空间对象信息。
在一种可能的实现方式中,数据生成装置包括耳机、手机、便携式电脑、导航仪或者汽车中的至少一项。
本申请实施例第三方面中,处理器还可以用于执行第一方面的各个可能实现方式中数据生成装置执行的步骤,具体均可以参阅第一方面,此处不再赘述。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面所述的数据处理生成方法。
第五方面,本申请实施例提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面所述的数据处理生成方法。
第六方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,用于支持服务器或数据处理生成装置实现上述方面中所涉及的功能,例如,发送或处理上述方法中所涉及的数据和/或数据。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存服务器或通信设备必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包括芯片和其他分立器件。
本申请实施例第四方面至第六方面的有益效果,可以参考第一方面。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例一提供的数据生成方法的示意图;
图2为本申请实施例二提供的数据生成方法的示意图;
图3为本申请实施例三提供的数据生成方法的示意图;
图4为本申请实施例提供的数据生成方法的再一种实现方式的示意图;
图5为本申请实施例提供的数据生成方法的又一种实现方式的示意图;
图6为本申请实施例提供的数据生成方法的再一种实现方式的示意图;
图7为本申请实施例提供的数据生成方法的又一种实现方式的示意图;
图8为本申请实施例提供的数据生成方法的一种实现方式的示意图;
图9为本申请实施例提供的数据生成方法的另一种实现方式的示意图;
图10为本申请实施例提供的数据生成装置的一种结构示意图;
图11为本申请实施例提供的数据生成装置的另一种结构示意图。
具体实施方式
本申请实施例提供了一种数据生成方法及相关设备,播放与导航数据中的方位信息对应的空间声,用户可以根据听到的空间声的声源播放位置来确定正确的前进方向,播放方式更为直观,提高了导航过程的效率,当空间对象为其他类型的对象时,提供了一种更为直观且高效的数据呈现方式。
本申请实施例的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请实施例的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
下面结合附图,对本申请实施例的实施例进行描述。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请实施例可应用于各种对空间对象信息进行播放的场景中,其中,空间对象信息中包括针对空间对象的描述信息,空间对象指的是位于三维(3-dimension,3D)立体空间中的对象,在三维空间中有与之对应的空间位置,可以包括三维立体空间中的实体对象和非实体对象,空间对象包括但不限于导航目的地、空间内发生的事件、空间内存在的人、空间内存在的动物或空间内存在的静态物体。具体的,本申请实施例的应用场景包括但不限于步行导航、车载导航、利用网络侧或终端侧存储的地图数据介绍周围空间对象或利用传感器感测并介绍周围空间对象(实际情况不限于这四种典型应用场景)。下面对前述四种典型的应用场景逐个介绍。
下面首先以步行导航场景为例,介绍本申请实施例提供的数据生成方法的三种实现方式。
第一种,请参阅图1,为本申请实施例一提供的数据生成方法的示意图,所述数据生成方法由图1中的数据生成装置10执行。图1中示出了数据生成装置10的耳机形态,应理解,采用耳机作为数据生成装置10仅为一个示例,所述数据生成装置10还可包括其他种类的设备。下面对本应用场景中的具体实现过程进行详细描述。
耳机中可以设置有导航类应用程序,耳机获取导航起始点和导航目的地,并通过前述导航类应用程序将导航起始点和导航目的地发送给导航类服务器11。导航类服务器11利用其中存储的地图数据,确定导航起始点和导航目的地之间的导航数据。耳机通过导航类应用程序接收导航类服务器11发送的导航数据。耳机在获取到导航数据之后,根据导航数据生成内容信息和方位信息,其中,所述方位信息为导航目的地相对于耳机的方位信息。可选地,耳机还可以获取耳机的姿态,根据耳机的姿态和导航数据生成方位信息。
耳机在生成方位信息和内容信息之后,生成空间声数据。可选地,在生成空间声数据之前,耳机还可以判断内容信息所指示的对象内容是否为预设对象内容,在内容信息所指示的对象内容为预设对象内容的情况下,耳机生成音量增大指示信息,则耳机可以根据方位信息、内容信息和音量增大指示信息,生成空间声数据。所述空间声数据包括方位信息和内容信息,或者,所述空间声数据包括根据方位信息和内容信息生成的两个单声信号,所述两个单声信号用于被与所述两个单声信号对应的声电换能模块同时播放以产生空间声。可选地,在内容信息所指示的对象内容不是预设对象内容的情况下,耳机生成音量减小指示信息,则耳机可以根据方位信息、内容信息和音量减小指示信息,生成空间声数据。
可选地,耳机在播放空间声的过程中,还可以根据空间声数据调整空间声的播放音量。具体的,若耳机生成的为音量增大指示信息,则增大空间声的播放音量,若耳机生成的为音量减小指示信息,则减小空间声的播放音量。
可选地,耳机在根据空间声数据生成两个单声信号之前,还可以获取耳机的实时姿态,进而根据耳机的实时姿态对音频流形式的内容信息进行重渲染,将经重渲染后得到的两个单声信号传输至耳机的两个声电换能模块,通过声电换能模块播放空间声,使用户听到的空间声的声源位置与空间对象相对于耳机的实时方位信息一致。
本申请实施例一中,根据导航目的地的方位信息和内容信息生成空间声数据,空间声数据用于播放空间声,空间声的声源的方位与导航目的地相对于耳机的方位一致,空间声的播放内容与从数据生成装置到导航目的地的路径规划一致,从而用户可以根据听到的空间声的声源方位来确定正确的前进方向,信息呈现方式更为直观,无需再频繁打开地图来确认自己的前进方向是否正确,操作简单,提高了导航的便捷性、安全性和用户体验。
需要说明的是,图1中数据生成装置10具体表现为耳机形态仅为一种示例,在实际应用中,数据生成装置10还可以为汽车、手机、便携式电脑、导航仪或其他便携式终端设备。
第二种,具体的,请参阅图2,为本申请实施例二提供的数据生成方法的示意图,所述数据生成方法由图2中的数据生成装置20执行。图2数据生成装置20为手机,音频播放装置21为耳机,手机和耳机为互相独立的设备,应理解,图2仅为一个示例,下面对本应用场景中的具体实现过程进行详细描述。
手机(也即数据生成装置20的一个示例)接收导航类服务器22发送的导航数据。手机在接收到所述导航数据之后,根据空间对象信息生成内容信息和方位信息,进而生成空间声数据。具体实现方式与图1对应实施例中数据生成装置10执行前述步骤的具体实现方式类似,此处不做赘述。
可选地,手机在生成空间声数据之后,本实施例中的空间声数据包括的为方位信息和内容信息,将空间声数据发送给耳机(也即音频播放装置20的一个示例),由耳机根据空间声数据,执行渲染操作,以生成两个单声信号,将两个单声信号传输至声电换能模块,通过耳机的两个声电换能模块播放空间声。可选地,耳机在生成两个单声信号的过程中,还可以根据空间声数据调整空间声的播放音量。具体的,若生成的为音量增大指示信息,则增大空间声的播放音量,若生成的为音量减小指示信息,则减小空间声的播放音量。
可选地,耳机在根据空间声数据生成两个单声信号之前,还可以获取耳机的姿态,根据耳机的姿态对音频流形式的内容信息进行重渲染,将经重渲染操作得到的两个单声信号 传输至耳机的两个声电换能模块,通过声电换能模块播放空间声,使用户听到的空间声的声源位置与空间对象的方位信息一致。
需要说明的是,图2中数据生成装置20具体表现为手机形态仅为一种示例,在实际应用中,数据生成装置20还可以表现为便携式电脑、导航仪或其他便携式终端设备的形态等。
第三种,具体的,请参阅图3,图3为本申请实施例提供的数据生成方法的一种实现方式的示意图,图3中以数据生成装置30为手机,音频播放装置31为耳机,手机和耳机为互相独立的设备,应理解,图3仅为一个示例,下面对本应用场景中的具体实现过程进行详细描述。
手机(也即数据生成装置30的一个示例)接收导航类服务器32发送的导航数据。手机在接收到空间对象信息之后,根据空间对象信息生成内容信息和方位信息,进而根据内容信息和方位信息执行渲染操作,生成空间声数据,本实施例中的空间声数据指的是至少两个单声信号。具体实现方式与图1对应实施例中数据生成装置10执行前述步骤的具体实现方式类似,此处不做赘述。
可选地,手机将两个单声信号发送给耳机(也即音频播放装置31的一种示例),耳机将两个单声信号输入至声电换能模块,以播放空间声。可选地,耳机可以获取耳机的姿态,并将耳机的姿态发送给手机,由手机根据耳机的姿态对音频流形式的内容信息进行重渲染,手机将经重渲染操作得到的两个单声信号发送给耳机,耳机将两个单声信号输入至声电换能模块,以播放空间声。
需要说明的是,图3中数据生成装置30具体表现为手机形态仅为一种示例,在实际应用中,数据生成装置30还可以表现为便携式电脑、导航仪或其他便携式终端设备的形态等。
接着以车载导航为例,介绍本申请实施例提供的数据生成方法的三种实现方式。针对车载导航场景下的三种实现方式,与步行导航场景中的三种实现方式类似,对于车载导航场景下数据生成装置、音频播放装置和导航类服务器的具体实现方式均可以参照上述实施例,此处不做赘述。区别在于,相对于步行导航,当数据生成装置和音频播放装置集成于同一终端设备中时,数据生成装置的具体展示形态除了图1对应实施例中的举例外,还可以为汽车,音频播放装置指的是前述多种数据生成装置中的声电换能模块。当数据生成装置和音频播放装置分别为两个独立的设备时,数据生成装置的具体展示形态除了图1对应实施例中的举例外,还可以为汽车。在终端设备或数据生成装置的具体展示形态为汽车的情况下,姿态具体可以为汽车车头的姿态、汽车车轮的姿态或其他元件的姿态等。
为进一步理解本方案,请参阅图4,请参阅图4,图4为本申请实施例提供的数据生成方法的一种场景示意图。图4中示出了车载导航场景中,人在车内戴着耳机驾驶,以数据生成装置为手机,音频播放装置为耳机,数据生成装置与音频播放装置为互相独立的设备为例。图4中数据生成装置40、音频播放装置41以及导航类服务器42的具体实现方式与图2对应实施例中数据生成装置20、音频播放装置21以及导航类服务器22类似,此处不做赘述,图4中的举例仅为方便理解本方案,不用于限定本方案。
接着以利用网络侧或终端侧存储的地图数据介绍周围空间对象为例,介绍本申请实施例提供的数据生成方法的四种实现方式。
第一种,请参阅图5,图5为本申请实施例提供的数据生成方法的一种实现方式的示 意图,所述数据生成方法由图5中的数据生成装置50执行。图5中示出了数据生成装置50的耳机形态,应理解,图5仅为一个示例,下面以数据生成装置50具体表现为耳机为例,对本应用场景中的具体实现过程进行详细描述。
耳机获取与耳机的空间位置对应的绝对坐标(也可以称为经纬度坐标),并将与耳机的空间位置对应的绝对坐标发送给数据服务器51,耳机接收数据服务器51发送的空间对象信息,空间对象信息中包括耳机周围空间对象的对象内容和耳机周围空间对象的空间位置,前述耳机周围空间对象的空间位置可以为空间对象的绝对坐标,也可以为空间对象相对于耳机的相对坐标。
耳机在接收到空间对象信息之后,根据耳机周围空间对象的空间位置生成方位信息。可选地,耳机还可以获取姿态,根据姿态和空间对象信息生成空间对象相对于数据生成装置50的方位信息。耳机根据方位信息和耳机周围空间对象的对象内容生成内容信息。
耳机在获取到方位信息和内容信息之后,需要生成空间声数据,具体实现方式与图1对应实施例中数据生成装置10执行前述步骤方式类似,此处不做赘述。可选地,在生成空间声数据之前,耳机还可以判断方位信息所指示的空间位置是否位于预设空间位置区域内,或者,判断方位信息所指示的空间位置是否位于预设空间方向上,或者,判断内容信息所指示的对象内容是否为预设对象内容,在前述任一项或多项的判断结果为是的情况下,则确定空间对象信息满足预设条件。
在一种情况下,在空间对象信息满足预设条件的情况下,耳机根据方位信息和内容信息,生成空间声数据。在空间对象信息不满足预设条件的情况下,耳机不再根据空间对象信息生成空间声数据,进而可以处理下一个空间对象信息。耳机不播放不满足预设条件的空间对象信息,也即对空间对象信息做了初步筛选,以降低对用户的干扰,提高本方案的用户粘度。
在另一种情况下,在空间对象信息满足预设条件的情况下,耳机生成音量增大指示信息,并根据方位信息、内容信息和音量增大指示信息,生成空间声数据。在空间对象信息不满足预设条件的情况下,耳机不再根据空间对象信息生成空间声数据,进而可以处理下一个空间对象信息。
在另一种情况下,在空间对象信息满足预设条件的情况下,耳机生成音量增大指示信息,并根据方位信息、内容信息和音量增大指示信息,生成空间声数据。在空间对象信息不满足预设条件的情况下,耳机根据方位信息和内容信息,生成空间声数据。
在另一种情况下,在空间对象信息满足预设条件的情况下,耳机生成音量增大指示信息,并根据方位信息、内容信息和音量增大指示信息,生成空间声数据。在空间对象信息不满足预设条件的情况下,耳机生成音量减小指示信息,并根据方位信息、内容信息和音量减小指示信息,生成空间声数据。
可选地,耳机在生成空间声数据之后,根据空间声数据生成并播放空间声,具体实现方式可以参阅图1对应实施例中数据生成装置10执行前述步骤的过程,此处不做赘述。
需要说明的是,图5中数据生成装置50具体表现为耳机形态仅为一种示例,在实际应用中,数据生成装置50还可以表现为手机、便携式电脑、导航仪或者汽车等形态。
第二种,本实现方式中以数据生成装置为手机,音频播放装置为耳机,手机和耳机为 互相独立的设备为例进行说明。
手机向数据服务器发送与手机的空间位置对应的绝对坐标,接收数据服务器发送的空间对象信息,根据空间对象信息生成内容信息和方位信息,具体实现方式可参阅图5对应实施例中数据生成装置50执行前述步骤的具体实现方式。可选地,手机还可以获取耳机的姿态,根据耳机的姿态和空间对象信息生成方位信息。具体的,耳机测量获得耳机的姿态,并将耳机的姿态发送给手机。
手机在获取到内容信息和方位信息之后,生成空间声数据,本实施例中空间声数据包括方位信息和内容信息,具体实现方式与图5对应实施例中数据生成装置50执行前述步骤的具体实现方式类似,此处不做赘述。
可选地,手机在生成空间声数据之后,将空间声数据发送给耳机,由耳机根据空间声数据执行渲染操作,得到至少两个单声信号,并将至少两个单声信号传输至声电换能模块,以通过声电换能模块播放空间声。前述步骤的具体实现方式可以参考图2对应实施例中数据生成装置20和音频播放装置21执行前述步骤的具体实现方式,此次不做赘述。
第三种,本实现方式中以数据生成装置为手机,音频播放装置为耳机,手机和耳机为互相独立的设备为例进行说明。
手机向数据服务器发送与手机的空间位置对应的绝对坐标,接收数据服务器发送的空间对象信息,根据空间对象信息生成内容信息和方位信息。前述步骤的具体实现方式可以参考对利用网络侧或终端侧存储的地图数据介绍周围空间对象的第二种实现方式中的描述,此处不做赘述。
根据方位信息和内容信息执行渲染操作,生成空间声数据,本实施例中的空间声数据指的是根据方位信息和内容信息生成的至少两个单声信号,将至少两个单声信号发送给耳机,由耳机将至少两个单声信号传输至声电换能模块,以播放空间声。前述步骤的具体实现方式可以参考图3对应实施例中数据生成装置30和音频播放装置31执行前述步骤的具体实现方式,此次不做赘述。
需要说明的是,在利用网络侧或终端侧存储的地图数据介绍周围空间对象场景的第二种和第三种实现方式中,数据生成装置还可以表现为手机、便携式电脑或者导航仪等形态,配置有传感器的终端侧电子设备还可以表现为耳机、便携式电脑、导航仪或智能家电等其他终端侧的设备等。
第四种,本实现方式与利用网络侧或终端侧存储的地图数据介绍周围空间对象的第一种实现方式类似,区别在于将与利用网络侧或终端侧存储的地图数据介绍周围空间对象的第一种实现方式中的数据服务器51替换为终端侧的电子设备,前述终端侧的电子设备与集成有数据生成装置和音频播放装置的终端设备为不同的设备。本实现方式中,数据生成装置和音频播放装置的具体执行步骤与图5对应实施例中数据生成装置50和音频播放装置的具体执行步骤类似,终端侧的电子设备的具体执行步骤与图5对应实施例中数据服务器51的具体执行步骤类似,此处不进行赘述。
接着以利用传感器感测并介绍周围空间对象为例,介绍本申请实施例提供的数据生成方法的六种实现方式。
第一种,请参阅图6,图6为本申请实施例提供的数据生成方法的一种实现方式的示 意图,所述数据生成方法由图6中的数据生成装置60执行。图6中示出了数据生成装置60的耳机形态,应理解,图6仅为一个示例,下面以数据生成装置60具体表现为耳机,耳机中配置有光敏传感器为例,对本应用场景中的具体实现过程进行详细描述。
耳机通过光敏传感器采集空间对象信息。具体的,耳机上部署至少两个光敏传感器,耳机利用光敏传感器采集的数据对光源(也即图6中耳机周围的空间对象)的空间位置进行定位,以生成光源相对于耳机的相对坐标,并根据光敏传感器采集的数据确定光源的类型。为进一步理解本方案,作为另一示例,在耳机上也可以部署至少两个图像传感器,耳机采用双目视觉算法对耳机周围的空间对象进行定位,以生成耳机周围的空间对象相对于耳机的相对坐标,在通过图像传感器获取到耳机周围的空间对象的图像以后,可以对空间对象的图像进行识别,以获取空间对象的对象内容,此处举例仅为证明本方案的可实现性,不用于限定本方案。
耳机根据耳机周围的空间对象相对于耳机的相对坐标生成方位信息。可选地,耳机还可以获取耳机的姿态,根据耳机的姿态和空间对象信息生成空间对象相对于耳机的方位信息。耳机根据方位信息和耳机周围的空间对象的类型生成内容信息。耳机在获取到方位信息和内容信息之后,生成空间声数据,根据空间声数据播放空间声,耳机执行前述步骤的具体实现方式可以参考图5对应实施例中数据生成装置50执行前述步骤方式的描述,此处不做赘述。
需要说明的是,图6中数据生成装置60具体表现为耳机形态仅为一种示例,在实际应用中,数据生成装置60还可以表现为手机、便携式电脑、导航仪或者汽车等形态。
第二种,请参阅图7,图7为本申请实施例提供的数据生成方法的一种实现方式的示意图,下面以数据生成装置70具体表现为手机,音频播放装置71具体表现为耳机,手机中配置有声音传感器为例,对本应用场景中的具体实现过程进行详细描述,应理解,图7仅为一个示例。
手机(也即数据生成装置70的一个示例)通过声音传感器采集与耳机周围的空间对象对应的空间对象信息。具体的,手机上部署至少两个声音传感器,可以根据传感器采集到的数据,利用时延估计定位法对图7中的空间位置进行定位,得到手机周围的空间对象相对于手机的相对坐标,在本应用场景中,前述相对坐标中可以存在高度信息。并根据传感器采集到的数据确定空间对象的类型,例如蝙蝠、鸟、猫咪等,不同类型的空间对象发出的声音的频率不同。
手机根据手机周围的空间对象相对于手机的相对坐标生成方位信息。可选地,手机还可以获取耳机的姿态,根据耳机的姿态和空间对象信息生成空间对象相对于手机的方位信息。手机根据方位信息和手机周围的空间对象的对象内容生成内容信息。手机在获取到方位信息和内容信息之后,生成空间声数据,本实现方式中空间声数据包括方位信息和内容信息,手机执行前述步骤的具体实现方式可以参考图2对应实施例中数据生成装置20和音频播放装置21执行前述步骤方式的描述,此处不做赘述。
可选地,手机在生成空间声数据之后,将空间声数据发送给耳机(也即音频播放装置71的一个示例),由耳机根据空间声数据,通过声电换能模块播放空间声。耳机根据空间声数据播放空间声的具体实现方式可以参考图2对应实施例中数据生成装置20和音频播放 装置21的具体实现方式,此次不做赘述。
第三种,本实现方式中以集成有数据生成装置和传感器的电子设备为手机,音频播放装置为耳机为例进行说明。
手机通过传感器采集与手机周围的空间对象对应的空间对象信息,根据空间对象信息生成内容信息和方位信息。前述步骤的具体实现方式与图7对应实施例中数据生成装置70的具体实现方式也类似,此次不做赘述。
可选地,数据生成装置根据内容信息和方位信息,执行渲染操作,得到空间声数据,本实施例中空间声数据包括至少两个单声信号,将至少两个单声信号发送给音频播放装置,由音频播放装置将至少两个单声信号传输至声电换能模块,以播放空间声。前述步骤的具体实现方式可以参考图3对应实施例中数据生成装置30和音频播放装置31的具体实现方式,此次不做赘述。
需要说明的是,在利用传感器感测并介绍周围空间对象场景的第二种和第三种实现方式中,数据生成装置具体表现为手机形态仅为一种示例,在实际应用中,数据生成装置还可以表现为便携式电脑、导航仪或者汽车等形态。
第四种,本实现方式中以集成有数据生成装置和音频播放装置的终端设备为耳机,配置有传感器的终端侧电子设备为汽车为例进行说明。在本实现方式中,数据生成装置可以视为是耳机这个独立的设备,也可以视为汽车和位于汽车内的双声道耳机共同构成的分立装置。
耳机通过汽车上的传感器获取传感器周围的空间对象信息,其中包括空间对象相对于传感器的相对坐标和传感器周围的空间对象的对象内容。具体的,在一种情况下,汽车通过传感器采集与周围空间对象对应的数据,耳机接收汽车发送的传感器采集的数据,并根据传感器采集的数据生成空间对象信息。在另一种情况下,汽车通过传感器采集与周围空间对象对应的数据,并根据传感器采集的数据生成空间对象信息,耳机接收汽车发送的空间对象信息。
耳机根据传感器周围的空间对象相对于传感器的相对坐标生成方位信息。可选地,耳机还可以获取耳机的姿态,根据耳机的姿态和空间对象信息生成空间对象相对于耳机的方位信息。耳机根据方位信息和耳机周围的空间对象的类型生成内容信息。耳机在生成内容信息和方位信息之后,生成空间声数据,进而播放空间声。耳机执行前述步骤的具体实现方式与图6对应实施例中数据生成装置60的具体实现方式类似,此次不做赘述。
需要说明的是,上述耳机还可以被替代为手机、便携式电脑或者导航仪等形态,配置有传感器的终端侧电子设备还可以表现为耳机、便携式电脑、导航仪或智能家电等其他终端侧的设备等。
第五种,本实现方式中以数据生成装置为手机,音频播放装置为耳机,配置有传感器的终端侧电子设备为汽车,手机和耳机为互相独立的设备为例进行说明。
手机通过汽车上的传感器获取传感器周围的空间对象信息,根据空间对象信息生成方位信息和内容信息,进而生成空间声数据,本实施例中空间声数据包括方位信息和内容信息,手机执行前述步骤的具体实现方式与利用传感器感测并介绍周围空间对象场景中第四种实现方式类似,此次不做赘述。
手机在生成空间声数据之后,可以将空间声数据发送给耳机,由耳机根据空间声数据执行渲染操作,以生成至少两个单声信号,将至少两个单声信号传输至声电换能模块,通过声电换能模块播放空间声。前述步骤的具体实现方式可以参考图2对应实施例中数据生成装置20和音频播放装置21执行前述步骤的具体实现方式,此次不做赘述。
第六种,本实现方式中以数据生成装置为手机,音频播放装置为耳机,配置有传感器的终端侧电子设备为汽车,手机和耳机为互相独立的设备为例进行说明。
手机通过汽车上的传感器获取传感器周围的空间对象信息,根据空间对象信息生成方位信息和内容信息,手机执行前述步骤的具体实现方式与利用传感器感测并介绍周围空间对象场景中第四种实现方式类似,此次不做赘述。
手机在生成方位信息和内容信息之后,执行渲染操作,得到空间声数据,本实施例中空间声数据包括至少两个单声信号,将至少两个单声信号发送给耳机,由耳机将至少两个单声信号传输至声电换能模块,以播放空间声。前述步骤的具体实现方式可以参考图3对应实施例中数据生成装置30和耳机31的具体实现方式,此次不做赘述。
需要说明的是,在利用传感器感测并介绍周围空间对象场景的第五种和第六种实现方式中,数据生成装置可以视为是手机这个独立的设备,也可以视为汽车和位于汽车内的手机共同构成的分立装置;此外,手机还可以被替换为便携式电脑或者导航仪等形态,配置有传感器的终端侧电子设备还可以表现为耳机、便携式电脑、导航仪或智能家电等其他终端侧的设备等。
本发明实施例提供了一种数据生成方法,前述方法由数据生成装置(包括但不限于前述图1至图7对应实施例中的数据生成装置)执行,数据生成装置在接收到空间对象信息之后,可以不对接收到的空间对象信息进行筛选,直接根据空间对象信息生成空间对象数据,也可以为对接收到的空间对象信息进行筛选,仅根据符合预设条件的空间对象信息生成空间对象数据。由于前述两种实现方式中的具体实现流程有所不同,以下分别进行介绍。
(1)不对空间对象信息进行筛选
本申请实施例中,参见图8,图8为本申请实施例提供的数据生成方法的一种流程示意图,本申请实施例提供的数据生成方法可以包括:
步骤801,数据生成装置获取空间对象信息。
本申请实施例中,数据生成装置获取空间对象的方式可以包括:数据生成装置接收空间对象信息,或者,数据生成装置通过传感器采集空间对象信息。数据生成装置可以根据接收到的空间对象信息生成空间声数据,也可以根据通过传感器采集到的空间对象信息生成空间声数据,也即在多种应用场景中都可以适用本方案提供的数据呈现方式,扩展了本方案的应用场景,提高了本方案的实现灵活性。
其中,空间对象信息指的是对空间对象的描述信息,用于获取空间对象相对于所述数据生成装置的方位信息,其中至少包括对空间对象的方位的描述信息,空间对象指的是位于三维立体空间中的对象。空间对象信息例如可以包括从数据生成装置到导航目的地的文本形式的导航数据,也可以包括从数据生成装置到导航目的地的音频流形式的导航数据,也可以包括数据生成装置周围的空间对象的对象内容和绝对坐标,还可以包括数据生成装置周围的空间对象的对象内容和相对坐标。本申请实施例中所指的接收方式包括但不限于 通过蜂窝通信,无线局域网(Wireless Fidelity,WIFI),全球互通微波访问(Worldwide Interoperability for Microwave Access,Wimax),蓝牙通信技术(Bluetooth),紫蜂通信技术(ZigBee),光通信,卫星通信,红外线通信,传输线通信,硬件接口或者硬件电路板上的走线接收,或者从软件模块获取信息,或者从存储器件读取信息。传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。通过前述方式,提供了传感器的多种具体实现方式,提高了本方案的实现灵活性。
在步行导航场景的三种实现方式中以及在车载导航场景的三种实现方式中,数据生成装置接收空间对象信息可以包括:数据生成装置中设置有导航类应用程序,获取导航起始点和导航目的地,并通过前述导航类应用程序将导航起始点和导航目的地发送给导航类服务器,接收导航类服务器发送的文本形式的空间对象信息,也即数据生成装置通过导航类应用程序接收接口数据,文本形式的空间对象信息中携带有从导航起始点到导航目的地的导航数据。
其中,数据生成装置向导航类服务器发送的可以为导航起始点和导航目的地的名称,也可以为导航起始点和导航目的地的经纬度坐标,或其他用于指示导航起始点和导航目的地的空间位置的信息。与空间对象信息对应的空间对象可以为具有空间位置的导航目的地、交通路标、监控或其他与导航相关的空间对象等。从导航起始点到导航目的地的导航数据中可以分为至少一个路段,文本形式的空间对象信息中可以包括对至少一个路段的描述信息,对每个路段的描述信息中包括多个字段以及每个字段的字段值,前述多个字段中可以包括内容字段的字段值,作为示例,例如内容字段可以为路段描述(instruction)字段。前述多个字段中还可以包括位置字段,作为示例,例如位置字段具体可以为距离(distance)字段和转向点(turn)字段,位置字段也可以包括其他字段,或者,前述多个字段中可以只包括内容字段,不包括位置字段等,此处不做限定。此处以通过表格的形式展示空间对象信息为例,请参阅如下表1。
表1
距离(distance)  100米
转向点(turn)  右转
路段描述(instruction)  100米后右转
请参阅如上表1,表1中示出的为空间对象信息中一段路段的描述信息,表1中的示例仅为方便理解本方案,不用于限定本方案。
具体的,针对导航起始点的获取方式,在一种情况下,数据生成装置中还可以配置有定位系统,例如全球定位系统(global positioning system,GPS),数据生成装置通过定位系统获取到数据生成装置的空间位置,将前述空间位置并确定为导航起始点。在另一种情况下,数据生成装置接收用户输入的导航起始点。更具体的,在数据生成装置为具有投影功能的耳机、手机、便携式电脑、导航仪或者汽车等具有展示界面的设备的情况下,则数据生成装置可以通过导航类应用程序的展示界面,接收用户输入的导航起始点。在数据生成装置为不具有投影功能的耳机或其他不具有展示界面的设备的情况下,数据生成装置中还可以配置有麦克风,则数据生成装置通过麦克风接收用户输入的语音形式的导航起始点。可选地,在数据生成装置为具有展示界面的情况下,数据生成装置中也可以配置有麦克风, 数据生成装置通过麦克风接收用户输入的语音形式的导航起始点。针对导航目的地的获取方式,数据生成装置接收用户输入的导航目的地,具体实现方式可以参考数据生成装置接收用户输入的导航起始点的方式,此处不做赘述。
可选地,数据生成装置在通过导航类应用程序接收到导航类服务器发送的文本形式的空间对象信息之后,数据生成装置中的导航类应用程序可以将文本形式的空间对象信息转换为音频流形式的空间对象信息,数据生成装置从导航类应用程序中获取音频流形式的空间对象信息,也即数据生成装置通过导航类应用程序接收音频流形式的空间对象信息(也可以称为音频流数据),作为示例,例如音频流数据可以为脉冲编码调制(pulse code modulation,PCM)音频流数据,还可以为其他格式的音频流数据。具体的,数据生成装置的操作系统可以通过函数AudioPolicyManagerBase::getOutput(也即音频策略实现层的一种函数)获取到导航类应用程序输出的音频流形式的空间对象信息。
在利用网络侧或终端侧存储的地图数据介绍周围空间对象场景的前三种实现方式中,数据生成装置接收空间对象信息可以包括:数据生成装置获取与数据生成装置的空间位置对应的第一坐标,将与数据生成装置的空间位置对应的第一坐标发送给数据服务器,接收数据服务器发送的空间对象信息,前述空间对象信息中包括数据生成装置周围的空间对象的对象内容和第二坐标。其中,第一坐标可以为数据生成装置的经纬度坐标(也可以称为绝对坐标),第二坐标可以为与空间对象的空间位置对应的绝对坐标,也可以为空间对象的空间位置与数据生成装置的空间位置之间的相对坐标。在第二坐标为空间对象的绝对坐标的情况下,前述空间对象信息指的是与空间对象对应的地图数据,也即数据生成装置接收网络侧存储的地图数据。空间对象信息中包括的空间对象可以为图5中示出的图书馆、披萨店、施工点或其他位于三维立体空间中的实体对象等。
具体的,数据生成装置中配置有定位系统,数据生成装置可以通过定位系统获取与数据生成装置的空间位置对应的第一坐标,将与数据生成装置的空间位置对应的第一坐标发送给数据服务器。数据服务器中可以预先存储有数据生成装置周围的地图数据,前述地图数据中包括数据生成装置周围空间对象的对象内容和数据生成装置周围空间对象的空间位置对应的绝对坐标,数据服务器在接收到与数据生成装置的空间位置对应的第一坐标之后,获取第一坐标周围的空间对象的对象内容和第二坐标,生成空间对象信息,并将包括第一坐标周围的空间对象的对象内容和第二坐标的空间对象信息发送给数据生成装置,对应的,数据生成装置接收数据服务器发送的前述空间对象信息。
更具体的,数据服务器在接收到与数据生成装置的空间位置对应的第一坐标之后,获取第一坐标周围的空间对象的对象内容和绝对坐标,生成空间对象信息,空间对象信息中携带的第二坐标为绝对坐标。可选地,数据服务器还可以根据接收到的第一坐标和获取到的第一坐标周围的空间对象的绝对坐标之后,以第一坐标为原点,根据第一坐标和第一坐标周围的空间对象的绝对坐标生成第一坐标周围的空间对象的相对坐标,并将前述相对坐标确定为第二坐标,基于第一坐标周围的空间对象的对象内容和第二坐标生成空间对象信息,空间对象信息中携带的第二坐标为相对坐标。
在利用网络侧或终端侧存储的地图数据介绍周围空间对象场景的第四种实现方式中,数据生成装置接收空间对象信息可以包括:数据生成装置获取与数据生成装置的空间位置 对应的第一坐标,将与数据生成装置的空间位置对应的第一坐标发送给终端侧的电子设备,接收终端侧的电子设备发送的空间对象信息,前述空间对象信息中包括数据生成装置周围的空间对象的对象内容和第二坐标。本实施例中,数据生成装置的具体执行步骤可以参考上述实施例中,对利用网络侧或终端侧存储的地图数据介绍周围空间对象场景的前三种实现方式中的具体实现方式的描述,区别仅在于将上述实施例中的数据服务器替换为终端侧的电子设备,此处不进行赘述。
在利用传感器感测并介绍周围空间对象场景中的前三种实现方式中,数据生成装置通过传感器采集空间对象信息可以包括:数据生成装置通过内部接口向传感器发送信号采集指令,以指示传感器采集数据,数据生成装置接收传感器采集的数据,并根据传感器采集的数据生成空间对象信息。具体的,数据生成装置根据采集到的数据对数据生成装置周围的空间对象进行定位,以生成数据生成装置周围的空间对象的相对坐标,并根据传感器采集到的数据确定数据生成装置周围的空间对象的对象内容,具体实现方式可以参与上述图6和图7对应实施例中的描述。
在利用传感器感测并介绍周围空间对象场景中的后三种实现方式中,数据生成装置通过传感器采集空间对象信息可以包括:在一种情况下,配置有传感器的终端侧电子设备通过传感器采集与周围空间对象对应的数据,数据生成装置接收配置有传感器的终端侧电子设备发送的传感器采集的数据,并根据传感器采集的数据生成空间对象信息。在另一种情况下,配置有传感器的终端侧电子设备通过传感器采集与周围空间对象对应的数据,并根据传感器采集的数据生成空间对象信息,数据生成装置接收配置有传感器的终端侧电子设备发送的空间对象信息。
具体的,在一种情况下,数据生成装置向配置有传感器的终端侧电子设备发送传感器数据获取请求,配置有传感器的终端侧电子设备响应于传感器数据获取请求,通过传感器采集与周围空间对象对应的数据,进而向数据生成装置发送传感器采集的数据或空间对象信息。在另一种情况下,配置有传感器的终端侧电子设备可以主动向数据生成装置发送传感器采集的数据或空间对象信息,更具体的,前述发送方式可以为实时发送、每隔预设时长发送、在固定时间点发送或其他发送方式等等,此次不做限定。
针对空间对象信息的接收时间。在步行导航场景和车载导航场景中,当用户通过数据生成装置中的导航类应用程序执行导航功能时,数据生成装置接收空间对象信息。在利用网络侧或终端侧存储的地图数据介绍周围空间对象场景和利用传感器感测并介绍周围空间对象场景中,具体的,在一种实现方式中,数据生成装置可以一直处于空间对象信息的接收状态,从而将接收到的空间对象信息及时转换为空间声数据,以及时播放空间声。可选地,数据生成装置中可以设置有用于接收用户的开启操作和关闭操作的开关按钮,当用户通过开关按钮输入开启操作时,数据生成装置处于空间对象信息的接收状态,当用户通过开关按钮输入关闭操作时,数据生成装置关闭空间对象信息的接收功能,进而不再接收空间对象信息。具体地,在一种情况下,数据生成装置为可以向用户提供展示界面的电子设备,则可以通过展示界面向用户展示开关控件,以通过前述开关控件接收用户输入的开启操作或关闭操作。在另一种情况下,数据生成装置为不能够提供展示界面的电子设备时,可以在数据生成装置的外部设置有开关按键,从而通过开关按键输入开启操作和关闭操作。
步骤802,数据生成装置根据空间对象信息生成内容信息和方位信息。
本申请实施例中,数据生成装置可以根据空间对象信息生成内容信息和方位信息。其中,内容信息用于确定空间声的播放内容,内容信息中包括方位信息,在空间对象为导航目的地时,内容信息用于描述从数据生成装置到导航目的地的路径规划,在空间对象为空间内发生的事件、空间内存在的人、动物或者物体时,内容信息用于描述空间对象相对于数据生成装置的方向和空间对象的对象内容。作为示例,例如内容信息可以为“向前走100米后右转”或“左前方50米处有咖啡店”等,此处举例仅为方便理解本方案,不用于限定本方案。方位信息可以包括位置信息和方向信息,用于指示空间对象相对于终端设备的方位。方位信息中可以携带有高度信息,也可以不携带高度信息,具体可以表现为笛卡尔坐标,也可以为其他格式的方位信息。其中,终端设备指的可以为数据生成装置,也可以为音频播放装置,还可以为配置有传感器的终端侧电子设备。进一步地,前述配置有传感器的终端侧电子设备可以与数据生成装置为同一设备,也可以与音频播放装置为同一设备,还可以为独立于数据生成装置和音频播放装置的独立设备。
在步行导航场景和车载导航场景中,空间对象信息可以包括文本形式的导航数据,文本形式的导航数据中包括内容字段的字段值。则步骤802可以包括:数据生成装置根据空间对象信息中包括的内容字段的字段值,生成方位信息和内容信息。可选地,若文本形式的导航数据中包括内容字段的字段值和位置字段的字段值,则步骤802可以包括:数据生成装置根据内容字段的字段值生成内容信息,根据位置字段的字段值生成方位信息。作为示例,结合上述表1进行举例,方位信息为空间对象的相对坐标,此处以方位信息中可以携带高度信息为例,则需要通过x轴、y轴和z轴来描述,基于表1中的turn字段获取到空间对象在终端设备的右方,但由于没有右方的具体距离值,则可以设置为默认值,例如默认为10米,x轴数值为10,基于distance字段获取到空间对象在终端设备的前方100米,y轴数值为100,没有高度信息,则可以将z轴数值设置为0,从而得到空间对象的方位信息(10,100,0);基于instruction字段获取内容信息为“100米后右转”,应当理解,x轴的取值也可以为-10,或其他数值等,前述举例仅为方便理解本方案,不用于限定本方案。
可选地,空间对象信息包括音频流形式的导航数据,音频流形式的导航数据中包括内容信息,内容信息中包括方位信息,则步骤802可以包括:数据生成装置对音频流形式的导航数据进行语音识别,得到方位信息和内容信息。结合上述表1进行举例,方位信息为空间对象的相对坐标,此处以方位信息中可以携带高度信息为例,则需要通过x轴、y轴和z轴来描述,音频流形式的导航数据为“向前走100米后右转”,对音频流形式的导航数据进行语音识别后提取到关键词“右”,由于导航数据中没有右方的具体距离值,则可以设置为默认值,例如默认为10米,x轴数值为10;提取到关键词“向前”和“100米”,则前向方位距离为100米,也就是y轴数值为100;没有高度信息,则可以将z轴数值设置为0,从而得到空间对象的方位信息(10,100,0);对音频流形式的导航数据进行语音识别得到内容信息为“向前走100米后右转”应当理解,前述举例仅为方便理解本方案,不用于限定本方案。
在利用网络侧或终端侧存储的地图数据场景中,在一种情况下,空间对象信息中包括数据处理装置周围的空间对象的对象内容和绝对坐标,则步骤802可以包括:数据生成装 置根据数据生成装置的空间位置以及空间对象信息生成方位信息。具体的,数据生成装置以与数据生成装置的空间位置对应的第一坐标(也即数据生成装置的空间位置的绝对坐标)为坐标原点,利用数据生成装置中的陀螺仪或定位系统等确定用户的前进方向,以用户的前进方向为y轴的正方向,建立坐标系,并根据空间对象信息中包括的数据生成装置的空间位置周围空间对象的绝对坐标(也即第二坐标的一种),确定空间对象在坐标系中的位置,以生成方位信息,并根据方位信息和空间对象信息中包括的对象内容生成内容信息,其中,内容信息包括对空间对象的方位和类型的描述。作为示例,例如空间对象的类型为书店,根据数据生成装置的空间位置的绝对坐标和书店的绝对坐标得到的方位信息为(0,50,0),则内容信息为正前方50米处有书店。在另一种情况下,空间对象信息包括空间对象的对象内容和相对坐标。则数据生成装置可以从空间对象信息中提取相对坐标,以得到方位信息。并根据方位信息和空间对象信息中包括的对象内容生成内容信息。在另一种情况下,空间对象信息中包括空间对象的对象内容和空间对象相对于数据生成装置的相对坐标,则步骤802可以包括:数据生成装置根据空间对象信息中包括的空间对象的相对坐标生成方位信息,并根据方位信息和空间对象信息中包括的对象内容生成内容信息。
在利用传感器感测并介绍周围空间对象场景中,空间对象信息中包括配置有传感器的终端侧电子设备周围的空间对象的对象内容和相对坐标,则步骤802可以包括:数据生成装置根据空间对象的相对坐标生成方位信息,并根据方位信息和空间对象信息中包括的对象内容生成内容信息。
可选地,数据生成装置根据终端设备的姿态以及空间对象信息,生成方位信息,作为示例,例如姿态可以为右转30度、左转20度、上扬15度或其他姿态。进一步地,例如:当所述数据生成装置为汽车时,所述姿态为车头的朝向;当所述数据生成装置为手机或导航仪时,所述姿态为手机或导航仪上屏幕的朝向;当所述数据生成装置为双声道耳机时,所述姿态为佩戴所述双声道耳机的用户的面部朝向;当所述数据生成装置为汽车和位于所述汽车内的手机共同构成的分立装置时,所述姿态为所述汽车的车头的朝向,或者为位于所述汽车内的手机的屏幕的朝向;当所述数据生成装置为汽车和位于所述汽车内的双声道耳机共同构成的分立装置时,所述姿态为所述汽车的车头的朝向,或者为佩戴所述双声道耳机的车内用户的面部朝向。
具体的,在步行导航场景和车载导航场景的第一种实现方式中、利用网络侧或终端侧存储的地图数据场景的第一种实现方式和第四种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第一种实现方式和第四种实现方式中。由于数据生成装置和音频处理装置配置于同一终端设备中,数据生成装置在测量获得终端设备的姿态之后,根据姿态和空间对象信息生成方位信息,以使得最终用户听到的空间声的声源位置与空间对象相对于终端设备的方位信息一致,以提高空间声的精准度。
在步行导航场景和车载导航场景的第二种和第三种实现方式中、利用网络侧或终端侧存储的地图数据场景的第二种和第三种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第二种、第三种、第五种和第六种实现方式中。由于数据生成装置和音频处理装置分别为两个独立的设备,数据生成装置接收音频播放装置发送的姿态,根据姿态和空间对象信息生成方位信息。更具体的,音频播放装置可以实时向数据生成装置发送音频 播放装置的姿态,也可以为每隔预设时长向数据生成装置发送音频播放装置的姿态,作为示例,前述预设时长可以为2秒、5秒、10秒或其他时长等。
更具体的,在上述种种场景中,在数据生成装置或音频播放装置为耳机、手机、便携式电脑或导航仪等便携式设备的情况下,数据生成装置或音频播放装置中配置有陀螺仪或其他具有姿态测量功能的元件,数据生成装置或音频播放装置利用陀螺仪或其他具有姿态测量功能的元件获取到终端设备的姿态。其中姿态指的可以为耳机、手机、便携式电脑或者导航仪的姿态。在数据生成装置或音频播放装置为汽车的情况下,数据生成装置或音频播放装置可以通过汽车中配置的陀螺仪、方向盘转向或其他元件测得。其中,汽车的姿态指的可以为车头的姿态、车轮的姿态、车身的姿态或其他姿态等。汽车可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场汽车、施工设备、电车、高尔夫球车、火车、和手推车等,本申请实施例不做特别的限定。
可选地,本发明实施例可以包括步骤803,数据生成装置判断空间对象信息是否满足预设条件,若满足,则进入步骤804;若不满足,则进入步骤805。
本申请实施例中,数据生成装置判断空间对象信息是否满足预设条件,其中,满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。进一步地,预设空间位置区域指的是相对于数据生成装置或音频处理装置的空间位置的预设空间位置区域,作为示例,例如可以为数据生成装置或音频处理装置的空间位置的正前方米内,作为另一示例,例如可以为以数据生成装置或音频处理装置的空间位置为原点的,10米为半径的一个区域内等,此处不做限定。预设空间方向可以是相对于数据生成装置或音频处理装置的空间位置的位置方向。作为示例,例如在音频播放装置为双声道耳机的情况下,预设空间方向可以为佩戴双声道耳机的用户的面部朝向,佩戴双声道耳机的用户的面部朝向可以根据双声道耳机中配置的陀螺仪、惯性传感器或其他元件测得,有用用户一般会看向想要看到的东西,将位于佩戴双声道耳机的用户的面部朝向的方向设置为预设空间方向,有利于提高确定感兴趣空间对象过程的准确度;在数据生成装置为手机或导航仪时,预设空间方向可以为手机或导航仪上移动的方向;在数据生成装置为汽车的情况下,预设空间方向可以为车头朝向,进一步地,预设空间方向可以为数据生成装置或音频处理装置的前方、后方、左方、右方或其他方向等,也可以是绝对的空间位置方向,作为示例,例如预设空间方向可以为东方、西方、南方、北方或其他方向等,此处不做限定。预设对象内容可以为用户预先输入的,作为示例,例如用户可以预先输入咖啡店、书店或其他类型的空间对象作为用户感兴趣的对象。也可以为数据生成装置自主确定的,作为示例,例如可以将施工点等危险系数较大的对象内容设置为预设对象内容,作为另一示例,例如可以将转弯点、路口等需要重点提醒的对象内容设置为预设对象内容等,应当理解,此处举例均仅为方便理解本方案,具体预设空间位置区域、预设空间方向和/或预设对象内容的具体内涵均可由本领域技术人员结合实际产品形态确定,此处不做限定。具体的,数据生成装置判断。
具体的,在步行导航场景和车载导航场景中,步骤803可以包括:数据生成装置判断内容信息所指示的对象内容是否为预设对象内容,若判断结果为是,则数据生成装置确定空间对象信息满足预设条件。在利用网络侧或终端侧存储的地图数据场景和利用传感器感 测并介绍周围空间对象场景中,步骤803包括以下一项或多项:数据生成装置判断方位信息所指示的空间位置是否位于预设空间位置区域内,或者,数据生成装置判断方位信息所指示的空间位置是否位于预设空间方向上,或者,数据生成装置判断内容信息所指示的对象内容是否为预设对象内容。在前述任一项或多项的判断结果为是的情况下,则数据生成装置确定空间对象信息满足预设条件。
可选地,本发明实施例可以包括步骤804,数据生成装置生成音量增大指示信息。
本申请实施例的一些实施例中,数据生成装置确定空间对象信息满足预设条件之后,会生成音量增大指示信息。音量增大指示信息用于指示增大与空间对象信息相对应的空间声的音量,音量增大指示信息中可以携带有需要增大的音量数值,音量增大指示信息中携带的音量数值的取值可以为正值,作为示例,例如3dB、8dB、10dB、15dB或其他数值等;音量增大指示信息中也可以不携带有需要增大的音量数值,此处均不做限定。
可选地,本发明实施例可以包括步骤805,数据生成装置生成音量减小指示信息。
本申请实施例的一些实施例中,数据生成装置确定空间对象信息不满足预设条件之后,会生成音量减小指示信息。其中,音量减小指示信息用于指示减小与空间对象信息相对应的空间声的音量,音量减小指示信息中可以携带有需要减小的音量数值,音量减小指示信息中携带的音量数值的取值可以为负值,也可以为正值,作为示例,例如-3dB、-8dB、-10dB、-15dB或其他数值等;音量减小指示信息中也可以不携带有需要减小的音量数值,此处均不做限定。
步骤806,数据生成装置生成空间声数据。
本申请实施例中,数据生成装置在获取到方位信息和内容信息之后,会生成空间声数据。其中,空间声数据用于指示生成空间声。具体的,空间声数据包括方位信息和内容信息,或者,空间声数据包括根据方位信息和内容信息生成的至少两个单声信号,至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
在空间声数据包括方位信息和内容信息的情况下,其中还可以包括音量信息等。在ETSI TS 103 223标准中,空间声数据具体表现为空间音频对象,空间音频对象中包括的各种信息具体表现为数组字段。进一步地,ETSI TS 103 223标准提供了一种“基于对象的音频沉浸式声音元数据和码流”的标准,此标准支持根据收听用户的位置坐标,以及声源的位置坐标,计算出可以体现声源与用户之间位置远近和方位的沉浸式声音元数据和码流。需要说明的是,ETSI TS 103 223标准仅为空间声数据的一种参考标准,在实际实现方式中,可以为参考其他标准,也可以为在ETSI TS 103 223标准的基础上进行改动。本申请实施例中仅以ETSI TS 103 223标准为例进行介绍。参考ETSI TS 103 223标准,空间声数据中的位置信息具体表现为位置(position)字段,内容信息具体表现为内容(contentkind)字段,音量信息具体可以表现为音量增益(gain)字段等,此处不做穷举。
本申请实施例中,步骤803至805为可选步骤,若步骤803至805均不执行,或者,若步骤803和804执行,步骤805不执行,且步骤803的执行结果为空间对象信息不满足预设条件,或者,若步骤803和805执行,步骤804不执行,且步骤803的执行结果为空间对象信息满足预设条件,则步骤806可以包括:数据生成装置根据方位信息和内容信息,生成空间声数据。具体的,在ETSI TS 103 223标准中,数据生成装置在获取到坐标形式的 方位信息和内容信息之后,可以将方位信息确定为位置字段的字段值,将内容信息确定为内容字段的字段值,将空间音频对象中其他字段取默认值,得到空间音频对象。
若步骤803至805均执行,且步骤803的执行结果为空间对象信息满足预设条件,或者,步骤803和804执行,步骤805不执行,且步骤803的执行结果为空间对象信息满足预设条件,则步骤806包括:数据生成装置根据方位信息、内容信息和音量增大指示信息,生成空间声数据。具体的,在ETSI TS 103 223标准中,数据生成装置在获取到坐标形式的方位信息和内容信息之后,可以将方位信息确定为位置字段的字段值,将内容信息确定为内容字段的字段值,若音量增大指示信息中携带有需要增大的音量数值,则可以将需要增大的音量数值确定为音量增益字段的字段值,若音量增大指示信息中未携带需要增大的音量数值,则可以音量增益字段的字段值增大预设值,作为示例,例如音量增益字段的字段值可以为3dB、8dB、10dB、15dB或其他数值等。本申请实施例中,当空间对象信息为满足预设条件的空间对象信息时,会生成音量增大指示信息,音量增大指示信息指示增大与空间对象信息相对应的空间声的音量,也即对于预设对象内容的空间对象可以增大播放音量,以吸引用户的注意力,避免用户错过预设对象内容的空间对象,有利于提高导航过程的安全性,也可以避免用户错过感兴趣的空间对象,提高本方案的用户粘度。
若步骤803至805均执行,且步骤803的执行结果为空间对象信息不满足预设条件,或者,步骤803和805执行,步骤804不执行,且步骤803的执行结果为空间对象信息不满足预设条件,则步骤806包括:数据生成装置根据方位信息、内容信息和音量减小指示信息,生成空间声数据。具体的,在ETSI TS 103 223标准中,耳机在获取到坐标形式的方位信息和内容信息之后,可以将方位信息确定为位置字段的字段值,将内容信息确定为内容字段的字段值,若音量减小指示信息中携带有需要减小的音量数值,则可以将根据需要减小的音量数值确定音量增益字段的字段值,前述音量增益字段的字段值取值为负值。若音量减小指示信息中未携带需要减小的音量数值,则可以音量增益字段的字段值减小预设值,作为示例,例如音量增益字段的字段值可以为-3dB、-8dB、-10dB、-15dB或其他数值等。
进一步地,在上述三种情况中,若空间声数据为根据方位信息和内容信息生成的至少两个单声信号,也即在步行导航场景和车载导航场景的第三种实现方式中、利用网络侧或终端侧存储的地图数据场景的第三种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第三种和第六种实现方式中,或者,也可以在步行导航场景和车载导航场景的第一种实现方式中、利用网络侧或终端侧存储的地图数据场景的第一种实现方式和第四种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第一种实现方式和第四种实现方式中。则在生成方位信息和内容信息之后,步骤806还可以包括:数据生成装置根据内容信息和方位信息执行渲染操作,以生成至少两个单声信号,至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
具体的,对于根据空间声数据执行渲染操作的过程,所述渲染具体为通过特定的算法或者数据处理操作在音频流数据中融入空间方位信息,最终生成至少两个单声信号,所述至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。数据生成装置或音频播放装置中可以预先配置有渲染函数库,在获取到空间声数据之后,获 取与空间声数据的方位信息对应的左耳渲染函数和右耳渲染函数,获取与空间声数据的内容信息对应的音频流数据,通过左耳渲染函数对与内容信息对应的音频流数据进行渲染,得到左声道信号,并通过右耳渲染函数对与内容信息对应的音频流数据进行渲染,得到右声道信号,其中,左声道信号和右声道信号归属于两个单声信号。更具体的,若空间对象信息包括音频流形式的导航数据,则可以从空间对象信息中提取音频流形式的内容信息;若空间对象信息包括的为文本形式的导航数据,则需要将空间声数据的内容信息转换为音频流形式的内容信息。
进一步地,此处以音频播放装置为耳机为例,对根据空间声数据生成空间声的具体实现方式做详细介绍。在一种实现方式中,左耳渲染函数和右耳渲染函数均为头相关脉冲响应(head related impulse response,HRIR)函数,则需要获取与空间声数据的内容信息对应的PCM数据,并获取与空间声数据的位置信息对应的左耳HRIR函数和右耳HRIR函数,将前述PCM数据分别与左耳HRIR函数和右耳HRIR函数进行卷积处理,得到左声道信号和右声道信号,进而可以通过音频播放装置的左右声电换能模块播放左声道信号和右声道信号。在另一种实现方式中,左耳渲染函数和右耳渲染函数均为头相关变换函数(head related transfer function,HRTF),则需要获取与空间声数据的内容信息对应的PCM数据,并获取与空间声数据的位置信息对应的左耳HRTF函数和右耳HRTF函数,将前述PCM数据变换到频域,得到变换后的音频流数据,将变换后的音频流数据分别与左耳HRTF函数和右耳HRTF函数进行相乘,并将相乘后的信号变换到时域,得到左声道信号和右声道信号,进而可以通过音频播放装置的左右声电换能模块播放左声道信号和右声道信号。此处以音频播放装置为耳机的声电换能模块为例,仅为证明本方案的可实现性,当音频播放装置为其他形态时,可类推适用,此处不对空间声的生成方式进行限定。
可选地,本发明实施例可以包括步骤807,数据生成装置或音频播放装置根据空间声数据播放空间声。
本申请实施例中,数据生成装置在生成空间声数据之后,可以根据空间声数据播放空间声。其中,空间声为一种声音。空间声的声源位置与方位信息对应,空间声的播放内容为内容信息。
若空间声数据包括内容信息和方位信息,在步行导航场景和车载导航场景的第二种实现方式中、利用网络侧或终端侧存储的地图数据场景的第二种实现场景中,以及,利用传感器感测并介绍周围空间对象场景的第二种和第五种实现方式中。数据生成装置和音频播放装置分别位于不同的独立设备中,数据生成装置在生成包括内容信息和方位信息的空间声数据之后,将包括内容信息和方位信息的空间声数据发送给音频播放装置,由音频播放装置根据内容信息和方位信息,执行渲染操作,以生成至少两个单声信号,将至少两个单声信号传输至声电换能模块,通过声电换能模块播放空间声。可选地,音频播放装置在根据空间声数据生成至少两个单声信号之后,还可以实时获取音频播放装置的姿态,根据音频播放装置的姿态和空间声数据得到变换后的空间方位信,并对音频流形式的内容信息进行重渲染,并将经重渲染操作得到的至少两个单声信号传输至声电换能模块,通过声电换能模块播放空间声,所述重渲染指的是通过特定的算法或者数据处理操作在音频流数据中融入变换后的空间方位信息,最终生成至少两个单声信号。本申请实施例中,至少两个单 声信号包括的单声信号的数量与音频播放装置中包括的声电换能模块的数量一致。
或者,在步行导航场景和车载导航场景的第一种实现方式中、利用网络侧或终端侧存储的地图数据场景的第一种实现方式和第四种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第一种实现方式和第四种实现方式中。数据生成装置和音频播放装置集成于同一设备中,若空间声数据包括内容信息和方位信息,则数据生成装置需要先根据内容信息和方位信息生成至少两个单声信号,将至少两个单声信号通过内部接口传输至声电换能模块,通过声电换能模块播放空间声。其中,内部接口具体可以表现为硬件电路板上的走线。可选地,由于数据生成装置和音频播放装置集成于同一设备中,数据生成装置可以直接利用陀螺仪、汽车方向盘等姿态测量元件获取到音频播放装置的姿态,根据音频播放装置的姿态对音频流形式的内容信息进行重渲染,得到执行过重渲染操作的至少两个单声信号之后,通过内部接口传输至音频播放装置播放空间声。
若空间声数据包括根据内容信息和方位信息生成的至少两个单声信号,在步行导航场景和车载导航场景的第三种实现方式中、利用网络侧或终端侧存储的地图数据场景的第三种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第三种和第六种实现方式中。数据生成装置将至少两个单声信号发送给音频播放装置,音频播放装置将至少两个单声信号输入至声电换能模块,以播放空间声。可选地,音频播放装置可以获取音频播放装置的姿态,并将音频播放装置的姿态发送给数据生成装置,由数据生成装置根据音频播放装置的姿态对音频流形式的内容信息进行重渲染,数据生成装置将经重渲染操作得到的至少两个单声信号发送给音频播放装置,音频播放装置将经重渲染操作得到的至少两个单声信号输入至声电换能模块,以播放空间声。
或者,在步行导航场景和车载导航场景的第一种实现方式中、利用网络侧或终端侧存储的地图数据场景的第一种实现方式和第四种实现方式中,以及,利用传感器感测并介绍周围空间对象场景的第一种实现方式和第四种实现方式中。若空间声数据包括至少两个单声信号,则数据生成装置通过内部接口传输至声电换能模块,通过声电换能模块播放空间声。
可选地,在根据空间声数据执行渲染操作以播放空间声的过程中,还可以根据空间声数据的音量信息调整左声道信号和右声道信号的播放音量。具体的,若数据生成装置生成的为音量增大指示信息,则增大左声道信号和右声道信号的播放音量,若数据生成装置生成的为音量减小指示信息,则减少左声道信号和右声道信号的播放音量。
本申请实施例中,当播放导航数据时,会根据导航目的地的方位信息和内容信息生成空间声数据,前述生成的空间声数据指示播放的为空间声,且空间声所对应的声源的播放位置与导航目的地的方位信息一致,也即用户可以根据听到的空间声的声源播放位置来确定正确的前进方向,播放方式更为直观,无需再频繁打开地图来确认自己的前进方向是否正确,操作简单,提高了导航过程的效率;此外,当空间对象为其他类型的对象时,提供了一种更为直观且高效的数据呈现方式。
(2)对空间对象信息进行筛选
本申请实施例中,请参阅图9,图9为本申请实施例提供的数据生成方法的一种流程示意图,本申请实施例提供的数据生成方法可以包括:
步骤901,数据生成装置获取空间对象信息。
步骤902,数据生成装置根据空间对象信息生成内容信息和方位信息。
可选地,本发明实施例可以包括步骤903,数据生成装置判断空间对象信息是否满足预设条件,若满足,则进入步骤904;若不满足,则执行结束。
本申请实施例的一些实施例中,数据生成装置判断空间对象信息是否满足预设条件,若满足,则可以进入步骤904,若不满足,则不再根据空间对象信息生成空间声数据,进而可以重新进入步骤901,以处理下一个空间对象信息。
可选地,本发明实施例可以包括步骤904,数据生成装置生成音量增大指示信息。
本申请实施例中,数据生成装置执行步骤901至904的具体实现方式与上述图8对应的实施例中步骤801至804的具体实现方式类似,此处不做赘述。
步骤905,数据生成装置生成空间声数据。
本申请实施例的一些实施例中,步骤903和904为可选步骤,若步骤903和904均不执行,或者,若步骤903执行,步骤904不执行,且步骤903的执行结果为空间对象信息满足预设条件,则步骤905包括:数据生成装置根据方位信息和内容信息,生成空间声数据。
若步骤903和904均执行,且步骤903的执行结果为空间对象信息满足预设条件,则步骤806包括:数据生成装置根据方位信息,内容信息和音量增大指示信息,生成空间声数据。
本申请实施例中,通过上述方式,在生成空间声数据之前,会判断空间对象信息是否满足预设条件,仅在判断结果为满足预设条件的情况下,才会基于空间对象信息生成空间声数据,也即会对空间对象信息进行筛选,既避免了不满足预设条件的空间对象信息造成的计算机资源的浪费,也避免对用户的过度打扰,提高本方案的用户粘度。
可选地,本发明实施例可以包括步骤906,音频播放装置或数据生成装置根据空间声数据播放空间声。
本申请实施例中,数据生成装置执行步骤905和906的具体实现方式可参阅上述图8对应的实施例中步骤806和807的具体实现方式的描述,此处不做赘述。
在图1至图9所对应的实施例的基础上,为了更好的实施本申请实施例的上述方案,下面还提供用于实施上述方案的相关装置。具体参阅图10,图10为本申请实施例提供的数据生成装置的一种结构示意图,数据生成装置100包括:获取模块1001和生成模块1002。获取模块1001,用于获取空间对象信息,空间对象信息用于获取空间对象相对于数据生成装置的方位信息,具体实现方式可以参考图8对应实施例中的步骤801的描述;生成模块1002,用于根据空间对象信息生成内容信息和方位信息,方位信息用于指示空间对象信息指向的空间对象相对于数据生成装置的方位,内容信息用于描述空间对象,具体实现方式可以参考图8对应实施例中的步骤802的描述;生成模块1002,还用于根据方位信息和内容信息,生成空间声数据,空间声数据用于播放空间声,空间声的声源位置与方位信息对应,具体实现方式可以参考图8对应实施例中的步骤802和图9对应实施例中步骤905的描述,具体内容可参见本申请实施例前述所示的方法实施例中的叙述,此处不再赘述。
本申请实施例中,当播放导航数据时,生成模块1002会根据导航目的地的方位信息和 内容信息生成空间声数据,前述生成的空间声数据指示播放的为空间声,且空间声所对应的声源的播放位置与导航目的地的方位信息一致,也即用户可以根据听到的空间声的声源播放位置来确定正确的前进方向,播放方式更为直观,无需再频繁打开地图来确认自己的前进方向是否正确,操作简单,提高了导航过程的效率;此外,当空间对象为其他类型的对象时,提供了一种更为直观且高效的数据呈现方式。
在一种实现方式中,空间声数据包括方位信息和内容信息,或者,空间声数据包括根据方位信息和内容信息生成的至少两个单声信号,至少两个单声信号用于被与两个单声信号对应的声电换能模块同时播放以产生空间声。
在一种可能的设计中,生成模块1002,具体用于根据数据生成装置的位置或姿态中的至少一项,以及空间对象信息,生成方位信息,具体实现方式可以参考图8对应实施例中的步骤802的描述。
本申请实施例中,生成模块1002根据姿态和空间对象信息生成方位信息,以使得最终用户听到的空间声的声源位置与空间对象相对于终端设备的方位信息一致,以提高空间声的精准度。
在一种可能的设计中,获取模块1001,具体用于接收空间对象信息,具体实现方式可以参考图8对应实施例中的步骤801的描述,或者,也可以参考步行导航场景、车载导航场景和利用网络侧或终端侧存储的地图数据介绍周围空间对象场景的描述;或者,通过传感器采集空间对象信息,具体实现方式可以参考图8对应实施例中的步骤801的描述,或者,也可以参考利用传感器感测并介绍周围空间对象场景的描述。
本申请实施例中,生成模块1002可以在多种应用场景中都可以适用本方案提供的数据呈现方式,扩展了本方案的应用场景,提高了本方案的实现灵活性。
在一种可能的设计中,获取模块1001,具体用于通过以下三种方式中的至少一种接收空间对象信息:接收应用程序生成的音频流数据;或者,接收应用程序生成的接口数据;或者,接收网络侧或终端侧存储的地图数据,具体实现方式可以参考图8对应实施例中的步骤801的描述。
在一种可能的设计中,传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
在一种可能的设计中,生成模块1002,具体用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,根据空间对象信息生成内容信息和方位信息,具体实现方式可以参考图8对应实施例中的步骤903至905的描述。
本申请实施例中,生成模块1002在生成空间声数据之前,会判断空间对象信息是否满足预设条件,仅在判断结果为满足预设条件的情况下,才会基于空间对象信息生成空间声数据,也即会对空间对象信息进行筛选,既避免了不满足预设条件的空间对象信息造成的计算机资源的浪费,也避免对用户的过度打扰,提高本方案的用户粘度。
在一种可能的设计中,生成模块1002,还用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,音量增大指示信息用于指示增大与满足预设条件的空间对象信息相对应的空间声的音量,具体实现方式可以参考图8对应实施例中的步骤803至804的描述。
本申请实施例中,生成模块1002对于预设对象内容的空间对象可以增大播放音量,以吸引用户的注意力,避免用户错过预设对象内容的空间对象,有利于提高导航过程的安全性,也可以避免用户错过感兴趣的空间对象,提高本方案的用户粘度。
在一种可能的设计中,生成模块1002,还用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量减小指示信息,音量减小指示信息用于指示减小与满足预设条件的空间对象信息相对应的空间声的音量,具体实现方式可以参考图8对应实施例中的步骤803和805的描述。
在一种可能的设计中,满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。
在一种可能的设计中,生成模块1002,具体用于根据方位信息、内容信息和音频播放装置的姿态,对与内容信息对应的音频流数据执行渲染操作,以生成空间声数据,空间声数据包括根据方位信息和内容信息生成的至少两个单声信号,具体实现方式可以参考图8对应实施例中的步骤806的描述。
在一种可能的设计中,数据生成装置100包括耳机、手机、便携式电脑、导航仪或者汽车中的至少一项。
在一种可能的设计中,在音频播放装置为双声道耳机的情况下,预设空间方向为佩戴双声道耳机的用户的面部朝向,音频播放装置用于播放空间声。
本申请实施例中,数据生成装置100具体可以为图1、图5或图6对应实施例中的终端设备,或者,图2至图4以及图7中的数据生成装置等,此处不做限定。需要说明的是,数据生成装置100中各模块/单元之间的信息交互、执行过程等内容,与本申请实施例中图1至图9对应的各个方法实施例基于同一构思,具体内容可参见本申请实施例前述所示的方法实施例中的叙述,此处不再赘述。此外,数据生成装置100可以为一个装置,也可以为两个不同的装置,其中生成内容信息和方位信息的步骤由一个装置执行,根据内容信息和方位信息生成空间声数据的步骤由另一个装置执行。
接下来介绍本申请实施例提供的一种数据生成装置,请参阅图11,图11为本申请实施例提供的数据生成装置的一种结构示意图,数据生成装置1100具体可以表现为虚拟现实VR设备、手机、平板、笔记本电脑、智能穿戴设备、监控数据处理设备或者雷达数据处理设备等,此处不做限定。其中,数据生成装置1100上可以部署有图10对应实施例中所描述的数据生成装置100,用于实现图1至图9对应实施例中数据生成装置的功能。具体的,数据生成装置1100包括:接收器1101、发射器1102、处理器1103和存储器1104(其中数据生成装置1100中的处理器1103的数量可以是一个或多个,图11中以一个处理器为例),其中,处理器1103可以包括应用处理器11031和通信处理器11032。在本申请实施例的一些实施例中,接收器1101、发射器1102、处理器1103和存储器1104可通过总线或其它方式连接。
存储器1104可以包括只读存储器和随机存取存储器,并向处理器1103提供指令和数据。存储器1104的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。存储器1104存储有处理器1103所需的操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。
处理器1103控制数据生成装置的操作。具体的应用中,数据生成装置的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
上述本申请实施例揭示的方法可以应用于处理器1103中,或者由处理器1103实现。处理器1103可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1103中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1103可以是通用处理器、数字信号处理器(digital signal processing,DSP)、微处理器或微控制器,还可进一步包括专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。该处理器1103可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1104,处理器1103读取存储器1104中的信息,结合其硬件完成上述方法的步骤。
接收器1101可用于接收输入的数字或字符信息,以及产生与数据生成装置的相关设置以及功能控制有关的信号输入。发射器1102可用于通过第一接口输出数字或字符信息,发射器1102还可用于通过第一接口向磁盘组发送指令,以修改磁盘组中的数据,发射器1102还可以包括显示屏等显示设备。
本申请实施例中,在一种情况下,处理器1103,用于执行图1至图9对应实施例中的数据生成装置执行的数据生成方法。具体的,应用处理器11031,用于获取空间对象信息,空间对象信息用于获取空间对象相对于数据生成装置的方位信息,根据空间对象信息生成内容信息和方位信息,方位信息用于指示空间对象信息指向的空间对象相对于数据生成装置的方位,内容信息用于描述空间对象,根据方位信息和内容信息,生成空间声数据,空间声数据用于播放空间声,空间声的声源位置与方位信息对应。
在一种可能的设计中,应用处理器11031,具体用于根据数据生成装置的位置或姿态中的至少一项,以及空间对象信息,生成方位信息。
在一种可能的设计中,应用处理器11031,具体用于接收空间对象信息,或者,通过传感器采集空间对象信息。
在一种可能的设计中,应用处理器11031,具体用于通过以下三种方式中的至少一种接收空间对象信息:接收应用程序生成的音频流数据,或者,接收应用程序生成的接口数据,或者,接收网络侧或终端侧存储的地图数据。
在一种可能的设计中,传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
在一种可能的设计中,应用处理器11031,具体用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,根据空间对象信息生成内容信息和方位信息。
在一种可能的设计中,应用处理器11031,还用于在确定空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,音量增大指示信息用于指示增大与满足预设条件的空间对象信息相对应的空间声的音量。
在一种可能的设计中,满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。
在一种可能的设计中,在音频播放装置为双声道耳机的情况下,预设空间方向为佩戴双声道耳机的用户的面部朝向,音频播放装置用于播放空间声。
需要说明的是,应用处理器11031执行上述各个步骤的具体方式,与本申请实施例中图1至图9对应的各个方法实施例基于同一构思,其带来的技术效果与本申请实施例中图1至图9对应的各个方法实施例相同,具体内容可参见本申请实施例前述所示的方法实施例中的叙述,此处不再赘述。数据生成装置1100可以为一个装置,也可以为两个不同的装置,其中生成内容信息和方位信息的步骤由一个装置执行,根据内容信息和方位信息生成空间声数据的步骤由另一个装置执行。
本申请实施例中还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行上述图1至图9所示实施例中描述的方法中数据生成装置所执行的步骤。
本申请实施例中还提供一种计算机程序产品,当其在计算机上运行时,使得计算机执行如前述图1至图9所示实施例中描述的方法中数据生成装置所执行的步骤。
本申请实施例中还提供一种芯片系统,该芯片系统包括处理器,用于支持数据生成装置实现上述方面中所涉及的功能,例如,发送或处理上述方法中所涉及的数据和/或信息。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存数据生成装置必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包括芯片和其他分立器件。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请实施例各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请实施例的技术方案,而非对其限制;尽管参照前述实施例对本申请实施例进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请实施例各实施例技术方案的精神和范围。

Claims (24)

  1. 一种数据生成方法,其特征在于,所述方法应用于数据生成装置中,所述方法包括:
    获取空间对象信息,所述空间对象信息用于获取空间对象相对于所述数据生成装置的方位信息;
    根据所述空间对象信息生成内容信息和所述方位信息,所述方位信息用于指示所述空间对象信息指向的空间对象相对于所述数据生成装置的方位,所述内容信息用于描述所述空间对象;
    根据所述方位信息和所述内容信息,生成空间声数据,所述空间声数据用于播放空间声,所述空间声的声源位置与所述方位信息对应。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述空间对象信息生成方位信息,包括:
    根据所述数据生成装置的位置或姿态中的至少一项,以及所述空间对象信息,生成所述方位信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述获取空间对象信息,包括:
    接收所述空间对象信息;或者,通过传感器采集所述空间对象信息。
  4. 根据权利要求3所述的方法,其特征在于,所述接收所述空间对象信息,包括通过以下三种方式中的至少一种接收所述空间对象信息:
    接收应用程序生成的音频流数据;
    或者,接收应用程序生成的接口数据;
    或者,接收网络侧或终端侧存储的地图数据。
  5. 根据权利要求3所述的方法,其特征在于,所述传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
  6. 根据权利要求1至5任一项所述的方法,其特征在于,所述根据所述空间对象信息生成内容信息和方位信息,包括:
    在确定所述空间对象信息为满足预设条件的空间对象信息的情况下,根据所述空间对象信息生成内容信息和方位信息。
  7. 根据权利要求1至5任一项所述的方法,其特征在于,所述方法还包括:
    在确定所述空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,所述音量增大指示信息用于指示增大与所述满足预设条件的空间对象信息相对应的空间声的音量。
  8. 根据权利要求6或7所述的方法,其特征在于,所述满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。
  9. 根据权利要求8所述的方法,其特征在于,在音频播放装置为双声道耳机的情况下,所述预设空间方向为佩戴所述双声道耳机的用户的面部朝向,所述音频播放装置用于播放所述空间声。
  10. 一种数据生成装置,其特征在于,所述装置包括:
    获取模块,用于获取空间对象信息,所述空间对象信息用于获取空间对象相对于所述数据生成装置的方位信息;
    生成模块,用于根据所述空间对象信息生成内容信息和所述方位信息,所述方位信息用于指示所述空间对象信息指向的空间对象相对于所述数据生成装置的方位,所述内容信息用于描述所述空间对象;
    所述生成模块,还用于根据所述方位信息和所述内容信息,生成空间声数据,所述空间声数据用于播放空间声,所述空间声的声源位置与所述方位信息对应。
  11. 根据权利要求10所述的装置,其特征在于,所述生成模块,具体用于根据所述数据生成装置的位置或姿态中的至少一项,以及所述空间对象信息,生成所述方位信息。
  12. 根据权利要求10或11所述的装置,其特征在于,所述获取模块,具体用于接收所述空间对象信息;或者,通过传感器采集所述空间对象信息。
  13. 根据权利要求12所述的装置,其特征在于,所述获取模块,具体用于通过以下三种方式中的至少一种接收所述空间对象信息:
    接收应用程序生成的音频流数据;
    或者,接收应用程序生成的接口数据;
    或者,接收网络侧或终端侧存储的地图数据。
  14. 根据权利要求12所述的装置,其特征在于,所述传感器包括光敏传感器、声音传感器、图像传感器、红外传感器、热敏传感器、压力传感器或惯性传感器中的至少一项。
  15. 根据权利要求10至14任一项所述的装置,其特征在于,所述生成模块,具体用于在确定所述空间对象信息为满足预设条件的空间对象信息的情况下,根据所述空间对象信息生成内容信息和方位信息。
  16. 根据权利要求10至14任一项所述的装置,其特征在于,
    所述生成模块,还用于在确定所述空间对象信息为满足预设条件的空间对象信息的情况下,生成音量增大指示信息,所述音量增大指示信息用于指示增大与所述满足预设条件的空间对象信息相对应的空间声的音量。
  17. 根据权利要求15或16所述的装置,其特征在于,所述满足预设条件的空间对象信息为包括预设空间位置区域、预设空间方向或者预设对象内容的空间对象信息。
  18. 根据权利要求10至17任一项所述的装置,其特征在于,所述数据生成装置包括耳机、手机、便携式电脑、导航仪或者汽车中的至少一项。
  19. 一种数据生成装置,其特征在于,包括存储器和处理器,所述存储器存储计算机程序指令,所述处理器运行所述计算机程序指令以执行权利要求1至9任一项所述的操作。
  20. 根据权利要求19所述的装置,其特征在于,所述装置还包括收发器,用于接收所述空间对象信息。
  21. 根据权利要求19所述的装置,其特征在于,所述装置还包括传感器,用于采集所述空间对象信息。
  22. 根据权利要求19至21任一项所述的装置,其特征在于,所述数据生成装置包括耳机、手机、便携式电脑、导航仪或者汽车中的至少一项。
  23. 一种计算机可读存储介质,其特征在于,包括计算机指令,所述计算机指令被处理器运行时,使得所述数据生成装置执行上述权利要求1至9中任一项所述的方法。
  24. 一种计算机程序产品,其特征在于,当所述计算机程序产品在处理器上运行时,使得所述数据生成装置执行上述权利要求1至9中任一项所述的方法。