CN110139205B - Method and device for auxiliary information presentation - Google Patents

Method and device for auxiliary information presentation

Info

Publication number
CN110139205B
Authority
CN
China
Prior art keywords
information
audio
audio file
perception
playable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810136218.8A
Other languages
Chinese (zh)
Other versions
CN110139205A (en)
Inventor
张印帅
周峰
史元春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Shanghai Automotive Technologies Ltd
Original Assignee
Uisee Shanghai Automotive Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Shanghai Automotive Technologies Ltd
Priority to CN201810136218.8A
Publication of CN110139205A
Application granted
Publication of CN110139205B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems
    • H04S 1/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S 1/005 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S 3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S 3/004 For headphones

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a method and a device for auxiliary information presentation, in the technical field of the internet. The method is applied to a smart device that comprises a sensor and a 3D audio device, and comprises the following steps: acquiring perception information of the smart device through the sensor; analyzing and processing the perception information to obtain decision information and/or control information of the smart device; and controlling the 3D audio device to play corresponding 3D audio according to the perception information and/or the decision information and/or the control information, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information and/or the decision information and/or the control information. The present disclosure also provides a smart device, an electronic device, and a computer-readable storage medium.

Description

Method and device for auxiliary information presentation
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and an apparatus for auxiliary information presentation, an intelligent device, an electronic device, and a computer-readable storage medium.
Background
3D audio technology can, through the deployment of speakers and coordinated software, give sound orientation cues that human ears can distinguish, and by matching sound to movement it can provide a striking auditory experience in many scenes, such as movies, stages, and smart homes. Due to the limitations of playback equipment and the difficulty of processing 3D audio files in real time, however, existing applications of 3D audio mainly focus on static presentation and lack interactive, real-time uses.
The human-machine interaction mode of existing automatic-driving display systems is limited: it supports only single-lane information prompts and cannot display highway conditions or complex multi-lane driving conditions, so a driver whose information is incomplete may subjectively misjudge a situation and cause a driving safety accident. In addition, existing displays present information only as icons, which is not intuitive and poses potential driving safety hazards.
Existing sensors around an intelligent vehicle body can effectively support multiple tasks such as automatic driving and assisted driving, thereby preventing traffic accidents and greatly reducing the driver's burden and difficulty. However, highly intelligent equipment can leave passengers confused about the vehicle's decisions and therefore distrustful of them. Presenting vehicle and surrounding-environment information well thus becomes a core element in establishing a good human-vehicle relationship.
Sensors have become an essential element of almost every electronic product, and some complex, advanced sensors can help mobile phones, computers, and even automobiles perform tasks once beyond imagination; the currently popular unmanned driving, for example, is a highly intelligent scheme realized by sensors around the vehicle body. This, however, raises a serious problem: trust between people and electronic equipment. The links of perception, analysis, and decision-making that keep the machine operating normally are not, by themselves, enough to guarantee a good experience for an interactive system. A better way of presenting environmental information, and the machine decisions based on it, is needed.
Moreover, current interactive information presentation relies too heavily on visual information, and the limited auditory information used in traditional approaches cannot convey enough detail; a more efficient auxiliary presentation mode is therefore required.
Therefore, there is a need for a new method and apparatus for assisting in the presentation of information, a smart device, an electronic device, and a computer-readable storage medium.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a method and apparatus for auxiliary information presentation, a smart device, an electronic device, and a computer-readable storage medium, which overcome one or more of the problems due to the limitations and disadvantages of the related art, at least to some extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for auxiliary information presentation, which is applied to a smart device including a sensor and a 3D audio device, the method including: acquiring perception information of the intelligent equipment through the sensor; analyzing and processing the perception information to obtain decision information and/or control information of the intelligent equipment; and controlling the 3D audio equipment to play corresponding 3D audio according to the perception information and/or the decision information and/or the control information, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information and/or the decision information and/or the control information.
In an exemplary embodiment of the present disclosure, the perception information includes: any one or more of environment information around the smart device, position information of the smart device, and service information around the smart device.
In an exemplary embodiment of the present disclosure, the method further comprises: converting the perceptual information and/or the decision information and/or the control information into a first playable audio file.
In an exemplary embodiment of the present disclosure, the method further comprises: pre-storing a second playable audio file and a 3D audio playing format; matching the first playable audio file and/or the second playable audio file with a corresponding 3D audio playback format.
In an exemplary embodiment of the present disclosure, the method further comprises: and classifying and storing the first playable audio file and the second playable audio file.
In an exemplary embodiment of the present disclosure, the classifying the first playable audio file and the second playable audio file includes: classifying the first playable audio file and the second playable audio file into active prompt information and inactive prompt information.
In an exemplary embodiment of the present disclosure, the controlling the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information includes: calling a corresponding playable audio file according to the perception information and/or the decision information and/or the control information; judging whether the called playable audio file is active prompt information or not; and when the called playable audio file is the active prompt message, controlling the 3D audio equipment to play the called playable audio file.
In an exemplary embodiment of the present disclosure, the controlling the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information further includes: when the called playable audio file is the inactive prompt message, judging whether the input message is received; and when the input information is received, retrieving a corresponding playable audio file according to the input information and controlling the 3D audio equipment to play.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for assisting in information presentation, including: the information acquisition module is used for acquiring perception information of the intelligent equipment through the sensor; the information processing module is used for analyzing and processing the perception information to obtain decision information and/or control information of the intelligent equipment; and the audio playing module is used for controlling the 3D audio equipment of the intelligent equipment to play corresponding 3D audio according to the perception information and/or the decision information and/or the control information, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information and/or the decision information and/or the control information.
In an exemplary embodiment of the present disclosure, the perception information includes: any one or more of environment information around the smart device, position information of the smart device, and service information around the smart device.
In an exemplary embodiment of the present disclosure, the apparatus further includes: an information conversion module for converting the perception information and/or the decision information and/or the control information into a first playable audio file.
In an exemplary embodiment of the present disclosure, the apparatus further includes: the pre-storage module is used for pre-storing a second playable audio file and a 3D audio playing format; and the matching module is used for matching the first playable audio file and/or the second playable audio file with a corresponding 3D audio playing format.
In an exemplary embodiment of the present disclosure, the apparatus further includes: and the classification storage module is used for classifying and storing the first playable audio file and the second playable audio file.
In an exemplary embodiment of the present disclosure, the classification storage module includes a classification unit configured to classify the first playable audio file and the second playable audio file into active prompt information and inactive prompt information.
In an exemplary embodiment of the present disclosure, the audio playing module includes an audio file retrieving unit, a first judging unit, and a first playing unit. The audio file calling unit is used for calling a corresponding playable audio file according to the perception information and/or the decision information and/or the control information. The first judging unit is used for judging whether the called playable audio file is active prompt information or not. The first playing unit is used for controlling the 3D audio equipment to play the called playable audio file when the called playable audio file is the active prompt message.
In an exemplary embodiment of the present disclosure, the audio playing module further includes a second judging unit and a second playing unit. The second judging unit is used for judging whether input information is received or not when the called playable audio file is inactive prompting information. And the second playing unit is used for retrieving a corresponding playable audio file according to the input information and controlling the 3D audio equipment to play when the input information is received.
According to a third aspect of the embodiments of the present disclosure, there is provided a smart device, including: the environment perception system is used for collecting perception information of the intelligent equipment; the information processing system is used for analyzing and processing the perception information and generating corresponding decision information and/or control information; the audio and 3D format generating device is used for converting the perception information and/or the decision information and/or the control information into a playable audio file and matching the playable audio file with a pre-stored 3D audio playing format; and the 3D audio equipment is used for playing corresponding 3D audio according to the perception information and/or the decision information and/or the control information, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information and/or the decision information and/or the control information.
In an exemplary embodiment of the present disclosure, the smart device comprises a smart driving vehicle.
In an exemplary embodiment of the present disclosure, the environmental perception system includes any one or more of a positioning sensor, an auditory sensor, a pressure sensor, a visual sensor, a millimeter wave radar, a lidar.
In an exemplary embodiment of the present disclosure, the information processing system is further configured to: classifying the perception information and/or the decision information and/or the control information.
In an exemplary embodiment of the present disclosure, the audio and 3D format generating apparatus is further configured to: and classifying and storing the playable audio files according to the classification.
In an exemplary embodiment of the disclosure, the 3D audio device is further configured to: judging whether the currently called playable audio file belongs to active prompt information or not according to the classification; and when the called playable audio file is the active prompt message, playing the called playable audio file.
In an exemplary embodiment of the disclosure, the 3D audio device is further configured to: when the called playable audio file is the inactive prompt message, judging whether the input message is received; and when the input information is received, retrieving a corresponding playable audio file according to the input information for playing.
In an exemplary embodiment of the present disclosure, the 3D audio device includes a plurality of speakers provided at preset locations of the smart device.
In an exemplary embodiment of the disclosure, the angle between the horizontal and the line connecting each speaker to a preset height at a designated position in the smart device is within a range from a first preset angle to a second preset angle.
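The geometric constraint above can be sketched as a small check: given a speaker position and the listener's preset-height reference point, verify that the elevation of the connecting line falls between the two preset angles. The coordinate convention and function name are illustrative assumptions, not part of the disclosure.

```python
import math

def elevation_ok(speaker_xyz, listener_xyz, min_deg, max_deg):
    """Check that the line from a speaker to the preset-height point at the
    designated listener position makes an angle with the horizontal that lies
    within [min_deg, max_deg]. Coordinates are (x, y, z) with z vertical."""
    dx = speaker_xyz[0] - listener_xyz[0]
    dy = speaker_xyz[1] - listener_xyz[1]
    dz = speaker_xyz[2] - listener_xyz[2]
    horizontal = math.hypot(dx, dy)                 # length of the horizontal projection
    angle = math.degrees(math.atan2(abs(dz), horizontal))  # elevation above horizontal
    return min_deg <= angle <= max_deg

# A speaker 1 m out and 1 m up sits at 45 degrees elevation.
ok = elevation_ok((1.0, 0.0, 1.0), (0.0, 0.0, 0.0), 30.0, 60.0)
```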
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the embodiments described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method as in any one of the above embodiments.
According to the technical scheme in one embodiment of the disclosure, 3D speakers in the smart device can form 3D audio, and the position of the sound source in the space created by that audio can assist existing visual information devices in presenting the information acquired by the smart device's sensors and/or the decision information and/or control information the smart device produces.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 shows a flow chart of a method for assisting in the presentation of information in an exemplary embodiment of the disclosure.
Fig. 2 shows a flowchart of another method for assisting in the presentation of information in an exemplary embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an apparatus for assisting information presentation in an exemplary embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of a smart device in an exemplary embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of another smart device in an exemplary embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of another smart device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a flow chart of a method for assisting in the presentation of information in an exemplary embodiment of the disclosure.
As shown in fig. 1, the present embodiment provides a method for auxiliary information presentation, which may be applied to a smart device including a sensor and a 3D audio device. The method may include the following steps.
The smart device may be, for example, an intelligent driving vehicle, but the present disclosure is not limited thereto: any movable smart device, such as a smart phone, a smart robot, or an unmanned aerial vehicle, may be used in this embodiment. The following embodiments mainly take an intelligent driving vehicle as the example, but the disclosure is not limited thereto. The sensors include sensors of any kind and number arranged on the smart device, such as acceleration sensors and visual sensors. The type and number of sensors may be chosen according to the specific smart device; the following embodiments likewise illustrate sensors disposed on an intelligent driving vehicle, but the disclosure is not limited thereto.
In this embodiment, vision sensors include, but are not limited to, cameras with a detection-and-recognition function and cameras without one. The former carry software inside the module that extracts targets from the image and processes them to obtain the targets' position and movement information; a wide-angle camera with an object-recognition function is one example of a vision sensor with detection and recognition. A camera without detection and recognition only records and transmits the captured images for subsequent processing.
In an exemplary embodiment, the vision sensor may include one or more cameras. The present disclosure is not limited thereto.
In an exemplary embodiment, the camera may be a monocular camera, a binocular camera, or a combination of more cameras. However, the present disclosure is not limited thereto, and any sensor having a limited sensing angle may be applied to the present disclosure.
When the smart device is an intelligent driving vehicle, the vision sensor of this embodiment is an on-board camera. The on-board camera may be a monocular camera, a binocular camera, or a combination of more cameras; a single camera may use a conventional, wide-angle, telephoto, or zoom lens; the camera sensor may be a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) sensor; and the camera type may be multicolor (such as an RGB color camera) or monochrome (such as a black-and-white camera, an infrared camera, or an R/G/B monochrome camera). The specific form of the camera does not limit the embodiments of the present invention.
It should be noted that intelligent driving in this embodiment is a broad concept. It may include human-machine co-driving, in which the driver drives on some road sections and the car drives itself on others; the less the driver is needed, the higher the degree of automatic driving. It may also mean driving carried out entirely by the car, with no driver at all.
In step S110, sensing information of the smart device is collected by the sensor.
In an exemplary embodiment, the perception information may include: any one or more of intelligent equipment peripheral environment information, intelligent equipment position information, intelligent equipment peripheral service information and the like.
For example, real-time location information for a smart driving vehicle may be known by a positioning sensor on the smart driving vehicle.
In step S120, the perception information is analyzed to obtain decision information and/or control information of the smart device.
For example, the real-time position information collected by the positioning sensor of the intelligent driving vehicle may be processed to obtain current decision information and/or control information for adjusting the vehicle's driving direction and speed.
In step S130, the 3D audio device is controlled to play a corresponding 3D audio according to the perception information and/or the decision information and/or the control information.
In some embodiments, the 3D audio device may be controlled to play the corresponding 3D audio only according to the perception information collected by the sensor, or only according to the decision information or control information currently made by the smart device. In other embodiments, the perception information, the decision information and the control information may be integrated at the same time to control the 3D audio device to play the corresponding 3D audio.
Wherein a sound source position of the played 3D audio is related to orientation information of the perceptual information and/or the decision information and/or the control information.
In this embodiment, the orientation information in the perception information, the decision information, and/or the control information is analyzed. For example, when the control information currently issued by the intelligent driving vehicle is "turn right", the 3D audio device is controlled to simulate a sound source at the front right of the vehicle and play the corresponding 3D audio from there.
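The flow of steps S110 through S130, including the right-turn style example above, can be sketched as a minimal pipeline. Every class, function, and rule below is an illustrative assumption for the sketch, not the disclosure's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """One sensed event with the bearing (degrees, 0 = straight ahead) it came from."""
    kind: str       # e.g. "obstacle", "poi"
    bearing: float  # azimuth of the event relative to the vehicle

def analyze(perception):
    """Step S120: derive decision/control information from perception (toy rule)."""
    if perception.kind == "obstacle":
        return {"action": "change_lane", "bearing": perception.bearing}
    return {"action": "none", "bearing": perception.bearing}

def play_3d_audio(info):
    """Step S130: place the virtual sound source at the event's bearing,
    so the cue arrives from the direction the information concerns."""
    return {"source_bearing": info["bearing"], "clip": info["action"]}

# Perceive -> analyze -> play: the cue comes from the obstacle's direction.
decision = analyze(Perception(kind="obstacle", bearing=45.0))
cue = play_3d_audio(decision)
```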
In this embodiment, the orientation information may be orientation information directly extracted from the perception, decision, and control information and related to a location, a direction (up, down, left, right, front, back), or a distance; or it may be obtained by further analyzing and processing that information. For example, the current location of the intelligent driving vehicle can be fixed by its positioning sensor, and the distance and orientation angle between the vehicle and a piece of surrounding service information to be played (for example, a certain scenic spot) can then be determined. The 3D audio device is controlled to simulate a sound source at that orientation angle and play the corresponding 3D audio, and the playback volume is adjusted by distance: the closer the vehicle is to the attraction, the greater the volume; the farther away, the less.
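The orientation-angle and distance-based-volume computation just described can be sketched as follows. The coordinate frame, the linear attenuation rule, and the maximum audible range are assumptions made for the sketch; the disclosure only states that volume decreases with distance.

```python
import math

def bearing_and_volume(vehicle_xy, heading_deg, poi_xy, max_range=500.0):
    """Return (relative bearing in degrees, volume in [0, 1]) for a point of
    interest: the bearing places the virtual sound source, and closer points
    play louder, as described in the text."""
    dx = poi_xy[0] - vehicle_xy[0]
    dy = poi_xy[1] - vehicle_xy[1]
    absolute = math.degrees(math.atan2(dx, dy))            # 0 deg = straight ahead (+y)
    relative = (absolute - heading_deg + 180) % 360 - 180  # fold into (-180, 180]
    distance = math.hypot(dx, dy)
    volume = max(0.0, 1.0 - distance / max_range)          # linear attenuation (an assumption)
    return relative, volume

# A scenic spot 100 m ahead and 100 m to the right: cue from +45 degrees.
rel, vol = bearing_and_volume((0.0, 0.0), 0.0, (100.0, 100.0))
```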
In an exemplary embodiment, the method may further include: converting the perceptual information and/or the decision information and/or the control information into a first playable audio file.
In an exemplary embodiment, the method may further include: pre-storing a second playable audio file and a 3D audio playing format; matching the first playable audio file and/or the second playable audio file with a corresponding 3D audio playback format.
This embodiment again takes the intelligent driving vehicle as the example. Prompt audio files for some driving-related information are pre-stored, such as a vehicle acceleration sound or a charging prompt sound, but prompts that concern real-time driving decision and/or control information, and that must be generated as the actual situation changes, are not pre-stored. For example, when a moving object on one side (say, the left of the vehicle) comes too close to the intelligent driving vehicle, the in-vehicle 3D audio device directly amplifies the environmental sound from that direction and plays it inside the vehicle from the same direction (here, the left). For another example, when a person outside the vehicle wishes to converse with a passenger or driver inside, an audio file processed in real time may likewise be played back.
In an exemplary embodiment, the method may further include: and classifying and storing the first playable audio file and the second playable audio file.
In this embodiment, the perception information obtained by the vehicle's sensors may include information such as the environment around the vehicle body, the vehicle's position, an environment model, and location-based surrounding services. It can be classified into several cases: data required for vehicle operation; data about vehicle operation and other aspects that is actively presented to a user (e.g., a passenger and/or driver); and pre-stored or searchable data for user interaction. After classification, the different classes of data serve different purposes.
In another embodiment, an intelligent driving vehicle generally includes five kinds of sensors: a positioning sensor, a radar sensor, a visual sensor (camera or video camera), an auditory sensor, and a vehicle-networking environmental-information sensor. The system can process the five types of sensed information into driving-class information and non-driving-class information. Driving-class information is divided by a preset rule into three parts, namely perception information, decision information (for example, task planning), and control information (for example, decision execution), and is presented through the output interfaces the vehicle provides, such as the display screen of the intelligent driving vehicle, an LED lamp strip, or the 3D audio device. Non-driving-class information is matched against the user's input information and presented to the user through the vehicle's output equipment.
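The driving versus non-driving split, with driving information further divided into perception, decision, and control, can be sketched like this. The category keys and record shape are assumptions for the sketch; only the category names come from the text.

```python
def classify(info):
    """Split sensed information into driving-class and non-driving-class,
    and split driving-class items into perception / decision / control.
    Each item is assumed to be a dict with a 'class' key."""
    DRIVING = ("perception", "decision", "control")
    driving = {k: [] for k in DRIVING}
    non_driving = []
    for item in info:
        if item["class"] in DRIVING:
            driving[item["class"]].append(item)   # shown on screen / lamp strip / 3D audio
        else:
            non_driving.append(item)              # matched later against user input
    return driving, non_driving

driving, non_driving = classify([
    {"class": "perception", "payload": "obstacle ahead"},
    {"class": "control", "payload": "change lane"},
    {"class": "poi", "payload": "scenic spot"},
])
```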
In an exemplary embodiment, the classifying the first playable audio file and the second playable audio file may include: classifying the first playable audio file and the second playable audio file into active prompt information and inactive prompt information.
In an exemplary embodiment, the controlling the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information may include: calling a corresponding playable audio file according to the perception information and/or the decision information and/or the control information; judging whether the called playable audio file is active prompt information or not; and when the called playable audio file is the active prompt message, controlling the 3D audio equipment to play the called playable audio file.
In an exemplary embodiment, the controlling of the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information may further include: when the called playable audio file is inactive prompt information, judging whether input information is received; and when the input information is received, retrieving a corresponding playable audio file according to the input information and controlling the 3D audio device to play it.
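The active/inactive dispatch rule in the two embodiments above can be sketched as follows. The class and method names are invented for illustration; the played list stands in for the 3D audio device.

```python
# Minimal sketch of the dispatch rule: active prompts play immediately;
# inactive prompts are stored and only play when matching user input
# arrives. Names are illustrative assumptions, not from the patent.

class AudioDispatcher:
    def __init__(self):
        self.pending = {}   # inactive prompts waiting for a trigger
        self.played = []    # stand-in for the 3D audio device

    def handle_cue(self, name: str, is_active: bool) -> None:
        if is_active:
            self.played.append(name)    # play at once
        else:
            self.pending[name] = name   # log it and wait for input

    def handle_input(self, name: str) -> None:
        if name in self.pending:        # matching input received
            self.played.append(self.pending.pop(name))

d = AudioDispatcher()
d.handle_cue("obstacle_warning", is_active=True)
d.handle_cue("cafe_description", is_active=False)
d.handle_input("cafe_description")
print(d.played)  # ['obstacle_warning', 'cafe_description']
```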
In the embodiment of the present invention, the active prompt information may include, among the environment information related to driving, information that affects the operation of the vehicle; that is, if the environment information acquired by the vehicle requires a change in its driving state, the information is determined to be active prompt information. For example, when a sensor of the vehicle detects an obstacle ahead and the vehicle therefore needs to change lanes, the perceived information about the obstacle ahead is information that needs to be actively presented, so as to forewarn the user of the imminent change in the driving state of the vehicle.
It should be noted that the 3D audio device in the embodiment of the present invention may be a single sound device or a plurality of sound devices, which is not limited by the present disclosure. A single sound device can take two forms: one is an earphone used as the single sound device, which can realize the 3D sound effect; the other is a single sound device containing multiple speakers.
In the embodiment of the invention, a multi-microphone array may be adopted and, in cooperation with an audio processing algorithm, 3D audio content can be acquired and stored in real time. As for playback, the audio sources and playing devices connected to existing multi-channel speaker systems often cannot support multiple channels, and existing 3D audio files are typically static, i.e., used on CDs or in packaged software, and therefore cannot interact with the user. In this embodiment, a Bluetooth-to-FM (Frequency Modulation) conversion may therefore be adopted to transfer a 3D audio file from a smart device running Android or another smart operating system to a multi-channel speaker system, thereby realizing interactive 3D audio. For example, a user may interact with the smart device, which controls the 5.1 channels through the Bluetooth-to-FM conversion, to achieve such dynamic interactive 3D audio.
In an exemplary embodiment, the 3D audio device is given a highly interactive property by incorporating dynamic information, such as an introduction of the surroundings according to the current location of the smart device. The smart device may be positioned based on environment information; for example, in a forest park, when wild animals are encountered, the vehicle can locate them through the vision of a camera installed on the vehicle and play the corresponding information. In other embodiments, positioning may also rely on auxiliary tools such as two-dimensional codes or NFC (Near Field Communication).
In an exemplary embodiment, location-based information content (for example, of an intelligent driving vehicle) is presented as 3D audio, which alleviates the former over-reliance on visual information. The position attribute of 3D audio (i.e., the audio file carries the position of the sound source as three quantities: distance, height, and angle) provides a good carrier for presenting location-based information, which traditional presentation manners cannot achieve.
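The position attribute just described — distance, height, and angle of the sound source — can be represented as a small data structure. The coordinate convention below (azimuth 0° = straight ahead, 90° = right) is an assumption for illustration.

```python
# A minimal representation of the 3D-audio position attribute (distance,
# height, angle) with conversion to Cartesian coordinates relative to the
# listener. The angle convention is an illustrative assumption.
import math
from dataclasses import dataclass

@dataclass
class SourcePosition:
    distance: float   # metres from the listener (straight-line)
    height: float     # metres above the listener's ear level
    angle_deg: float  # azimuth: 0 = ahead, 90 = right, 225 = left rear

    def to_xyz(self) -> tuple[float, float, float]:
        # horizontal component of the straight-line distance
        horiz = math.sqrt(max(self.distance**2 - self.height**2, 0.0))
        a = math.radians(self.angle_deg)
        return (horiz * math.sin(a), horiz * math.cos(a), self.height)

left_rear = SourcePosition(distance=5.0, height=0.0, angle_deg=225.0)
x, y, z = left_rear.to_xyz()
print(round(x, 2), round(y, 2))  # -3.54 -3.54  (left of and behind the listener)
```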
Audio cues in the prior art carry only the content dimension, yet in many scenarios an audio cue needs position information to be presented along with the voice content. For example, when a vehicle explains scenic spots in a scenic area, the sound source position of the 3D audio in the embodiment of the present disclosure helps the user find the specific position of the currently explained scene more conveniently and quickly.
In an exemplary embodiment, the sensors collect the surrounding environment information and decisions are made on it; presenting the environment information sensed by the smart device as 3D audio conveys it to the user more intuitively, so that the user better understands what the device perceives and why it decides as it does, improving human-machine trust.
In an exemplary embodiment, combined with location information, 3D audio may enhance and be organically combined with the presentation of visual information; for example, an audio alert set at a certain location calls the user's attention to the corresponding position in space, thereby drawing the user's attention to the visual information at that location.
For example, many components in the vehicle interior, such as windows, rear-view mirrors, and seats, have no interactive feedback of their own, but playing 3D audio can simulate auditory feedback at their positions. For example, when a user forgets to fasten a seat belt, an audio cue can be played at the position of that seat belt.
In an exemplary embodiment, user input that carries a location attribute may be fed back by means of 3D audio. For example, when a spatial location offers no other channel for interactive feedback, 3D audio can virtualize that location and give the user auditory feedback there, thereby turning the location into an interactive space.
In the embodiment of the present invention, user input with a position attribute is an interaction in which the user provides input at a certain position or key in space. For example, a passenger knocking on a window has no meaning in a traditional environment, but in the embodiment of the present invention the knock can be given interactive meaning, such as triggering retrieval and a voice description of something seen through the window. For example, when a passenger sees a coffee house outside the window and taps on the window, the 3D audio simulates a retrieval sound effect at the window and then gives a voice description of the details of the coffee house outside.
In an exemplary embodiment, 3D audio provides effective assistance for the perception and decision making of the intelligent driving vehicle to establish a good interaction with the human, serving as a good complement to existing visual and auditory information.
According to the method for auxiliary information presentation provided by the embodiment of the invention, a sound field is created by combining 3D playing hardware and software, so that the auditory perception of humans in real life can be simulated more intuitively; the user ignores the positions of the speakers and instead attends to the position of the sound source simulated by the 3D sound field.
Compared with existing methods of presenting orientation information, such as spoken literal descriptions, written text, or visual presentation combined with pictures, the method for auxiliary information presentation provided by the embodiment of the invention conveys the orientation information by directly simulating, through the sound field, a sound emitted at the source position. This is the most efficient and intuitive way, and thus reduces the time and cognitive load people need to understand the orientation information.
In an exemplary embodiment, using the location as meta-information, the amount of information in the original audio content can be reduced. That is, by adding position information to the played audio, the part of the content that would otherwise need to be described in speech can be carried directly by the audio's position, thereby reducing the volume of the audio content. For example, where the interactive content "there is a vehicle entering in the left rear" would originally have to be spoken in full, with 3D audio it suffices to say "there is a vehicle entering", cutting the content almost in half, because the 3D audio already simulates a sound source position in the left rear; for example, that position can be simulated by the two speakers on the left and the rear of the intelligent driving vehicle.
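The content-reduction idea above can be sketched as stripping a direction phrase from the message and moving it into the audio's position metadata. The phrase table and angle values are assumptions for illustration only.

```python
# Illustrative sketch: convert a fully spoken message into a shorter spoken
# part plus a sound-source azimuth carried by the 3D audio. The phrases and
# angles below are invented for demonstration, not the patent's data.

DIRECTION_ANGLES = {
    "in the left rear": 225,
    "in the right rear": 135,
    "ahead": 0,
}

def to_positional_cue(message: str):
    """Return (spoken_text, azimuth_deg or None)."""
    for phrase, angle in DIRECTION_ANGLES.items():
        if phrase in message:
            spoken = message.replace(" " + phrase, "").replace(phrase + " ", "")
            return spoken, angle
    return message, None   # no direction phrase: speak it unchanged

spoken, angle = to_positional_cue("there is a vehicle entering in the left rear")
print(spoken, "| azimuth:", angle)  # there is a vehicle entering | azimuth: 225
```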
In an exemplary embodiment, vehicle body sensors may be called to intuitively and actively relay important information from outside the vehicle into it. For example, when a person outside the vehicle notices abnormal driving and calls out a voice prompt to the passenger/driver inside, the voice content and position of that person can be recorded and the same sound source played inside the vehicle in real time, connecting the occupants with the environment outside. The audio content recorded by a microphone outside the vehicle can be played at the corresponding azimuth by the 3D audio device inside the vehicle.
In an exemplary embodiment, in a navigation application, when a user needs to be informed of a left turn, a right turn, or a similar instruction, the 3D audio device may sound the audio instruction from the location of the desired direction, making the direction of the required turn more intuitive to the user. For example, if the vehicle is about to turn left, an alert sound may be emitted through the speaker at the front left of the vehicle.
In an exemplary embodiment, in a navigation application, a mobile phone or an intelligent driving vehicle can generate position-based audio prompts for the user by judging the traffic conditions of the surrounding environment. For example, when another vehicle approaches from the left rear, the system can play a prompt tone at the left rear of the vehicle, or amplify the ambient sound from the left rear, thereby helping to avoid traffic accidents. The left-rear sound effect here can be simulated by the left and rear speakers.
In an exemplary embodiment, after a vehicle equipped with a driving assistance system has sensed the environment and made a decision, the 3D audio device can play a corresponding warning tone before the actuators perform operations such as braking and steering, informing the user of the vehicle's perception and judgment of the environment and thereby helping the user understand the behavior of the intelligent driving vehicle.
In an exemplary embodiment, when the smart car is charging, the 3D audio device may convey the effect of energy absorption by simulating the sound of electric current moving from outside the car to the inside, thereby intuitively expressing that the smart car is in a charging state.
In the embodiment of the invention, the distance parameter of a sound can be expressed using the 3D audio file; since the electric-current sound effect is a pre-stored audio file, this can be realized by playing it from far to near in a coordinated manner through sound devices such as the four speakers.
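One simple way to realize the far-to-near effect is a distance-dependent gain schedule for the pre-stored sound. The inverse-distance law, step count, and distances below are assumptions for illustration, not the patent's method.

```python
# Sketch of expressing the distance parameter: overall playback gain follows
# an assumed inverse-distance (1/d) law while the virtual source approaches
# the listener from far to near.

def approach_gains(start_m: float = 10.0, end_m: float = 1.0,
                   steps: int = 5, ref_m: float = 1.0) -> list[float]:
    """Gain schedule (0..1) for a source moving from start_m to end_m."""
    out = []
    for i in range(steps):
        d = start_m + (end_m - start_m) * i / (steps - 1)  # linear approach
        out.append(round(min(1.0, ref_m / d), 3))          # 1/d law, clamped
    return out

print(approach_gains())  # [0.1, 0.129, 0.182, 0.308, 1.0]
```

Each gain value would scale the audio file's amplitude at the corresponding playback step, so the sound grows louder as the virtual source "arrives".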
In an exemplary embodiment, when the vehicle accelerates, the 3D audio device may express the acceleration by simulating an electric-current sound moving through the car from rear to front, thereby also addressing the problem that existing electric vehicles run almost silently.
In an exemplary embodiment, when the vehicle decelerates, the 3D audio device may play a sound effect opposite to the acceleration current, moving quickly from front to back and damping out, thereby alerting the users inside the vehicle to the deceleration.
In the embodiment of the invention, one implementation is as follows: the system assigns a seat to the current user, matches the position information of the corresponding 3D audio according to the seat's code pre-stored in the system, and plays the 3D audio in combination with the content of the 3D audio file.
In an exemplary embodiment, when a passenger boards the vehicle, the 3D audio device simulates guidance audio flowing from the door to the passenger's seat.
In an exemplary embodiment, when a passenger gets off, the 3D audio device simulates a guiding sound effect played from the vehicle door toward the direction of the passenger's next destination, thereby giving a navigation prompt for the passenger's subsequent route.
In an exemplary embodiment, when a passenger interacts with the vehicle by voice inside the car, the 3D audio device plays the voice interaction instruction from the simulated position of the vehicle's voice-interaction visual feedback (such as the central control screen, a robot, or a roof light strip), so that the passenger intuitively feels the presence of the voice interaction in the car, which effectively improves the passenger's sense of trust when interacting with the voice interaction system.
In an exemplary embodiment, when the vehicle turns, the 3D audio device simulates a broadcast stream flowing toward the direction of the turn, thereby forewarning the passengers of the vehicle's turn.
In an exemplary embodiment, when the vehicle is started, the 3D audio device simulates a sound effect of electric current moving upward from the chassis and wrapping around the vehicle body, representing the current of start-up spreading over the whole body.
In an exemplary embodiment, when the vehicle is shut down, the 3D audio simulates a sound effect of current falling from the roof to the chassis and fading to silence, representing the vehicle's energy returning to storage.
In an exemplary embodiment, when the vehicle body detects a person, vehicle, bicycle, or other road-moving object that is too close, the 3D audio device records an audio file at the target position and plays it at the same position inside the vehicle, so that the user becomes aware of potential danger in the surrounding road conditions without the ambient-sound experience being disturbed.
It should be noted that the target position is the actual position of the recorded person, vehicle, bicycle, or the like, while "the same position in the car" refers to the target position virtualized through 3D audio when the recording is played inside the car.
In an exemplary embodiment, a person outside the vehicle who wants to communicate with a person inside may speak toward the vehicle. A vision sensor mounted on the vehicle, such as an image sensor, can recognize this situation, whereupon an audio file recorded by a microphone is played inside the vehicle at a location consistent with the person outside, allowing the people inside and outside to communicate.
For example, supposing the person outside is on the left side of the vehicle, the 3D audio device inside the vehicle simulates a sound source position on the left side.
In an exemplary embodiment, in the case of a slight collision or scratch of the vehicle body, or a fault of the vehicle, the position of the source cannot be determined directly from the sensation of impact inside the vehicle. The 3D audio device therefore plays a prompt tone at the corresponding position inside the vehicle according to data obtained by the vehicle's perception system, such as the sensors around the vehicle body, thereby helping the passengers understand the source of the fault.
In the embodiment of the invention, after the sensors around the vehicle body perceive a scratch or collision, other sensors assist in determining where the collision occurred. For example, a microphone array can compute, through an algorithm, the time differences of the collision sound collected by microphones at different positions so as to infer the collision location; the position information is then fed into the system to generate a 3D audio playing file, so that a prompt tone carrying azimuth information is played.
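The time-difference idea above can be sketched with the simplest case: two microphones a fixed distance apart, using a far-field approximation to turn the arrival-time difference into a bearing. The geometry and sample values are illustrative assumptions, not the patent's algorithm.

```python
# Sketch of time-difference-of-arrival (TDOA) direction finding: the delay
# between two microphones gives the angle of the impact source off the
# broadside of the mic pair (far-field approximation). Values are assumed.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def bearing_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Angle (degrees) off broadside of the mic pair, from a time delay."""
    ratio = tdoa_s * SPEED_OF_SOUND / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
    return math.degrees(math.asin(ratio))

# example: impact sound reaches one mic 1 ms before the other, mics 0.5 m apart
angle = bearing_from_tdoa(0.001, 0.5)
print(round(angle, 1))  # 43.3
```

A real system would use more microphone pairs to resolve a full 2D position, which is then handed to the 3D audio playback stage.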
The method for auxiliary information presentation provided by the embodiment of the invention can be used in the interaction design between people (passengers, drivers, or people outside the smart device, where the smart device is an intelligent driving vehicle) and smart devices. By forming 3D audio with the sound devices inside the vehicle and using the position of the sound source in the space created by the 3D audio, the method can assist existing visual information devices in presenting the information acquired by the environment sensors around the body of the intelligent driving vehicle, or present the vehicle and environment information to the passengers or driver through 3D audio alone.
The embodiment of the invention can realize the 3D audio effect by combining hardware and software such as earphones and two-channel or multi-channel speakers, and actively or passively present to the user, in the form of 3D audio, the surrounding environment information sensed by the sensors of the smart device (such as an intelligent driving vehicle) and the decision and/or control information made based on that sensing. On the other hand, the 3D audio assists the existing visual and auditory presentation of existing information presentation devices (e.g., a central control display), thereby improving the efficiency, accuracy, and enjoyment of the user's interaction with the information presentation device.
Fig. 2 shows a flowchart of another method for assisting in the presentation of information in an exemplary embodiment of the present disclosure.
As shown in fig. 2, the method for presenting auxiliary information provided by the present embodiment may include the following steps.
In step S201, the smart device sensor senses environmental information.
In the embodiment of the invention, the system senses the surrounding environment information by using the sensor and collects the sensor information.
In step S202, the environmental information sensed by the sensors is analyzed and processed according to the system settings to obtain an environmental audio file.
In the embodiment of the invention, the system analyzes and processes the sensor information according to system setting, and classifies the sensor information.
In step S203, the environmental audio file is processed into a playable file.
In the embodiment of the invention, the content parameters in the sensor information that are valuable to the preset system are extracted and converted into a playable audio file, i.e., the playable file.
In step S204, the 3D audio playing format is matched with the environmental audio file according to the operation result.
In the embodiment of the invention, the 3D audio playing format is matched with the playable file according to the operation result. The operation result is the classified information; that is, the classification determines the playing format in which the current information needs to be played (the 3D audio playing formats are pre-stored, in roughly eight classes), and matching means pairing the 3D audio playing mode with the content.
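The matching step can be sketched as a lookup from information class to a pre-stored playing format. The patent mentions roughly eight pre-stored classes; the class and format names below are invented for illustration.

```python
# Hypothetical sketch of matching classified information to a pre-stored 3D
# playing format. All names are illustrative assumptions; the patent only
# states that roughly eight format classes are pre-stored.

PLAY_FORMATS = {
    "turn_cue": "directional_ping",      # sound from the turn direction
    "approach_warning": "moving_source", # source tracks the nearby object
    "charging": "outside_in_sweep",      # current sound sweeps inward
    "startup": "chassis_up_sweep",       # current rises from the chassis
}

def match_format(info_class: str) -> str:
    """Pair an information class with its 3D audio playing format."""
    return PLAY_FORMATS.get(info_class, "plain_voice")  # assumed fallback

print(match_format("turn_cue"))      # directional_ping
print(match_format("scenic_intro"))  # plain_voice (no special format)
```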
In step S205, an audio file and a 3D audio playing format pre-stored by the system are called.
In step S206, it is judged whether the current file is active prompt information; if so, the process jumps to step S210; if it is inactive prompt information, the process proceeds to step S207.
In the embodiment of the invention, it is judged whether the called file belongs to the active prompt information. The called files include the audio files and 3D audio playing formats pre-stored in the system; the playable files and their corresponding 3D audio playing formats are obtained according to the sensor information.
In step S207, the file is logged in the system, which waits for a trigger.
In the embodiment of the invention, if the called file is not active prompt information, it is stored in the system.
In step S208, it is determined whether input information input by the user is received; when the input information is received, the process proceeds to step S209; otherwise, the process proceeds to step S207.
In the embodiment of the invention, if the user inputs information, the system retrieves the audio file and 3D audio playing format matching the user's input and drives the 3D audio device to play accordingly.
In step S209, output information corresponding to the input information is retrieved according to the system setting.
In step S210, the player is driven to play the audio content in the form contained in the 3D audio information.
In the embodiment of the invention, if the called file is active prompt information, the current file is played according to the 3D audio playing format.
In some embodiments, the 3D audio playing format and the playable file may be one integral 3D audio file, which may be general audio content, based on mp3, wav, or the like, that can be played over multiple tracks of 5.1 channels or more.
In some embodiments, the generated multi-channel audio playing file, such as MP3 or wav, may be dynamically rendered based on the position information according to the different positions of the player and the listener.
It should be noted that the dynamic rendering has no essential dependence on the player, because the player information stored in the dynamically rendered audio file can drive the player to play it. The rendering of different positions mentioned here only indicates one possibility: since the user's position in the car is fixed, and 3D audio is realized by playing the audio file directly at the sound source position to be simulated, the position of the player does not change either. In the embodiment of the invention, the audio file is played for the passengers through smart devices such as four speakers at the front, rear, left, and right of the vehicle. The dynamic rendering referred to here is the above-mentioned real-time rendering realized by combining a preset 3D audio format with an audio file.
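One possible way to realize the fixed-listener, four-speaker playback just described is to distribute a mono cue across the front, rear, left, and right speakers with gains derived from the desired azimuth. The simple cosine pan law below is an assumption for illustration, not the patent's stored format.

```python
# Sketch of four-speaker amplitude panning for a fixed listener position.
# A source in the left rear (225°) comes out of the rear and left speakers,
# consistent with simulating a left-rear source via those two speakers.
import math

def four_speaker_gains(azimuth_deg: float) -> dict[str, float]:
    """Gains for front/right/rear/left speakers; azimuth 0 = front."""
    a = math.radians(azimuth_deg)
    gains = {
        "front": max(0.0, math.cos(a)),
        "right": max(0.0, math.sin(a)),
        "rear":  max(0.0, -math.cos(a)),
        "left":  max(0.0, -math.sin(a)),
    }
    return {k: round(v, 3) for k, v in gains.items()}

print(four_speaker_gains(225))
# {'front': 0.0, 'right': 0.0, 'rear': 0.707, 'left': 0.707}
```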
Fig. 3 shows a schematic diagram of an apparatus for assisting information presentation in an exemplary embodiment of the present disclosure.
As shown in fig. 3, the present embodiment provides an apparatus 100 for auxiliary information presentation, and the apparatus 100 may include an information collecting module 110, an information processing module 120, and an audio playing module 130.
The information collecting module 110 may be used to collect the sensing information of the smart device through the sensor.
The information processing module 120 may be configured to analyze and process the sensing information to obtain decision information and/or control information of the smart device.
The audio playing module 130 may be configured to control the 3D audio device of the smart device to play corresponding 3D audio according to the perception information and/or the decision information and/or the control information.
Wherein a sound source position of the played 3D audio is related to orientation information of the perceptual information and/or the decision information and/or the control information.
In an exemplary embodiment, the perception information may include: any one or more of intelligent equipment surrounding environment information, intelligent equipment position information and intelligent equipment surrounding service information.
In an exemplary embodiment, the apparatus 100 may further include: an information conversion module for converting the perception information and/or the decision information and/or the control information into a first playable audio file.
In an exemplary embodiment, the apparatus 100 may further include: the pre-storage module is used for pre-storing a second playable audio file and a 3D audio playing format; and the matching module is used for matching the first playable audio file and/or the second playable audio file with a corresponding 3D audio playing format.
In an exemplary embodiment, the apparatus 100 may further include: and the classification storage module is used for classifying and storing the first playable audio file and the second playable audio file.
In an exemplary embodiment, the classification storage module may include a classification unit, and the classification unit may be configured to classify the first playable audio file and the second playable audio file into active cue information and inactive cue information.
In an exemplary embodiment, the audio playing module 130 may include an audio file retrieving unit, a first judging unit, and a first playing unit. The audio file retrieving unit may be configured to retrieve a corresponding playable audio file according to the perception information and/or the decision information and/or the control information. The first judging unit may be configured to judge whether the called playable audio file is active prompt information. The first playing unit may be configured to control the 3D audio device to play the called playable audio file when the called playable audio file is the active prompt information.
In an exemplary embodiment, the audio playing module 130 may further include a second judging unit and a second playing unit. The second determining unit may be configured to determine whether input information is received when the called playable audio file is inactive prompt information. The second playing unit may be configured to, when the input information is received, retrieve a corresponding playable audio file according to the input information and control the 3D audio device to play.
The specific details of each module/unit in the above-mentioned apparatus for presenting auxiliary information have been described in detail in the corresponding method for presenting auxiliary information, and therefore are not described herein again.
Fig. 4 shows a schematic diagram of a smart device in an exemplary embodiment of the present disclosure.
The present embodiment provides a smart device 200, and the smart device 200 may include an environment perception system 210, an information processing system 220, an audio and 3D format generating apparatus 230, and a 3D audio device 240.
The environment perception system 210 may be used to collect the perception information of the smart device.
The environment perception system 210 includes various sensors and collects perception information of the surrounding environment.
The information processing system 220 may be configured to analyze the perception information and generate corresponding decision information and/or control information.
The information processing system 220 is used for processing the perception information of the environment perception system 210. In some embodiments, the information processing system 220 processes the information obtained by the different sensors and classifies it according to a predetermined setting.
The audio and 3D format generating means 230 may be configured to convert the perception information and/or the decision information and/or the control information into a playable audio file and match the playable audio file with a pre-stored 3D audio playing format.
The audio and 3D format generating device 230 is used to convert the perceptual information processed by the information processing system 220 into an audio format that can be played. In some embodiments, the audio and 3D format generating device 230 may store the processed perceptual information according to a classification.
The 3D audio device 240 may be configured to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information.
Wherein the 3D audio device 240 plays the stored audio information/files according to user requirements or actively.
Wherein a sound source position of the played 3D audio is related to orientation information of the perceptual information and/or the decision information and/or the control information.
In an exemplary embodiment, the smart device 200 may include a smart driving vehicle.
An intelligent driving vehicle refers to a vehicle that can communicate with the outside, can plan a path according to the user's transport task or receive an external path plan, and can basically drive autonomously without a driver. It may include an unmanned vehicle (fully autonomous), an assisted-driving vehicle (requiring brief driver intervention), and a driving-assistance vehicle (driven by the driver most of the time). The intelligent driving vehicle travels according to the path plan and a visual map.
It should be noted that "intelligent driving" herein should be understood in a broad sense, including cases where no driver is present at all, as well as cases where autonomous driving is dominant but the driver occasionally takes over control.
The state information of the intelligent driving vehicle comprises the position, the speed, the remaining mileage of the intelligent driving vehicle, the state of a sensor on the intelligent driving vehicle and the like.
In an exemplary embodiment, the environment perception system 210 may include any one or more of a positioning sensor, an auditory sensor, a pressure sensor, a visual sensor, a millimeter-wave radar, a lidar, and the like.
In an exemplary embodiment, the information processing system 220 may also be configured to: classifying the perception information and/or the decision information and/or the control information.
In an exemplary embodiment, the audio and 3D format generating device 230 may further be configured to: and classifying and storing the playable audio files according to the classification.
In an exemplary embodiment, the 3D audio device 240 may also be configured to: judging whether the currently called playable audio file belongs to active prompt information or not according to the classification; and when the called playable audio file is the active prompt message, playing the called playable audio file.
In an exemplary embodiment, the 3D audio device 240 may also be configured to: when the called playable audio file is the inactive prompt message, judging whether the input message is received; and when the input information is received, retrieving a corresponding playable audio file according to the input information for playing.
In an exemplary embodiment, the 3D audio device 240 may include a plurality of speakers provided at preset locations of the smart device.
In an exemplary embodiment, the included angle between the horizontal line and the line connecting each sound device to a preset height at a designated position in the smart device 200 falls between a first preset angle and a second preset angle. Examples may be found in figures 5 and 6 below.
The smart device is described below by way of example with reference to figs. 5 and 6, in which the smart device is an intelligent driving vehicle with four speakers provided in the vehicle.
The intelligent driving vehicle is assumed to be an unmanned vehicle, with a plurality of sensors of various types arranged around its body. It should be noted that the smart device of this embodiment may be any other smart device that is mobile and has 3D audio playing capability.
In the figures, the intelligent driving vehicle comprises a vehicle body interior 1, a vehicle body exterior 2, an LED lamp strip 3, a 3D audio device 4, an in-vehicle infrared device 5, an in-vehicle seat 6, a binocular camera 7, a panoramic camera 8, an ultrasonic radar 9, and a laser radar 10. The four speakers are placed at the front, rear, left, and right of the vehicle. In this embodiment, the distance between each speaker and the seat is not particularly limited, but the line connecting the speaker to the height of the passenger's ears generally forms an angle of -18° to 18° with the horizontal, in order to achieve the best resolution of audio position.
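The -18° to 18° constraint above can be checked geometrically. The following is a minimal sketch under assumed conventions (z is the vertical axis; the function names are illustrative, not from the patent):

```python
import math

def elevation_angle_deg(speaker_xyz, ear_xyz):
    """Angle, in degrees, between the horizontal plane and the line
    from the speaker to the passenger's ear height (z is vertical)."""
    dx = ear_xyz[0] - speaker_xyz[0]
    dy = ear_xyz[1] - speaker_xyz[1]
    dz = ear_xyz[2] - speaker_xyz[2]
    horizontal = math.hypot(dx, dy)       # projection onto the horizontal plane
    return math.degrees(math.atan2(dz, horizontal))

def within_preset_range(angle_deg, first=-18.0, second=18.0):
    """True if the included angle lies between the first and second preset angles."""
    return first <= angle_deg <= second
```

A speaker level with the ears gives 0°, which satisfies the constraint; a speaker mounted far below or above the ear height would fall outside it and degrade localization.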
It should be noted that the sensors and their installation positions, as well as the number of speakers and their installation positions described for the vehicle, are illustrative only and are not intended to limit the scope of the present disclosure.
The specific details of each constituent structure in the intelligent device have been described in detail in the corresponding method and apparatus for presenting auxiliary information, and therefore are not described herein again.
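In an exemplary embodiment, the computation of the orientation information (distance and azimuth angle between the vehicle and a peripheral service) and the distance-based volume adjustment described in this disclosure could be sketched as follows. This is a minimal Python sketch; the coordinate convention and the linear attenuation thresholds are illustrative assumptions, not values from the patent.

```python
import math

def orientation_info(vehicle_pos, service_pos):
    """Distance and azimuth angle (degrees, counterclockwise from the
    x-axis, normalized to [0, 360)) from the vehicle to the service."""
    dx = service_pos[0] - vehicle_pos[0]
    dy = service_pos[1] - vehicle_pos[1]
    distance = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    return distance, azimuth

def volume_for_distance(distance, full_volume_at=10.0, silent_at=200.0):
    """Linear attenuation: full volume within full_volume_at meters,
    fading to silence at silent_at meters (assumed thresholds)."""
    if distance <= full_volume_at:
        return 1.0
    if distance >= silent_at:
        return 0.0
    return (silent_at - distance) / (silent_at - full_volume_at)
```

The 3D audio device would then simulate a sound source at the returned azimuth angle and scale playback by the returned volume, so a nearby service on the left is heard loudly from the left, and a distant one faintly.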
Further, an embodiment of the present disclosure also provides an electronic device, comprising: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the embodiments described above.
Further, the embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method as described in any of the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (20)

1. A method for auxiliary information presentation, applied to a smart device comprising a sensor and a 3D audio device, the method comprising:
acquiring perception information of the intelligent equipment through the sensor, wherein the perception information comprises intelligent equipment position information and intelligent equipment peripheral service information;
analyzing and processing the perception information to obtain decision information and control information of the intelligent equipment;
obtaining a user input having a location attribute;
obtaining orientation information between the intelligent device position information and the intelligent device peripheral service information according to the perception information, the decision information, the control information and the user input with the position attribute, wherein the orientation information comprises a distance and an azimuth between the intelligent device position information and the intelligent device peripheral service information;
and controlling the 3D audio device to simulate a sound source position at the azimuth angle and play corresponding 3D audio, while adjusting the volume of the played 3D audio according to the distance, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information, the decision information, and the control information.
2. The method of claim 1, wherein the perception information further comprises: ambient environment information of the smart device.
3. The method of claim 1, further comprising:
converting the perception information and/or the decision information and/or the control information into a first playable audio file.
4. The method of claim 3, further comprising:
pre-storing a second playable audio file and a 3D audio playing format;
matching the first playable audio file and/or the second playable audio file with a corresponding 3D audio playback format.
5. The method of claim 4, further comprising:
classifying and storing the first playable audio file and the second playable audio file.
6. The method of claim 5, wherein the classifying the first playable audio file and the second playable audio file comprises:
classifying the first playable audio file and the second playable audio file into active cue information and inactive cue information.
7. The method according to claim 6, wherein the controlling the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information comprises:
calling a corresponding playable audio file according to the perception information and/or the decision information and/or the control information;
judging whether the called playable audio file is active prompt information or not;
and when the called playable audio file is active prompt information, controlling the 3D audio device to play the called playable audio file.
8. The method according to claim 7, wherein the controlling the 3D audio device to play the corresponding 3D audio according to the perception information and/or the decision information and/or the control information further comprises:
when the called playable audio file is inactive prompt information, judging whether input information is received;
and when the input information is received, retrieving a corresponding playable audio file according to the input information and controlling the 3D audio device to play it.
9. An apparatus for facilitating presentation of information, comprising:
the information acquisition module is used for acquiring perception information of the intelligent equipment through the sensor, wherein the perception information comprises intelligent equipment position information and intelligent equipment peripheral service information;
the information processing module is used for analyzing and processing the perception information, obtaining decision information and control information of the intelligent equipment and obtaining user input with position attributes;
the audio playing module is used for obtaining orientation information between the intelligent device position information and the intelligent device peripheral service information according to the perception information, the decision information, the control information, and the user input with the position attribute, wherein the orientation information comprises a distance and an azimuth angle between the intelligent device position information and the intelligent device peripheral service information; and for controlling the 3D audio device of the intelligent device to simulate a sound source position at the azimuth angle and play corresponding 3D audio, while adjusting the volume of the played 3D audio according to the distance, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information and/or the decision information and/or the control information.
10. A smart device, comprising:
the environment perception system is used for collecting perception information of the intelligent equipment, and the perception information comprises intelligent equipment position information and intelligent equipment peripheral service information;
the information processing system is used for analyzing and processing the perception information to generate corresponding decision information and control information; obtaining a user input having a location attribute;
an audio and 3D format generating device, configured to convert the perception information, the decision information, the control information, and the user input with the location attribute into a playable audio file, and match the playable audio file with a pre-stored 3D audio playing format;
the 3D audio device is used for obtaining orientation information between the smart device position information and the smart device peripheral service information according to the perception information, the decision information, the control information, and the user input with the position attribute, the orientation information comprising a distance and an azimuth angle between the smart device position information and the smart device peripheral service information; and for simulating a sound source position at the azimuth angle to play corresponding 3D audio, while adjusting the volume of the played 3D audio according to the distance, wherein the sound source position of the played 3D audio is related to the orientation information of the perception information, the decision information, and the control information.
11. The smart device of claim 10, wherein the smart device comprises a smart driving vehicle.
12. The smart device of claim 10 wherein the environmental awareness system comprises any one or more of a position sensor, an audio sensor, a pressure sensor, a visual sensor, a millimeter wave radar, a lidar.
13. The smart device of claim 10, wherein the information processing system is further configured to: classifying the perception information and/or the decision information and/or the control information.
14. The smart device of claim 13 wherein the audio and 3D format generating means is further configured to: classify and store the playable audio files according to the classification.
15. The smart device of claim 14, wherein the 3D audio device is further configured to:
judging whether the currently called playable audio file belongs to active prompt information or not according to the classification;
and when the called playable audio file is active prompt information, playing the called playable audio file.
16. The smart device of claim 14, wherein the 3D audio device is further configured to:
when the called playable audio file is inactive prompt information, judging whether input information is received;
and when the input information is received, retrieving a corresponding playable audio file according to the input information for playing.
17. The smart device of claim 10, wherein the 3D audio device comprises a plurality of speakers disposed at predetermined locations of the smart device.
18. The smart device of claim 17 wherein the angle between the line connecting each speaker to the predetermined height at the predetermined location within the smart device and the horizontal is within a range between a first predetermined angle and a second predetermined angle.
19. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
20. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN201810136218.8A 2018-02-09 2018-02-09 Method and device for auxiliary information presentation Active CN110139205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810136218.8A CN110139205B (en) 2018-02-09 2018-02-09 Method and device for auxiliary information presentation


Publications (2)

Publication Number Publication Date
CN110139205A CN110139205A (en) 2019-08-16
CN110139205B true CN110139205B (en) 2021-11-02

Family

ID=67568160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810136218.8A Active CN110139205B (en) 2018-02-09 2018-02-09 Method and device for auxiliary information presentation

Country Status (1)

Country Link
CN (1) CN110139205B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4060522A4 (en) * 2019-12-27 2022-12-14 Huawei Technologies Co., Ltd. Data generation method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103770780A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Vehicle active safety system alarm shielding device
CN106627359A (en) * 2015-10-02 2017-05-10 福特全球技术公司 Potential hazard indicating system and method
CN107111473A (en) * 2014-10-31 2017-08-29 微软技术许可有限责任公司 For promoting the user interface capabilities interacted between user and its environment
CN107444257A (en) * 2017-07-24 2017-12-08 驭势科技(北京)有限公司 Method and apparatus for presenting information in a vehicle
CN206856593U (en) * 2017-05-09 2018-01-09 上海东动科技有限公司 A kind of vehicle sonification system

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8207837B2 (en) * 2004-03-11 2012-06-26 Bayerische Motoren Werke Aktiengesellschaft Process and apparatus for the output of music information in a vehicle
CN102253965A (en) * 2011-06-02 2011-11-23 李郁文 System and method for receiving or releasing information



Similar Documents

Publication Publication Date Title
EP3121064B1 (en) Vehicle control device and vehicle control method thereof
JP6881444B2 (en) Systems and methods for transmitting information to vehicles, vehicles, and non-transient computer-readable storage media
WO2022062659A1 (en) Intelligent driving control method and apparatus, vehicle, electronic device, and storage medium
CN106114352B (en) Warning method and device based on electric vehicle and vehicle
US20050270146A1 (en) Information processing system
KR102526081B1 (en) Vehicle and method for controlling thereof
WO2023274361A1 (en) Method for controlling sound production apparatuses, and sound production system and vehicle
CN113306491A (en) Intelligent cabin system based on real-time streaming media
WO2020120754A1 (en) Audio processing device, audio processing method and computer program thereof
JP7040513B2 (en) Information processing equipment, information processing method and recording medium
CN115257540A (en) Obstacle prompting method, system, vehicle and storage medium
CN110139205B (en) Method and device for auxiliary information presentation
CN111137212A (en) Rearview mirror device and vehicle
CN110134824B (en) Method, device and system for presenting geographic position information
JP7456490B2 (en) Sound data processing device and sound data processing method
JP4923579B2 (en) Behavior information acquisition device, display terminal, and behavior information notification system
JP2020059401A (en) Vehicle control device, vehicle control method and program
WO2023204076A1 (en) Acoustic control method and acoustic control device
CN111132005B (en) Information processing method, information processing apparatus, vehicle, and computer-readable storage medium
WO2023185002A1 (en) Vibration control system for steering wheel, control method, and vehicle
WO2024043053A1 (en) Information processing device, information processing method, and program
WO2023153314A1 (en) In-vehicle equipment control device and in-vehicle equipment control method
US20240025432A1 (en) Driver assistance system for vehicle
WO2018079584A1 (en) Control device, control system, control method, and program
KR102132058B1 (en) Interactive voice communication system embedded in a car

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant