CN117785091A - Information display method, information display device, vehicle and computer storage medium - Google Patents

Information display method, information display device, vehicle and computer storage medium

Info

Publication number
CN117785091A
CN117785091A
Authority
CN
China
Prior art keywords
user
voice command
voice
vehicle
response information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311806986.7A
Other languages
Chinese (zh)
Inventor
石存杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Rox Intelligent Technology Co Ltd
Original Assignee
Shanghai Rox Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Rox Intelligent Technology Co Ltd filed Critical Shanghai Rox Intelligent Technology Co Ltd
Priority to CN202311806986.7A priority Critical patent/CN117785091A/en
Publication of CN117785091A publication Critical patent/CN117785091A/en
Pending legal-status Critical Current

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The application discloses an information display method, an information display device, a vehicle and a computer storage medium. The information display method includes: receiving a voice command; when it is determined that the voice command was issued by a first user, responding based on the voice command to obtain first response information, where the first user is the user who woke up the voice recognition system; identifying a first position where the first user is located; and determining a target display area according to the first position and displaying the first response information in the target display area. In this way, when several users speak at the same time, only the voice command issued by the first user is responded to, which avoids erroneous responses and improves the user experience. In addition, because the target display area is determined according to the first user's position and the first response information is displayed there, the user can conveniently view the response information.

Description

Information display method, information display device, vehicle and computer storage medium
Technical Field
The application belongs to the technical field of vehicles, and particularly relates to an information display method, an information display device, a vehicle and a computer storage medium.
Background
With the continuous development of technology, vehicles offer more and more configurations and functions, and automobiles are used in an increasing number of scenarios; for example, human-computer interaction technology is now widely applied in the automotive field. Based on voice commands, a human-machine interaction system can control the vehicle to provide navigation, telephone calls, video entertainment and other services for the driver, improving the driving experience.
A current voice recognition system can recognize and respond to voice commands to obtain response information, but the response information is displayed on the driver's display screen; for a user sitting far away from the driver's display screen, the response information is inconvenient to view.
Disclosure of Invention
The embodiments of the application provide an information display method, an information display device, a vehicle and a computer storage medium, to solve the problem in existing human-machine interaction that, after a response is made based on a user's voice command, the resulting response information is inconvenient for the user to view.
In a first aspect, an embodiment of the present application provides an information display method, which is applied to a vehicle-mounted terminal, where the vehicle-mounted terminal includes a voice recognition system, and the method includes:
receiving a voice instruction;
responding based on the voice command under the condition that the voice command is sent by the first user, and obtaining first response information, wherein the first user is a user waking up the voice recognition system;
identifying a first location where the first user is located;
and determining a target display area according to the first position, and displaying the first response information in the target display area.
In a second aspect, an embodiment of the present application provides an information display device, which is applied to a vehicle-mounted terminal, where the vehicle-mounted terminal includes a voice recognition system, and the device includes:
the receiving module is used for receiving the voice instruction;
the acquisition module is used for responding based on the voice command to obtain first response information under the condition that the voice command is determined to be sent by the first user, wherein the first user is a user who wakes up the voice recognition system;
the identification module is used for identifying a first position where the first user is located;
the determining module is used for determining a target display area according to the first position;
and the display module is used for displaying the first response information in the target display area.
In a third aspect, embodiments of the present application provide a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product for implementing a method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a vehicle comprising the computer storage medium of the third aspect, or the computer program product of the fourth aspect.
In the information display method, device, vehicle and computer storage medium of the embodiments of the application, the method includes: receiving a voice command; when it is determined that the voice command was issued by a first user, responding based on the voice command to obtain first response information, where the first user is the user who woke up the voice recognition system; identifying a first position where the first user is located; and determining a target display area according to the first position and displaying the first response information in the target display area. In this way, when several users speak at the same time, only the voice command issued by the first user is responded to, which avoids erroneous responses and improves the user experience. In addition, because the target display area is determined according to the first user's position and the first response information is displayed there, the user can conveniently view the response information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described, and it is possible for a person skilled in the art to obtain other drawings according to these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an information display method according to an embodiment of the present application;
fig. 2a is a schematic diagram of distribution of passengers, cameras and microphones in a vehicle according to an embodiment of the present application;
fig. 2b is a schematic diagram of distribution of passengers, cameras and microphones in a vehicle according to an embodiment of the present application;
fig. 2c is a schematic flow chart of an information display method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an information display device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, features and exemplary embodiments of various aspects of the present application are described in detail below in conjunction with the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to illustrate the application, not to limit it. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element.
Fig. 1 is a flow chart illustrating an information display method according to an embodiment of the present application. As shown in fig. 1, the information display method provided in the embodiment of the present application is applied to a vehicle-mounted terminal, where the vehicle-mounted terminal includes a voice recognition system, and the method includes the following steps 101 to 105, where:
step 101, receiving a voice instruction.
It should be noted that, before receiving the voice command, the voice recognition system is awakened by the first user.
Step 102, responding based on the voice command to obtain first response information under the condition that the voice command is sent by the first user, wherein the first user is a user who wakes up the voice recognition system.
After the voice command is received, it is analyzed to determine whether it was issued by the first user.
The embodiment of the application provides two analysis modes. Mode 1: perform tone analysis on the voice command to obtain a tone label of the voice command, and determine that the voice command was issued by the first user if the tone label of the voice command is identical to the tone label of the first user. Mode 2: track the position from which the voice command was issued, lock the position of the voice command outputter, photograph that position with a camera to obtain a facial image of the voice command outputter, and determine that the voice command was issued by the first user if this facial image matches the facial image of the first user who woke up the voice recognition system. Fig. 2a and fig. 2b are schematic diagrams of the distribution of passengers, cameras and microphones in a vehicle: in fig. 2a the microphones and cameras are arranged in pairs, and in fig. 2b the number of microphones equals the number of passengers.
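As a concrete illustration of the two analysis modes above, the following Python sketch shows one possible way to combine them; the names TimbreAnalyzer, CameraArray, face_match and the position strings are assumptions introduced only for illustration and are not interfaces defined by this application.

```python
from typing import Callable, Optional, Protocol


class TimbreAnalyzer(Protocol):
    def label(self, audio: bytes) -> str: ...       # returns a tone color label


class CameraArray(Protocol):
    def capture_face(self, position: str) -> Optional[bytes]: ...  # face at a seat position


def is_from_first_user(audio: bytes,
                       source_position: str,
                       first_user_label: str,
                       first_user_face: bytes,
                       analyzer: TimbreAnalyzer,
                       cameras: CameraArray,
                       face_match: Callable[[bytes, bytes], bool]) -> bool:
    """Attribute a voice command to the first (waking) user using either mode."""
    # Mode 1: the tone color label of the command equals the first user's label.
    if analyzer.label(audio) == first_user_label:
        return True
    # Mode 2: photograph the locked source position and compare the captured face
    # with the face of the user who woke up the voice recognition system.
    face = cameras.capture_face(source_position)
    return face is not None and face_match(face, first_user_face)
```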
The first response information may be response result information. For example, if the voice command is to play an animation, the animation is played in the target display area; if the voice command is to display the control panel, the control panel is displayed in the target display area. If the response fails, the reason for the failure may be displayed in the target display area, for example that the network signal is poor.
In response to the voice command, the seat posture may be adjusted; in this case the first response information may be notification information such as "seat posture adjustment started" or "seat posture adjustment completed". The response may also be broadcast by voice at the same time, for example "seat posture adjustment started" or "seat posture adjustment completed".
Step 103, identifying a first position where the first user is located.
The first position may be the position of the first user within the vehicle. The position may be identified based on a microphone array and cameras disposed in the vehicle: for example, the approximate location of the voice command outputter within the vehicle is first located through the microphone array, and a camera covering that approximate range then takes a picture, so that the first position of the first user can be determined to be the driver's seat, the front passenger seat, a rear-row seat, or the like.
And 104, determining a target display area according to the first position.
For example, when the vehicle includes a plurality of display screens, the display screen closest to the first position is taken as the target display area; alternatively, the display screen closest to the first position is taken as the target display screen, and the area of that screen closest to the first position is taken as the target display area. For instance, if the first position is the front passenger seat, the display screen in front of the front passenger seat is used as the target display area; if the first position is the left rear seat and the rear row shares one display screen, the rear-row display screen is taken as the target display screen and its left half is taken as the target display area.
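For illustration only, the rule just described can be expressed as a simple lookup; the seat names, screen identifiers and the split of a shared rear screen into halves below are assumptions about one possible cabin layout, not requirements of the application.

```python
from typing import Tuple

# Hypothetical cabin layout: one screen each for the driver and front passenger,
# and one shared rear-row screen split into a left and a right half.
_AREA_FOR_POSITION = {
    "driver":          ("driver_screen",          "full"),
    "front_passenger": ("front_passenger_screen", "full"),
    "rear_left":       ("rear_shared_screen",     "left_half"),
    "rear_right":      ("rear_shared_screen",     "right_half"),
}


def target_display_area(first_position: str) -> Tuple[str, str]:
    """Return (target display screen, area within that screen) for the first user."""
    return _AREA_FOR_POSITION[first_position]
```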
And step 105, displaying the first response information in the target display area.
For example, the first response information may be text, text and a picture, or text and a video; it is displayed in the target display area and may simultaneously be broadcast by voice. The first response information may also be vehicle control information, in which case a control panel is displayed in the target display area so that the user can conveniently perform touch operations. The first response information may also be vehicle safety information triggered by the driver; such information is displayed on the infotainment system, the instrument cluster and the HUD, and is linked with the steering wheel and the seat.
In this embodiment, a voice command is received; when it is determined that the voice command was issued by the first user, a response is made based on the voice command to obtain first response information, where the first user is the user who woke up the voice recognition system; a first position where the first user is located is identified; and a target display area is determined according to the first position and the first response information is displayed in that area. In this way, when several users speak at the same time, only the voice command issued by the first user is responded to, which avoids erroneous responses and improves the user experience. In addition, because the target display area is determined according to the first user's position and the first response information is displayed there, the user can conveniently view the response information.
In an embodiment of the present application, before the receiving the voice command, the method further includes:
under the condition of receiving a wake-up instruction, waking up the voice recognition system;
locating a second position of the wake-up instruction outputter within the vehicle by a microphone array provided in the vehicle;
determining a first camera according to the second position, and acquiring a facial image of the wake-up instruction outputter through the first camera;
the wake-up instruction exporter is determined to be the first user.
In this example, the voice recognition system is woken up when a wake-up instruction is received; the wake-up instruction may be set according to the actual situation and is not limited here. After receiving the wake-up instruction, the vehicle-mounted terminal determines the microphone in the microphone array that is closest to the wake-up instruction outputter. For example, the microphone array includes a plurality of microphones; after they collect the wake-up instruction, the sound intensity collected by each microphone differs because each microphone is at a different distance from the sound source, so the microphone closest to the wake-up instruction outputter can be determined from the collected sound intensities. The second position is then determined according to the position of that microphone: for example, taking the microphone's position as the center, the spatial area within a preset distance of the microphone belongs to the second position. The preset distance may be 30 cm, 50 cm, etc., and may be set according to the actual situation without limitation here. The second position is regarded as the position of the wake-up instruction outputter.
A plurality of cameras may be arranged in the vehicle, each covering one area. After the second position is determined, the camera covering the second position is determined to be the first camera, and the picture taken by the first camera is analyzed to obtain the facial image of the wake-up instruction outputter. The wake-up instruction outputter is taken as the first user and the first user's facial image is stored; by later comparing facial images it can be judged whether the outputter of a voice command and the first user are the same person, which improves the accuracy of voice recognition and the human-machine dialogue experience.
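A minimal sketch of this wake-up flow, under the assumption that the microphone reporting the highest sound intensity marks the second position and that each cabin zone is covered by one camera; the zone identifiers, intensity values and camera interface are hypothetical.

```python
from typing import Dict, Optional, Protocol


class ZoneCamera(Protocol):
    def capture_face(self) -> Optional[bytes]: ...


def second_position(wake_word_levels: Dict[str, float]) -> str:
    """wake_word_levels maps a microphone/zone id to the measured sound intensity
    of the wake-up instruction; the loudest microphone marks the second position."""
    return max(wake_word_levels, key=wake_word_levels.get)


def register_first_user(wake_word_levels: Dict[str, float],
                        camera_for_zone: Dict[str, ZoneCamera]) -> Optional[bytes]:
    zone = second_position(wake_word_levels)   # second position
    first_camera = camera_for_zone[zone]       # camera covering that zone (first camera)
    face = first_camera.capture_face()         # face of the wake-up instruction outputter
    return face                                # stored as the first user's facial image
```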
In one embodiment of the present application, receiving a voice command includes:
locating a third position of the voice command outputter within the vehicle by the microphone array during receipt of the voice command;
determining a second camera according to the third position, and acquiring a facial image of the voice instruction outputter through the second camera;
and if the facial image of the voice command output person is matched with the facial image of the wake-up command output person, determining that the voice command is sent by the first user.
Similar to the process of acquiring the facial image of the wake-up instruction outputter, the microphone array is used during reception of the voice command to locate the third position of the voice command outputter within the vehicle. The specific process may be: determine the microphone in the microphone array that is closest to the voice command outputter. Because each microphone is at a different distance from the sound source, the sound intensity of the voice command collected by each microphone differs, so the closest microphone can be determined from the collected sound intensities. The third position is then determined according to the position of that microphone: for example, taking the microphone's position as the center, the spatial area within a preset distance of the microphone belongs to the third position. The preset distance may be 30 cm, 50 cm, etc., and may be set according to the actual situation without limitation here. The third position is regarded as the position of the voice command outputter.
After the third position is determined, the camera covering the third position is determined to be the second camera, and the picture taken by the second camera is analyzed to obtain the facial image of the voice command outputter. The facial image of the voice command outputter is then matched against the facial image of the wake-up instruction outputter; if the match succeeds, it is determined that the voice command was issued by the first user.
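The same idea applied to the voice command itself, again as a hypothetical sketch: the loudest microphone during the command gives the third position, the camera covering that zone plays the role of the second camera, and the two facial images are compared.

```python
from typing import Callable, Dict, Optional


def command_is_from_first_user(command_levels: Dict[str, float],
                               capture_face_in_zone: Callable[[str], Optional[bytes]],
                               first_user_face: bytes,
                               face_match: Callable[[bytes, bytes], bool]) -> bool:
    # Third position: the zone of the microphone that heard the command loudest.
    zone = max(command_levels, key=command_levels.get)
    # Second camera: the camera covering that zone photographs the speaker.
    face = capture_face_in_zone(zone)
    # The command is attributed to the first user only if the facial images match.
    return face is not None and face_match(face, first_user_face)
```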
In this way it can be determined whether the outputter of the voice command is the first user, and therefore whether the voice command should be responded to. With this approach, the first user can still be recognized even if the first user's position changes, for example because the seat is moved forwards or backwards or because the first user actively moves to another seat. In the prior art, the vehicle cabin is divided into sound zones, and a voice dialogue carried out across sound zones is easily misrecognized with high probability; compared with this, the face matching recognition provided by the application can greatly reduce the false recognition rate when a user carries out a voice dialogue across sound zones.
In yet another embodiment of the present application, after the receiving the voice command, in a case where it is determined that the voice command is sent by the first user, responding based on the voice command, and before obtaining the first response information, the method further includes:
performing tone analysis on the voice command to obtain a tone label of the voice command;
determining that the voice command is sent out by the first user under the condition that the tone color label of the voice command is the same as a first tone color label, wherein the first tone color label is a tone color label of voice of the voice recognition system awakened by the first user;
and under the condition that the voice command is sent by the first user, responding based on the voice command to obtain first response information, wherein the first response information comprises the following steps:
and if the voice command is used for controlling the first object, responding to the voice command to obtain first response information, wherein the first object is an object which is not controlled by other users except the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system.
In this embodiment, it is determined whether the voice command and the wake command are issued by the same user by performing a tone color analysis on the voice command.
Under the condition that the vehicle-mounted terminal receives the wake-up instruction, performing tone analysis on the wake-up instruction to obtain a first tone label; and under the condition that the vehicle-mounted terminal receives the voice command, performing tone analysis on the voice command to obtain a tone label of the voice command, and if the tone label of the voice command is identical to the first tone label, determining that the voice command is sent by the first user.
Semantic analysis is performed on the voice command to obtain the first object to be controlled; the first object may be an air conditioner, a media player, a sunroof, or the like. The voice command may, for example, be a command to turn on the air conditioner, turn off the air conditioner, play music, or open the sunroof, i.e. a command that operates the first object.
The first object is an object that has not been manipulated by other users within the historical time period, where the other users are users other than the first user. For example, suppose the first user is a passenger and the other user is the owner who is driving the vehicle. If the owner has not operated the air conditioner within the historical time period and the passenger's voice command is to turn on the air conditioner, the command is responded to and the air conditioner is turned on. If the owner has operated the air conditioner within the historical time period, for example has turned it on, and the passenger's voice command is to turn the air conditioner off, the command is not responded to. The end time of the historical time period may be the time at which the voice command is responded to.
In this way, a voice command is responded to only when the object it controls has not been controlled by another user within the historical time period, so that the first user's operations on in-vehicle objects do not conflict with the operations of other users. For example, if another user and the first user issue voice commands within the same time period, the other user says "turn on the air conditioner", the first user says "turn off the air conditioner", and the other user's command was output earlier than the first user's, then the first user's voice command is not responded to.
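The conflict rule described above can be sketched with an assumed manipulation history; the record structure and field names are illustrative and not part of the application.

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class ManipulationRecord:
    obj: str          # e.g. "air_conditioner", "sunroof", "media_player"
    user: str         # identifier of the user who performed the operation
    timestamp: float  # when the operation happened


def may_respond_to_first_user(target_obj: str,
                              first_user: str,
                              wake_up_time: float,
                              history: Iterable[ManipulationRecord]) -> bool:
    """Respond only if no user other than the first user has manipulated the
    target object since the voice recognition system was woken up."""
    return not any(
        rec.obj == target_obj and rec.user != first_user and rec.timestamp >= wake_up_time
        for rec in history
    )
```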
In the above, tone recognition can accurately identify the voice command outputter, ensuring that the vehicle-mounted terminal interacts with the intended user. Even when several voice commands are output in the same in-vehicle area at the same time, non-effective input commands can be filtered out through tone recognition, reducing the interference of useless voice commands with the ongoing voice dialogue and ensuring that the voice that really needs to interact can interact efficiently with the vehicle-mounted terminal.
Following the principle of "whoever wakes up is whoever is recognized", during an effective voice interaction the first user's tone is treated as the only valid voice command tone unless the voice recognition is forcibly interrupted or the user actively exits. Voice interaction is supported both between multiple users across sound zones and between different users within the same sound zone, and the accuracy of voice interaction recognition is ensured through tone recognition. When a user in the same sound zone triggers a new voice interaction, it can still be started through tone analysis without affecting the established voice interaction, so the method is highly robust.
In yet another embodiment of the present application, after the receiving the voice instruction, the method further comprises:
performing tone analysis on the voice command to obtain a tone label of the voice command;
determining that the voice command is sent by a second user under the condition that the tone color label of the voice command is the same as a second tone color label, wherein the second tone color label is a preset tone color label of voice;
and responding based on the voice instruction to obtain second response information.
In the foregoing, the second tone color label may be preset, for example, the vehicle owner inputs his own voice into the vehicle-mounted system in advance, and the vehicle-mounted system performs tone color analysis on the voice to obtain the second tone color label, where the second tone color label is the tone color label of the vehicle owner. The owner is the second user, and in addition, the voice of other people can be input to determine the second tone color label, which is not limited herein. The number of the second tone color tags may be one, or may be two or more, and is not limited herein.
In this embodiment, the second user may control an object that has already been controlled by the first user within the historical time period. For example, the first user has controlled the air conditioner within the historical time period by outputting a voice command to turn it on, and the air conditioner is currently on; if the second user says "turn off the air conditioner", the command is responded to and the air conditioner is turned off.
In this embodiment, even if the outputter of the voice command and the user who woke up the voice recognition system are not the same user, the voice command can still be responded to, provided that the tone color label of the voice command is the same as the tone color label of the preset voice.
In another embodiment of the present application, the second user is not allowed to manipulate an object that the first user has already manipulated within the historical time period; that is, responding based on the voice command to obtain the second response information includes:
if the voice command is used for controlling a second object, responding to the voice command to obtain second response information, wherein the second object is an object which is not controlled by the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system;
and displaying the second response information on a preset target screen.
In this embodiment, the second user cannot control an object that has already been controlled by the first user within the historical time period. For example, the first user has controlled the air conditioner within the historical time period by outputting a voice command to turn it on, and the air conditioner is currently on; if the second user says "turn off the air conditioner", the command is not responded to and the air conditioner remains on. The second user can only manipulate objects that the first user has not manipulated within the historical time period.
The target screen may be preset, for example, a screen corresponding to the main driver, which is not limited herein.
In this embodiment, a voice command output by the second user cannot control an object that the first user has controlled within the historical time period, so conflicts between the second user's voice commands and the first user's intention are avoided, more scenario requirements are met, and the user experience is improved.
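A companion sketch for this second-user rule, reusing the same assumed history records as in the earlier sketch (objects with obj, user and timestamp fields): a preset-tone user is answered only for objects the first user has not manipulated since wake-up, and the second response information is then shown on the preset target screen.

```python
from typing import Iterable


def may_respond_to_second_user(target_obj: str,
                               first_user: str,
                               wake_up_time: float,
                               history: Iterable) -> bool:
    """Respond to the second user only if the first user has not manipulated
    the target object since the voice recognition system was woken up.
    `history` holds records shaped like the ManipulationRecord sketched above."""
    return not any(
        rec.obj == target_obj and rec.user == first_user and rec.timestamp >= wake_up_time
        for rec in history
    )
```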
Fig. 2c is a flowchart of an information display method according to an embodiment of the present application, including:
(1) Voice triggering: a voice command wakes up the vehicle-mounted voice recognition system;
(2) Joint positioning: after the vehicle-mounted voice recognition system is triggered (by voice, or started by a user in the vehicle through a physical key), the voice command outputter is synchronously located through the in-vehicle microphone array and camera array. The in-vehicle position of the voice command outputter is initially located from the microphone-array sound source, while the camera covering that area captures the person's facial information, so that the exact position of the person inputting the voice command is accurately tracked; the camera then performs dynamic position tracking of that person.
(3) Vehicle-machine interaction: once the person's exact position has been located, the information is transmitted to the cabin host, which then starts intelligent voice interaction with the person at the designated position. Even if the person dynamically changes position inside the vehicle during the voice interaction, the camera can still track them accurately, which prevents playback at the wrong position or collection of voice commands from the wrong area and thus improves voice recognition efficiency.
(4) Tone color identification: accurately locating the person's position in the vehicle ensures that the vehicle system can effectively interact with the user in that area, but several voice input commands may still exist in the same area at the same time. In this situation a voice tone recognition system is needed to filter out non-effective input commands, so that the voice that really needs to interact can interact efficiently with the vehicle system. Following the principle of "whoever wakes up is whoever is recognized", during an effective voice interaction the waking person's tone is recognized as the only valid voice command tone unless the voice recognition is interrupted.
(5) This embodiment supports simultaneous voice interaction by multiple users across multiple sound zones while ensuring positioning accuracy and voice-interaction recognition accuracy. A hypothetical end-to-end sketch of this flow is given below.
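Purely as an illustration of how steps (1) to (4) could fit together, the following loop assumes a microphone array, a camera array, a tone analyzer and a cabin host with the interfaces shown; none of these interfaces are specified by the application and all names are hypothetical.

```python
def interaction_loop(mic_array, camera_array, tone_analyzer, cabin_host):
    # (1) Voice triggering: block until the wake-up word is detected.
    wake_audio, wake_levels = mic_array.wait_for_wake_word()
    # (2) Joint positioning: the loudest microphone marks the zone; that zone's
    # camera captures the waking user's face for later tracking.
    zone = max(wake_levels, key=wake_levels.get)
    first_user_face = camera_array.capture_face(zone)
    first_user_tone = tone_analyzer.label(wake_audio)
    # (3) + (4) Vehicle-machine interaction with tone color filtering: only
    # commands whose tone matches the waking user's are forwarded.
    for command_audio in mic_array.commands():
        if tone_analyzer.label(command_audio) != first_user_tone:
            continue  # non-effective input from another occupant is filtered out
        response = cabin_host.respond(command_audio)
        # Display near the (possibly moved) first user, tracked by camera.
        cabin_host.display(response, zone=camera_array.track(first_user_face))
```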
Fig. 3 shows a structural diagram of an information display device provided in an embodiment of the present application. As shown in fig. 3, the information display apparatus 300 is applied to a vehicle-mounted terminal including a voice recognition system, and includes:
a receiving module 301, configured to receive a voice instruction;
the first obtaining module 302 is configured to respond based on the voice command to obtain first response information when it is determined that the voice command is sent by the first user, where the first user is a user who wakes up the voice recognition system;
an identifying module 303, configured to identify a first location where the first user is located;
a first determining module 304, configured to determine a target display area according to the first position;
and a display module 305, configured to display the first response information in the target display area.
In an embodiment of the present application, the apparatus further includes:
the wake-up module is used for waking up the voice recognition system under the condition of receiving a wake-up instruction;
a first positioning module for positioning a second position of the wake-up instruction outputter in the vehicle through a microphone array provided in the vehicle;
the second acquisition module is used for determining a first camera according to the second position and acquiring a facial image of the awakening instruction outputter through the first camera;
and the second determining module is used for determining the wake-up instruction outputter as the first user.
In an embodiment of the present application, the receiving module 301 includes:
a second positioning module for positioning a third position of the voice command outputter in the vehicle through the microphone array in the process of receiving the voice command;
the third acquisition module is used for determining a second camera according to the third position and acquiring a facial image of the voice instruction outputter through the second camera;
and the third determining module is used for determining that the voice command is sent out by the first user if the facial image of the voice command outputter is matched with the facial image of the wake-up command outputter.
In an embodiment of the present application, the apparatus further includes:
the first analysis module is used for performing tone analysis on the voice command to obtain a tone label of the voice command;
a fourth determining module, configured to determine that the voice command is sent by the first user when a tone color tag of the voice command is the same as a first tone color tag, where the first tone color tag is a tone color tag of voice that the first user wakes up the voice recognition system;
accordingly, the first obtaining module 302 is configured to:
and if the voice command is used for controlling the first object, responding to the voice command to obtain first response information, wherein the first object is an object which is not controlled by other users except the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system.
In an embodiment of the present application, the apparatus further includes:
the second analysis module is used for performing tone analysis on the voice command to obtain a tone label of the voice command;
a fifth determining module, configured to determine that the voice command is sent by a second user when a tone color tag of the voice command is the same as a second tone color tag, where the second tone color tag is a preset tone color tag of voice;
and the response module is used for responding based on the voice instruction to obtain second response information.
In an embodiment of the present application, the response module includes:
the response sub-module is used for responding to the voice command to obtain second response information if the voice command is used for controlling a second object, wherein the second object is an object which is not controlled by the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system;
and the display sub-module is used for displaying the second response information on a preset target screen.
In an embodiment of the present application, the first determining module 304 is configured to:
in the case where the vehicle includes a plurality of display screens, a display screen closest to the first position is taken as a target display area;
alternatively, in the case where the vehicle includes a plurality of display screens, a display screen closest to the first position is taken as a target display screen, and an area closest to the first position in the target display screen is taken as a target display area.
The information display device 300 provided in the embodiment of the present application can implement each process implemented by the foregoing embodiment of the information display method, and in order to avoid repetition, a detailed description is omitted here.
Fig. 4 shows a schematic hardware structure of a vehicle implementing an information display method according to an embodiment of the present application.
The vehicle may include a processor 601 and a memory 602 storing computer program instructions.
In particular, the processor 601 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 602 may include mass storage for data or instructions. By way of example, and not limitation, memory 602 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of the above. The memory 602 may include removable or non-removable (or fixed) media, where appropriate. Memory 602 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 602 is a non-volatile solid state memory.
The memory may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory comprises one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to the method according to the first aspect of the disclosure.
The processor 601 implements any one of the information display methods of the above embodiments by reading and executing computer program instructions stored in the memory 602.
In one example, the vehicle may also include a communication interface 603 and a bus 610. As shown in fig. 4, the processor 601, the memory 602, and the communication interface 603 are connected to each other through a bus 610 and perform communication with each other.
The communication interface 603 is mainly configured to implement communication between each module, apparatus, unit and/or device in the embodiments of the present application.
Bus 610 includes hardware, software, or both that couple the components of the information display device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of the above. Bus 610 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the information display method in the above embodiments, an embodiment of the present application may provide a computer storage medium on which computer program instructions are stored; the computer program instructions, when executed by a processor, implement any of the information display methods of the above embodiments.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (10)

1. An information display method, characterized by being applied to a vehicle-mounted terminal, the vehicle-mounted terminal including a voice recognition system, the method comprising:
receiving a voice instruction;
under the condition that the voice command is sent by a first user, responding based on the voice command to obtain first response information, wherein the first user is a user waking up the voice recognition system;
identifying a first location where the first user is located;
and determining a target display area according to the first position, and displaying the first response information in the target display area.
2. The method of claim 1, wherein prior to said receiving a voice command, the method further comprises:
under the condition of receiving a wake-up instruction, waking up the voice recognition system;
locating a second position of the wake-up instruction outputter within the vehicle by a microphone array provided in the vehicle;
determining a first camera according to the second position, and acquiring a facial image of the wake-up instruction outputter through the first camera;
the wake-up instruction exporter is determined to be the first user.
3. The method of claim 2, wherein the receiving the voice instruction comprises:
locating a third position of the voice command outputter within the vehicle by the microphone array during receipt of the voice command;
determining a second camera according to the third position, and acquiring a facial image of the voice instruction outputter through the second camera;
and if the facial image of the voice command output person is matched with the facial image of the wake-up command output person, determining that the voice command is sent by the first user.
4. The method of claim 1, wherein after the receiving the voice command, in a case where it is determined that the voice command is issued by the first user, responding based on the voice command, and before obtaining the first response information, the method further comprises:
performing tone analysis on the voice command to obtain a tone label of the voice command;
determining that the voice command is sent out by the first user under the condition that the tone color label of the voice command is the same as a first tone color label, wherein the first tone color label is a tone color label of voice of the voice recognition system awakened by the first user;
and under the condition that the voice command is sent by the first user, responding based on the voice command to obtain first response information, wherein the first response information comprises the following steps:
and if the voice command is used for controlling the first object, responding to the voice command to obtain first response information, wherein the first object is an object which is not controlled by other users except the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system.
5. The method of claim 1, wherein after the receiving the voice command, the method further comprises:
performing tone analysis on the voice command to obtain a tone label of the voice command;
determining that the voice command is sent by a second user under the condition that the tone color label of the voice command is the same as a second tone color label, wherein the second tone color label is a preset tone color label of voice;
and responding based on the voice instruction to obtain second response information.
6. The method of claim 5, wherein responding based on the voice command to obtain second response information comprises:
if the voice command is used for controlling a second object, responding to the voice command to obtain second response information, wherein the second object is an object which is not controlled by the first user in a historical time period, and the starting time of the historical time period is the time when the first user wakes up the voice recognition system;
and displaying the second response information on a preset target screen.
7. The method of claim 1, wherein determining a target display area from the first location comprises:
in the case that the vehicle includes a plurality of display screens, taking the display screen closest to the first position as a target display area;
alternatively, in the case where the vehicle includes a plurality of display screens, a display screen closest to the first position is taken as a target display screen, and an area closest to the first position in the target display screen is taken as a target display area.
8. An information display device, characterized by being applied to a vehicle-mounted terminal including a voice recognition system, comprising:
the receiving module is used for receiving the voice instruction;
the acquisition module is used for responding based on the voice command to obtain first response information under the condition that the voice command is determined to be sent by a first user, wherein the first user is a user who wakes up the voice recognition system;
the identification module is used for identifying a first position where the first user is located;
the determining module is used for determining a target display area according to the first position;
and the display module is used for displaying the first response information in the target display area.
9. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any of claims 1-7.
10. A vehicle comprising the computer storage medium of claim 9.
CN202311806986.7A 2023-12-25 2023-12-25 Information display method, information display device, vehicle and computer storage medium Pending CN117785091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311806986.7A CN117785091A (en) 2023-12-25 2023-12-25 Information display method, information display device, vehicle and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311806986.7A CN117785091A (en) 2023-12-25 2023-12-25 Information display method, information display device, vehicle and computer storage medium

Publications (1)

Publication Number Publication Date
CN117785091A true CN117785091A (en) 2024-03-29

Family

ID=90401232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311806986.7A Pending CN117785091A (en) 2023-12-25 2023-12-25 Information display method, information display device, vehicle and computer storage medium

Country Status (1)

Country Link
CN (1) CN117785091A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination