CN116560610A - Interaction method and device, storage medium and electronic device - Google Patents

Interaction method and device, storage medium and electronic device

Info

Publication number
CN116560610A
Authority
CN
China
Prior art keywords
information
target
vehicle
user
responded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210109418.0A
Other languages
Chinese (zh)
Inventor
蔡若冰
陈川
徐俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Futurus Technology Co Ltd
Original Assignee
Futurus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futurus Technology Co Ltd filed Critical Futurus Technology Co Ltd
Priority to CN202210109418.0A priority Critical patent/CN116560610A/en
Publication of CN116560610A publication Critical patent/CN116560610A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/48Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The invention provides an interaction method and device, a storage medium and an electronic device, wherein the method comprises the following steps: determining the initiating position of information to be responded in the vehicle; selecting a target device related to the initiating position from a plurality of electronic devices in the vehicle at least according to the initiating position; and displaying the information to be responded through the target device. The invention solves the problems of poor display effect and poor privacy of the in-vehicle display system in the related art, and achieves the effects of protecting user privacy and improving user experience.

Description

Interaction method and device, storage medium and electronic device
Technical Field
The invention relates to an interaction method and device, a storage medium and an electronic device.
Background
Because the interior space of an existing vehicle is not partitioned, when an audio demand such as a telephone call, a voice call or a video invitation comes in, entertainment or other content being played in the vehicle is often forcibly interrupted, and multiple occupants cannot use the in-vehicle display system (for example, the audio system and the video system used for communication) at the same time. In addition, when information is played in the vehicle, private content is not protected.
Disclosure of Invention
The embodiment of the invention provides an interaction method and device, a storage medium and an electronic device, which are used to at least solve the problems of poor display effect and poor privacy of in-vehicle display systems in the related art.
According to an embodiment of the present invention, there is provided an interaction method including: determining the initiating position of information to be responded in the vehicle; selecting a target device related to the initiating position from a plurality of electronic devices in the vehicle at least according to the initiating position; and displaying the information to be responded through the target equipment.
For example, in some embodiments, the method further comprises: determining user information according to the initiating position; and displaying the information to be responded in a preset mode through the target equipment according to the user information.
For example, in some embodiments, the displaying, by the target device, the information to be responded in a preset manner includes: and playing target sound information and target display content through the target equipment, wherein the target sound information and the target display content correspond to the user information.
For example, in some embodiments, the target device includes a target sound projecting device and a target display device; the method further comprises the steps of: playing target sound information corresponding to the user information through the target sound equipment; and playing the target display content corresponding to the user information through the target display device.
For example, in some embodiments, the target display device comprises at least one of a heads-up display and an in-vehicle display screen, and the target sound projecting device comprises at least one of an in-vehicle horn and an ultrasound directional speaker.
For example, in some embodiments, the information to be responded to comprises sound information and/or control information; wherein the sound information comprises audio information within the vehicle and/or voice information of a user within the vehicle; the control information includes touch information of a user within the vehicle acting on the vehicle.
For example, in some embodiments, determining an initiation location of information to be responded to within a vehicle includes: the information to be responded comprises sound information, the sound source position for sending out the sound information is determined through sound positioning equipment and/or tone analysis equipment, and the initiating position is determined; and/or the information to be responded comprises control information, and the position of the equipment receiving the touch information is determined to be the initiating position.
For example, in some embodiments, selecting a target device associated with the originating location from a plurality of electronic devices within the vehicle based at least on the originating location comprises: and selecting and determining the electronic device of the vehicle closest to the initiating position as a target device.
For example, in some embodiments, the method further comprises: and determining the user closest to the initiating position as a target object, and determining the user information according to the target object.
For example, in some embodiments, before determining the initiation location corresponding to the information to be responded to, the method further comprises: and connecting the vehicle and a terminal device of a user in the vehicle.
For example, in some embodiments, after connecting the vehicle and the terminal device of the user within the vehicle, the method further comprises: displaying first display content corresponding to the terminal equipment through the electronic equipment; receiving information to be responded corresponding to the first display content; and determining the initiating position based on the information to be responded.
For example, in some embodiments, the information to be responded to is displayed by the target device, and the method further includes: and sending the playing content of the current target device corresponding to the current user information to other target devices corresponding to other target user information different from the current target user information, wherein the other target devices are different from the current target device.
For example, in some embodiments, the method further comprises: and acquiring sound information and/or display content played by the other target devices through the current target device.
According to still another embodiment of the present invention, there is provided an interaction apparatus including: the first determining module is used for determining the initiating position of the information to be responded in the vehicle; a second determining module, configured to select a target device related to the initiation location from a plurality of electronic devices in the vehicle according to at least the initiation location; and the first playing module displays the information to be responded through the target equipment.
According to a further embodiment of the invention, there is also provided a storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the method and the device, the initiating position of the information to be responded in the vehicle is determined; a target device related to the initiating position is selected from a plurality of electronic devices in the vehicle at least according to the initiating position; and the information to be responded is displayed through the target device. The purpose of displaying information according to the initiating position in the vehicle is thereby achieved, so that a plurality of users can use the display system in the vehicle at the same time and the privacy of each user can be protected. Therefore, the problems of poor display effect and poor privacy of the in-vehicle display system in the related art can be solved, and the effects of protecting user privacy and improving user experience are achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a block diagram of the hardware structure of a mobile terminal for an interaction method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an interaction method according to an embodiment of the invention;
FIG. 3 is a flow chart of determining a user sound source location according to an embodiment of the invention;
FIG. 4 is a schematic diagram (I) of interactions between various display devices according to an embodiment of the invention;
FIG. 5 is a schematic diagram (II) of interactions between various display devices according to an embodiment of the invention;
FIG. 6 is a schematic diagram (III) of interactions between various display devices according to an embodiment of the invention;
FIG. 7 is a block diagram of an interactive apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
At least one embodiment of the present application provides a method embodiment that may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, FIG. 1 is a block diagram of the hardware structure of a mobile terminal for an interaction method according to an embodiment of the present invention. As shown in FIG. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the mobile terminal described above. For example, the mobile terminal 10 may include more or fewer components than shown in FIG. 1, or have a configuration that differs from FIG. 1 but provides equivalent or additional functions.
The memory 104 may be used to store computer programs, such as software programs and modules of application software, such as computer programs corresponding to the interaction methods in the embodiments of the present invention, and the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, i.e., implement the methods described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. The specific examples of networks described above may include wireless networks provided by the communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In at least one embodiment of the present application, an interaction method is provided. FIG. 2 is a flowchart of an interaction method according to an embodiment of the present invention; as shown in FIG. 2, the flow includes the following steps:
step S202, determining the initiating position of information to be responded in a vehicle;
for example, vehicles include various types of land traffic equipment such as automobiles, or water traffic equipment such as boats, or air traffic equipment such as airplanes. For example, vehicles include, but are not limited to, automobiles.
For example, the information to be responded to in the vehicle includes at least one of sound information and control information. For example, the sound information includes audio information within the vehicle, such as audio information emitted by devices within the vehicle (e.g., a first display device, a user's cell phone, an electronic computer, etc.), including at least one of an incoming call voice reminder, a video voice reminder, and a voice operation reminder; alternatively, the voice information includes voice information of a user within the vehicle, such as voice information uttered by the user (e.g., making a call, voice command, etc.). For example, the control information includes touch information of a user within the vehicle acting on the vehicle, for example, the control information may be a touch operation of the user acting on the in-vehicle display screen.
In at least one embodiment of the present application, the initiation location corresponding to the information to be responded to is determined by at least one of the following means. For example, the originating location may be determined by at least one of a sound localization device and a voice parsing device to determine the location of the sound source from which the sound information originates. For example, sound source localization is carried out on the emitted sound information through a sound source localization device, so as to obtain an initiating position; for example, the sound source localization device may be a multi-microphone array disposed within the vehicle.
For example, as shown in fig. 3, the method for performing sound source localization on the emitted sound information by the sound source localization device to obtain the initiating position includes the following steps:
s301, collecting sound in the vehicle. For example, when a phone call or video is received in a car, the user receives the video or phone call, and sound information of the user is sent out.
S302, voice information of the user is transmitted to the ECU (Electronic Control Unit ) through the microphone array.
S303, the ECU identifies sound source angle information of sound information of the user in the vehicle.
S304, the ECU outputs sound source angle information to a central control processor in the vehicle.
S305, the in-vehicle central control processor determines the sound source angle in the sound source angle information. For example, with the microphone array mounting position as the origin, in the clockwise direction, the sound source angle may be divided as follows: 0 to 60 degrees is the co-driver seat sound source, 60 to 90 degrees is the right rear seat sound source, 90 to 120 degrees is the left rear seat sound source, and 120 to 180 degrees is the main driver seat sound source.
S306, the sound source positioning device judges the position of the sound source in the vehicle according to the angle information (for example, 0 to 60 degrees) returned by the central control processor.
S307, the sound source positioning device outputs the sound source position of the user. For example, the sound source positioning device outputs the sound source position as the co-driver position, thereby determining the originating position as the co-driver position.
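As an illustration only, the angle-to-seat mapping of steps S305 to S307 can be sketched in Python as follows. The angle ranges and seat labels are taken from the example above; the function name and data structure are assumptions, not part of the method itself.

# Minimal sketch of the angle-to-seat judgment in S305-S307 (illustrative assumptions).
SEAT_BY_ANGLE = [
    ((0, 60), "co-driver seat"),
    ((60, 90), "right rear seat"),
    ((90, 120), "left rear seat"),
    ((120, 180), "main driver seat"),
]

def locate_sound_source(angle_deg):
    # Map a clockwise angle in degrees (microphone array as origin) to a seat position.
    for (low, high), seat in SEAT_BY_ANGLE:
        if low <= angle_deg <= high:
            return seat
    raise ValueError("angle outside the expected 0-180 degree range")

# Example: an angle of 130 degrees yields the main driver seat as the initiating position.
print(locate_sound_source(130))  # -> "main driver seat"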
In some implementations, the location of the device receiving the touch information may be determined as the originating location. For example, positioning data of a user is received through screen interaction, and an initiating position is obtained. For example, a user clicking on a screen may be considered to interact with the screen; at this time, the position of the screen where the user operation is received may be determined as the originating position.
For example, the vehicle includes a plurality of users, and there may likewise be a plurality of pieces of information to be responded; for example, the pieces of information to be responded may correspond one-to-one to the users. For example, a plurality of initiating positions corresponding to the plurality of pieces of information to be responded can be determined from that information. For example, a first user sends out first information to be responded, and a first initiating position is determined according to the first information to be responded; a second user sends out second information to be responded, and a second initiating position is determined according to the second information to be responded; and so on. For example, if a user in the main driver's seat makes a call, the first initiating position can be determined as the main driver's seat according to the sound information; and if a user in the co-driver's seat operates the display screen of the car, the second initiating position can be determined as the co-driver's seat according to the touch information.
Step S204, selecting a target device associated with the initiation location from a plurality of electronic devices in the vehicle based at least on the initiation location.
For example, a vehicle includes a plurality of electronic devices. For example, the electronic device includes at least one of a display device and a sound projecting device. For example, the electronic device includes at least one of an in-vehicle display screen, a sound, and a heads-up display. For example, a vehicle includes at least two electronic devices.
For example, the electronic device closest to the initiating position is selected and determined as the target device. For example, when the mobile phone of a passenger sitting in a rear seat rings, the ringtone is localized by the sound source positioning device, it is determined that the sound is emitted from the rear seat position, and the rear seat position is determined as the initiating position; a display and/or a vehicle-mounted loudspeaker arranged at the rear seat is then used as the target device. In this case, the display arranged at the rear seat can be controlled to show the incoming call information of the rear passenger's mobile phone, and the vehicle-mounted loudspeaker arranged at or near the rear seat is controlled to play the ringtone to the rear passenger, so that other passengers are not disturbed and the privacy of the target object is also protected.
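A minimal Python sketch of this nearest-device selection is given below. The device registry, the seat coordinates and the use of Euclidean distance are assumptions for illustration; the method only requires that the electronic device closest to the initiating position be chosen.

import math

# Hypothetical cabin layout: device and seat coordinates are assumptions for illustration.
DEVICE_POSITIONS = {
    "hud": (-0.5, 1.0),
    "center_screen": (0.0, 1.0),
    "co_driver_screen": (0.5, 1.0),
    "rear_left_screen": (-0.5, -1.0),
    "rear_right_screen": (0.5, -1.0),
}
SEAT_POSITIONS = {
    "main driver seat": (-0.5, 0.8),
    "co-driver seat": (0.5, 0.8),
    "left rear seat": (-0.5, -1.2),
    "right rear seat": (0.5, -1.2),
}

def select_target_device(initiating_position):
    # Pick the registered electronic device nearest to the initiating position.
    seat_xy = SEAT_POSITIONS[initiating_position]
    return min(DEVICE_POSITIONS, key=lambda name: math.dist(seat_xy, DEVICE_POSITIONS[name]))

# Example: a ringtone localized to the right rear seat selects the rear-right screen.
print(select_target_device("right rear seat"))  # -> "rear_right_screen"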
Step S206, the information to be responded is displayed through the target equipment.
For example, presenting the information to be responded to includes, but is not limited to, displaying it and/or playing sound; for example, sound may also be converted into text and presented to the driver through the HUD in text form. For example, if the user information indicates that the target user is the driver and the information to be responded to is text, the information may be presented as text or as sound; if the user information indicates that the target user is a passenger and the information to be responded to is text, the information may be presented as sound. For example, presenting the information to be responded to in different forms according to different types of user information can improve the user experience.
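The choice of presentation form described above can be sketched as follows; the user types, information kinds and returned form labels are assumptions used only to make the rule concrete.

def choose_presentation(user_type, info_kind):
    # Driver receiving text: show it as text (e.g. on the HUD) and/or read it out.
    if user_type == "driver" and info_kind == "text":
        return ["text", "sound"]
    # Passenger receiving text: the example above presents it in sound form.
    if user_type == "passenger" and info_kind == "text":
        return ["sound"]
    # Sound information for the driver can also be converted into text on the HUD.
    if user_type == "driver" and info_kind == "sound":
        return ["sound", "text_on_hud"]
    return ["default_display"]

print(choose_presentation("driver", "text"))     # ['text', 'sound']
print(choose_presentation("passenger", "text"))  # ['sound']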
For example, the target sound information and the target display content correspond to a target object. For example, the co-driver sitting in the co-driver seat may send out voice information (e.g., "play a movie"); the sound source positioning device localizes the uttered voice information, determines that the voice information is sent out from the co-driver seat, and determines that the co-driver seat is the initiating position; a display provided at the co-driver seat (e.g., the display closest to the co-driver) is then controlled to play the movie, and a vehicle-mounted speaker provided at the co-driver seat is controlled to play the audio of the movie to the co-driver, or a directional sound device is used to play the audio of the movie directionally to the co-driver.
For example, the driver in the driver's seat sends out voice information (e.g., "broadcast road conditions"); the sound source positioning device localizes the uttered voice information, determines that the voice information is sent out from the driver's seat, and determines that the driver's seat is the initiating position; a display arranged at the driver's seat (e.g., a head-up display) can then be controlled to play the road condition information, and the broadcast voice information can be converted into text information for display on the head-up display.
In at least one embodiment of the present application, the interaction method further includes: determining user information according to the initiating position; and displaying the information to be responded in a preset mode through the target equipment according to the user information.
For example, the user information includes user position information from which a target user, such as a driver who can sit in a main seat, a co-driver who can sit in a co-driver seat, and a passenger who can sit in a rear seat in the vehicle, can be confirmed; the target user corresponds to the information to be responded, for example, the information to be responded sent by the target user, and/or the information to be responded is to be displayed to the target user.
For example, displaying in a preset manner includes at least one of: the target equipment plays at least one of characters, videos and animations; the target device plays the sound information.
For example, the user closest to the originating location is determined as the target object, and the user information is determined from the target object. For example, after determining the target object, the user information may be determined based on the type of the target object. For example, the target object is determined to be a driver, the user information including driving information can be determined based on the user type, and information to be displayed can be displayed through the head up display.
For example, the display device and the sound projecting device closest to the initiating position are determined as the target devices; that is, the target device is the electronic device nearest to the user, which makes it easier to respond to the user's operation and to play the corresponding content to the user, such as displaying content and playing audio. For example, when the mobile phone of a passenger sitting in a rear seat rings, the ringtone is localized by the sound source positioning device, it is determined that the sound is emitted from the rear seat position, the rear seat position is determined as the initiating position, the user information is determined according to the initiating position, the passenger sitting in the rear seat is determined as the target object according to the user information, and a display and a vehicle-mounted loudspeaker arranged at the rear seat are used as the target devices for the rear position. In this case, the display arranged at the rear seat can be controlled to show the incoming call information of the rear passenger's mobile phone, and the vehicle-mounted loudspeaker arranged at or near the rear seat is controlled to play the ringtone to the rear passenger, so that other passengers are not disturbed and the privacy of the target object is also protected.
In at least one embodiment of the present application, the target device includes a target audio device and a target display device. For example, the target display device is configured to play information to be responded to corresponding to the user information to the target object. For example, the target display device includes at least one of a heads-up display and an in-vehicle display screen. For example, the in-vehicle display screen may be at least one of a center console screen of the vehicle, a rear row screen of the vehicle, and a secondary driving screen of the vehicle. For example, the in-vehicle display screen may be a touch screen device or a non-touch screen device, etc. For example, the heads-up display may be at least one of a HUD device for viewing by a primary driver and a HUD device for viewing by a secondary driver.
For example, the vehicle includes a plurality of display devices, one or more of which may each be a target display device. For example, a vehicle includes two or more display devices. For example, a vehicle includes two display devices. For example, a vehicle includes four display devices. For example, in one embodiment, the vehicle includes a HUD for viewing by a primary driver and 3 on-board display screens, the 3 on-board display screens being positioned in front of secondary and rear rows for use by secondary and rear row users, respectively.
For example, the target sound projecting apparatus is used to play target sound information corresponding to user information to a target object. For example, the target sound projecting device includes at least one of a vehicle horn and an ultrasonic directional speaker. For example, the vehicle includes one or more vehicle horns, one or more of which may each be a target sound projecting device. Alternatively, in another embodiment, the vehicle includes an ultrasonic directional speaker, which may act as a target sound projecting device, projecting sound information corresponding to the user information.
For example, in at least one embodiment of the present application, after the initiating position is determined, the display device closest to the initiating position may be determined as the target display device, and an ultrasonic directional loudspeaker may be determined as the target sound projecting device; that is, the target display device is the display device closest to the user, and the target sound projecting device is the above-mentioned ultrasonic directional speaker, so that the information to be responded is presented to the user, such as displaying content and playing audio. For example, when the mobile phone of the driver sitting in the driver's seat rings, the ringtone is localized by the sound source localization device, it is determined that the sound is emitted from the driver's seat, the driver's seat is determined as the initiating position, and the HUD device is taken as the target device; the HUD device for the driver's seat can then be controlled to display the incoming call information of the mobile phone, and the ultrasonic directional loudspeaker is controlled to play the ringtone to the driver in a directional manner, which protects the privacy of the user and improves the user experience.
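A hedged sketch of the routing in this example — the nearest display for the picture and the ultrasonic directional speaker for the audio — is shown below; the device names and the returned routing dictionary are assumptions.

# Nearest display per seat: an assumed layout, for illustration only.
NEAREST_DISPLAY = {
    "main driver seat": "hud",
    "co-driver seat": "co_driver_screen",
    "left rear seat": "rear_left_screen",
    "right rear seat": "rear_right_screen",
}

def route_incoming_call(initiating_position):
    # Show the call on the display nearest to the user and beam the ringtone
    # only at that user, so that other occupants are not disturbed.
    return {
        "display": NEAREST_DISPLAY[initiating_position],
        "sound_device": "ultrasonic_directional_speaker",
        "beam_target": initiating_position,
    }

# Example: the driver's phone rings; the HUD shows the call and only the driver hears it.
print(route_incoming_call("main driver seat"))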
In at least one embodiment of the present application, the target object includes at least a user in a vehicle, for example, a driver in a primary driver seat, a secondary driver in a secondary driver seat, and a passenger in a rear seat in a vehicle.
For example, the main body of execution of the above steps may be a terminal, such as a mobile phone, an electronic computer, etc., but is not limited thereto.
Through the above steps, the initiating position of the information to be responded in the vehicle is determined; a target device related to the initiating position is selected from a plurality of electronic devices in the vehicle at least according to the initiating position; and the information to be responded is displayed through the target device. The purpose of displaying information according to the initiating position in the vehicle is achieved, so that a plurality of users can use the display system in the vehicle at the same time and the privacy of each user can be protected. Therefore, the problems of poor display effect and poor privacy of the in-vehicle display system in the related art can be solved, and the effects of protecting user privacy and improving user experience are achieved.
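Tying the three steps together, a compact end-to-end sketch follows; the simplified angle table, seat layout and function names mirror the earlier sketches and are assumptions rather than a definitive implementation.

def handle_information(info):
    # Step 1: determine the initiating position (sound localization or touch position).
    if info["kind"] == "sound":
        angle = info["angle_deg"]
        seat = ("co-driver seat" if angle <= 60 else
                "right rear seat" if angle <= 90 else
                "left rear seat" if angle <= 120 else
                "main driver seat")
    else:  # touch information: the touched screen's position is the initiating position
        seat = info["touched_seat"]
    # Step 2: select the target device related to the initiating position (assumed mapping).
    device = {"main driver seat": "hud",
              "co-driver seat": "co_driver_screen"}.get(seat, "rear_screen")
    # Step 3: display the information to be responded through the target device.
    print(f"present {info['payload']!r} on {device} for the user at the {seat}")

handle_information({"kind": "sound", "angle_deg": 135, "payload": "incoming call"})
handle_information({"kind": "touch", "touched_seat": "co-driver seat", "payload": "menu"})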
In an exemplary embodiment, before determining the initiation location corresponding to the information to be responded to, the method further comprises:
S11, connecting the vehicle and terminal equipment of a user in the vehicle.
For example, the terminal device of the user includes at least one of a mobile phone, a tablet computer and a personal computer. For example, after a passenger gets into the vehicle, the vehicle automatically connects, through its Bluetooth module, Wi-Fi module or local area network module, to all paired wireless communication terminal devices (for example, mobile phones) in the cabin, thereby connecting with the user's mobile phone. For example, if the vehicle detects that there is no paired terminal device, it may prompt the user to connect through a screen (e.g., the first display device above), and the user may connect with the vehicle by operating the screen (e.g., clicking to confirm the connection).
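The connection step can be sketched as below. The vehicle object and its methods (scan_paired_devices, connect, prompt_on_screen) are hypothetical placeholders standing in for the vehicle's Bluetooth, Wi-Fi or local area network modules, not a real API.

def connect_user_terminals(vehicle):
    # Automatically connect every paired terminal device (e.g. mobile phone) in the cabin.
    paired = vehicle.scan_paired_devices()
    if not paired:
        # No paired terminal found: prompt the user on a screen to confirm a connection.
        vehicle.prompt_on_screen("No paired phone found - tap to connect")
        return []
    return [vehicle.connect(device) for device in paired]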
In one exemplary embodiment, after connecting the vehicle and the terminal device of the user within the vehicle, the method further comprises:
S21, displaying first display content corresponding to the terminal equipment through the electronic equipment;
for example, the HUD device corresponding to the driver's seat may display video displayed by the phone of the primary driver. For example, the HUD device shares video displayed in the phone of the primary driver.
S22, receiving information to be responded corresponding to the first display content;
for example, information to be responded, such as a play instruction sent by the user, of playing music through the HUD device is received.
S23, determining an initiating position based on the information to be responded.
For example, after the user in the main driver's seat sends out the voice information requesting that music be played through the HUD device corresponding to the main driver's seat, it may be determined that the initiating position is the main driver's seat. The main driver can then be determined as the target object, and the music is played through the HUD device or the vehicle-mounted loudspeaker.
For example, a vehicle may include multiple display devices that can interact with each other, such as pushing information to and/or sharing information with one another. For example, content displayed on a first display device may be shared to a second display device, e.g., the HUD device shares the music it is playing so that it also plays on other displays; conversely, the first display device may also share content displayed on the second display device, and each display device may correspond to a user. For example, the HUD device may share the music played on other displays; the HUD device corresponds to the user in the driver's seat, and the other displays correspond to the users at the corresponding positions and respond to the information to be responded of those users.
In an exemplary embodiment, the information to be responded to is displayed by the target device, and the method further includes:
and sending the playing content of the current target device corresponding to the current user information to other target devices corresponding to other target user information different from the current target user information, wherein the other target devices are different from the current target device. For example, other target devices are provided within the vehicle, and other users are within the vehicle. For example, there is interaction between other target devices and the current target device. For example, the target device acquires the sound information and/or display content played in other target devices. For example, music information and/or video information played in a display at a rear seat is acquired by a HUD device. For example, the current user is a driver, the current target device corresponding to the driver is a head-up display, and the driver can send the playing content of the head-up display to the target device corresponding to the co-driver, so that information sharing and mutual pushing can be realized.
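A minimal sketch of pushing the current target device's playing content to other users' target devices is shown below; the send callback and the device identifiers are assumptions.

def push_playing_content(current_device, content, other_devices, send):
    # Send what the current target device is playing to every other target device.
    for device in other_devices:
        if device != current_device:  # the other target devices differ from the current one
            send(device, content)

# Example: the driver's HUD pushes its playing content to the co-driver's screen.
push_playing_content(
    current_device="hud",
    content={"title": "road-trip playlist"},
    other_devices=["co_driver_screen", "hud"],
    send=lambda device, c: print(f"push {c['title']!r} to {device}"),
)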
The invention is illustrated below with reference to specific examples:
the vehicle in this embodiment is described by taking a vehicle as an example, and in order to improve the use experience of a user in displaying and playing in-vehicle information, this embodiment provides an interaction method.
First, the sound source position of the user in the vehicle is determined, as shown in fig. 3, including the steps of:
s301, collecting sound in the vehicle. For example, when a user receives a telephone in a car, the user can send out sound information.
S302, voice information of a user is transmitted to the ECU through the microphone array.
S303, the ECU recognizes sound source angle information of sound information uttered by the user in the vehicle, for example, the sound of the user at the driver' S seat.
S304, the ECU outputs sound source angle information;
S305, the in-vehicle central control processor determines the angle indicated by the sound source angle information; for example, the output sound source angle information is 120-180 degrees.
S306, the position in the vehicle corresponding to the sound source angle is judged; for example, 120-180 degrees is the main driver's seat sound source.
S307, the position of the user is determined (e.g., the user is in the driver's seat).
Next, after the sound source position of the user is determined, for example, the HUD device closest to the sound source position of the user is determined, and the user's operation is responded to on the HUD device. For example, according to the position from which the voice or call originates, the corresponding content is displayed on the first display device and the audio (e.g., of the telephone or video call) is projected in a directional manner. For example, the telephone call is displayed in the HUD device.
Again, in this embodiment, there may be interactive operations between the plurality of display devices in the vehicle. For example, content displayed in the current display device is shared to other display devices, or playback by other display devices is accessed, or the like. For example, video content displayed in the HUD device is shared to other displays (e.g., vehicle displays) for display.
For example, as shown in FIG. 4, after the user selects the closest first display device through voice information or touch information, an application function is selected on the display screen of the first display device; for example, the main driver selects a video application from the HUD device. Before entering the video application, the main driver selects other displays that will enter the video application together and share the video information in it; for example, the main driver can choose to join or synchronize the content displayed on the display at the rear seat, and the ultrasonic directional loudspeaker can play the corresponding audio directionally to the different users, so that the video application is displayed synchronously on the HUD device and the rear-seat display.
In at least one embodiment of the present application, a user may join the entertainment or communication application of another user in the vehicle through voice information or a touch operation on a screen. For example, as shown in FIG. 5, a passenger may view, on the display device in front of them, an application being used by someone else in the vehicle and apply to join it. When the application to join is made, the other user's device displays a query asking whether to permit the requester to join. After joining, the ultrasonic directional loudspeaker can play the corresponding audio directionally to each user, and the display corresponding to each user shows the related pictures.
In at least one embodiment of the present application, other users in the vehicle may be invited to join an application through voice information or a touch operation on the screen. For example, as shown in FIG. 6, a passenger views, on the display device in front of them, the application being used by another person in the vehicle and its current state, and invites that person to join the application. Upon issuing the invitation, the invitee's device displays a query asking whether to accept. After joining, the ultrasonic directional loudspeaker can play the corresponding audio directionally to each user, and the display corresponding to each user shows the related pictures.
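The join and invite flows of FIG. 5 and FIG. 6 can be sketched as a simple consent exchange; the ask callback, seat names and application label are assumptions for illustration only.

def request_to_join(applicant_seat, owner_seat, app_name, ask):
    # FIG. 5 style: a passenger applies to join an application another user is running.
    # The owner's device shows a query; on consent, each seat gets directional audio
    # and its own display shows the related pictures.
    if ask(owner_seat, f"Allow the user at the {applicant_seat} to join {app_name}?"):
        for seat in (owner_seat, applicant_seat):
            print(f"directional audio and display of {app_name} for the {seat}")
        return True
    return False

# Example: the right rear passenger asks to join the co-driver's video application,
# and the co-driver's device grants the request.
request_to_join("right rear seat", "co-driver seat", "the video application",
                ask=lambda seat, question: True)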
In summary, the embodiment determines the closest display device based on the sound source position of the user, which not only improves the use experience and privacy of the user audio or video in the vehicle, but also does not affect the use of others.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The present embodiment also provides an interaction device, which is used to implement the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 7 is a block diagram of an interaction device according to an embodiment of the present invention; as shown in FIG. 7, the device includes:
a first determining module 72 for determining an initiation location of information to be responded to within the vehicle;
a second determination module 74 for selecting a target device associated with the origination location from a plurality of electronic devices within the vehicle based at least on the origination location;
a first playing module 76, configured to display the information to be responded to through the target device.
For example, in some embodiments, the apparatus further comprises:
a third determining module, configured to determine user information according to the initiation location;
the first display module is used for displaying the information to be responded in a preset mode through the target equipment according to the user information.
For example, in some embodiments, the first display module includes:
and the first playing unit is used for playing the target sound information and the target display content through the target equipment, wherein the target sound information and the target display content correspond to the user information.
For example, in some implementations, the target devices include a target sound projecting device and a target display device; the apparatus further comprises:
the second playing module is used for playing the target sound information corresponding to the user information through the target sound equipment; and
and the third playing module is used for playing the target display content corresponding to the user information through the target display device.
For example, in some embodiments, the target display device comprises at least one of a heads-up display and an in-vehicle display screen, and the target sound projecting device comprises at least one of an in-vehicle horn and an ultrasound directional speaker.
For example, in some embodiments, the information to be responded to comprises sound information and/or control information;
wherein the sound information comprises audio information within the vehicle and/or voice information of a user within the vehicle; the control information includes touch information that a user within the vehicle acts on the vehicle.
For example, in some embodiments, the first determining module 72 includes:
the first determining unit is used for determining the position of a sound source emitting sound information through sound positioning equipment and/or tone analysis equipment and determining an initiating position; and/or the number of the groups of groups,
and the second determining unit is used for determining the position of the equipment receiving the touch information as an initiating position when the information to be responded comprises control information.
For example, in some embodiments, the second determination module 74 includes:
and the third determining unit is used for selecting and determining the electronic device of the vehicle closest to the initiating position as the target device.
For example, in some embodiments, the apparatus further comprises:
and the fourth determining unit is used for determining the user closest to the initiating position as a target object and determining user information according to the target object.
For example, in some embodiments, the apparatus further comprises:
and the connection module is used for connecting the vehicle and the terminal equipment of the user in the vehicle before determining the initiating position corresponding to the information to be responded.
For example, in some embodiments, the apparatus further comprises: the first display module is used for displaying first display content corresponding to terminal equipment through electronic equipment after connecting the vehicle and the terminal equipment of a user in the vehicle;
the first receiving module is used for receiving information to be responded corresponding to the first display content;
and the fourth determining module is used for determining the initiating position based on the information to be responded.
For example, in some embodiments, the apparatus further comprises:
and the first sending module is used for sending the playing content of the current target device corresponding to the current user information to other target devices corresponding to other target user information different from the current target user information, wherein the other target devices are different from the current target device.
For example, in some embodiments, the apparatus further comprises:
the first acquisition module is used for acquiring sound information and/or display content played by other target devices through the current target device.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
For example, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
S1, determining an initiating position of information to be responded in a vehicle;
S2, selecting target equipment related to the initiating position from a plurality of electronic equipment in the vehicle at least according to the initiating position;
and S3, displaying the information to be responded through the target equipment.
For example, in the present embodiment, the storage medium may include, but is not limited to: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
For example, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
For example, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, determining an initiating position of information to be responded in a vehicle;
S2, selecting target equipment related to the initiating position from a plurality of electronic equipment in the vehicle at least according to the initiating position;
and S3, displaying the information to be responded through the target equipment.
For example, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, for example, they may be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than what is shown or described, or they may be separately fabricated into individual integrated circuit modules, or a plurality of modules or steps in them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. An interaction method, comprising:
determining the initiating position of information to be responded in the vehicle;
selecting a target device related to the initiating position from a plurality of electronic devices in the vehicle at least according to the initiating position; and
and displaying the information to be responded through the target equipment.
2. The method according to claim 1, wherein the method further comprises:
determining user information according to the initiating position;
and displaying the information to be responded in a preset mode through the target equipment according to the user information.
3. The method according to claim 2, wherein displaying the information to be responded to by the target device in a preset manner comprises:
and playing target sound information and target display content through the target equipment, wherein the target sound information and the target display content correspond to the user information.
4. The method of claim 2, wherein the target device comprises a target sound projecting device and a target display device; the method further comprises the steps of:
playing target sound information corresponding to the user information through the target sound equipment; and
and playing the target display content corresponding to the user information through the target display device.
5. The method of claim 4, wherein the target display device comprises at least one of a heads-up display and an in-vehicle display screen, and the target sound projecting device comprises at least one of an in-vehicle horn and an ultrasound directional speaker.
6. The method according to claim 1, wherein
the information to be responded comprises sound information and/or control information;
wherein the sound information comprises audio information within the vehicle and/or voice information of a user within the vehicle; the control information includes touch information of a user within the vehicle acting on the vehicle.
7. The method of any of claims 1-6, wherein determining the initiating position of the information to be responded in the vehicle comprises:
the information to be responded comprises sound information, the sound source position for sending out the sound information is determined through sound positioning equipment and/or tone analysis equipment, and the initiating position is determined; and/or,
the information to be responded comprises control information, and the position of the equipment receiving the touch information is determined to be the initiating position.
8. The method of any of claims 1-6, wherein selecting a target device related to the initiating position from a plurality of electronic devices in the vehicle at least according to the initiating position comprises:
and selecting and determining the electronic device of the vehicle closest to the initiating position as a target device.
9. The method according to claim 2, wherein the method further comprises:
and determining the user closest to the initiating position as a target object, and determining the user information according to the target object.
10. The method according to any one of claims 1-9, wherein, before determining the initiating position corresponding to the information to be responded to, the method further comprises:
and connecting the vehicle and a terminal device of a user in the vehicle.
11. The method of claim 10, wherein after connecting the vehicle and the terminal device of the user within the vehicle, the method further comprises:
displaying first display content corresponding to the terminal equipment through the electronic equipment;
receiving information to be responded corresponding to the first display content;
and determining the initiating position based on the information to be responded.
12. The method of claim 2, wherein the information to be responded to is presented by the target device, the method further comprising:
and sending the playing content of the current target device corresponding to the current user information to other target devices corresponding to other target user information different from the current target user information, wherein the other target devices are different from the current target device.
13. The method according to claim 12, wherein the method further comprises:
and acquiring sound information and/or display content played by the other target devices through the current target device.
14. An interactive apparatus, comprising:
the first determining module is used for determining the initiating position of the information to be responded in the vehicle;
a second determining module, configured to select a target device related to the initiation location from a plurality of electronic devices in the vehicle according to at least the initiation location;
and the first playing module displays the information to be responded through the target equipment.
15. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 12 when run.
16. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of the claims 1 to 12.
CN202210109418.0A 2022-01-28 2022-01-28 Interaction method and device, storage medium and electronic device Pending CN116560610A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210109418.0A CN116560610A (en) 2022-01-28 2022-01-28 Interaction method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210109418.0A CN116560610A (en) 2022-01-28 2022-01-28 Interaction method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116560610A true CN116560610A (en) 2023-08-08

Family

ID=87486673

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210109418.0A Pending CN116560610A (en) 2022-01-28 2022-01-28 Interaction method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116560610A (en)

Similar Documents

Publication Publication Date Title
CN102520391B (en) Cognitive load reduction
US8712564B2 (en) Audio conversation apparatus
CN105933822B (en) Vehicle audio control system and method
CN107835398A (en) A kind of customization navigation information display methods based on throwing screen, device
CN110336892B (en) Multi-device cooperation method and device
CN110197400A (en) The method for pushing and device of advertisement, head-up display HUD and server
CN112818311A (en) Service method of vehicle-mounted function, man-machine interaction system and electronic equipment
CN107864389A (en) The video sharing method and device of vehicular amusement apparatus
US7831461B2 (en) Real time voting regarding radio content
US10110332B2 (en) Devices and methods for in-vehicle content localization
CN115061652A (en) Vehicle-mounted multimedia screen projection method, device, equipment and readable storage medium
JP2019086805A (en) In-vehicle system
CN112437246B (en) Video conference method based on intelligent cabin and intelligent cabin
US20220095045A1 (en) In-car headphone acoustical augmented reality system
CN116560610A (en) Interaction method and device, storage medium and electronic device
CN116486798A (en) Voice interaction method, device, equipment, vehicle and storage medium
CN114760434A (en) Automobile intelligent cabin capable of realizing multi-person online video conference and method
CN115431911A (en) Interaction control method and device, electronic equipment, storage medium and vehicle
JP6405964B2 (en) Voice control system, in-vehicle device, voice control method
CN115440207A (en) Multi-screen voice interaction method, device, equipment and computer readable storage medium
CN115079967A (en) Vehicle multi-screen grading control method and vehicle-mounted system thereof
CN112738447B (en) Video conference method based on intelligent cabin and intelligent cabin
JP2019018771A (en) On-vehicle system
CN117336315A (en) Vehicle-mounted multimedia control method, device and system
EP4207752A2 (en) In-vehicle communications and media mixing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination