CN116521113A - Multi-screen control method and device and vehicle - Google Patents

Multi-screen control method and device and vehicle

Info

Publication number
CN116521113A
CN116521113A (application CN202310620427.0A)
Authority
CN
China
Prior art keywords
window
target
split screen
control
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310620427.0A
Other languages
Chinese (zh)
Inventor
王洪伟 (Wang Hongwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd, Apollo Zhixing Technology Guangzhou Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202310620427.0A priority Critical patent/CN116521113A/en
Publication of CN116521113A publication Critical patent/CN116521113A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The disclosure provides a multi-screen control method, a multi-screen control device, and a vehicle, and relates to the technical field of artificial intelligence, in particular to natural language processing, computer vision, and speech technology. The scheme is as follows: acquire the voice to be processed in the vehicle cabin, together with the recognized text and sound source position corresponding to that voice; select a target split screen from the multiple split screens of the in-vehicle system in the cabin according to the sound source position; acquire the target window in the target split screen and the control information within that window; determine, from the recognized text and the control information, a target control in the target window and a target operation for that control; and perform the target operation on the target control. In this way an occupant anywhere in the cabin can interact by voice with split screens at positions other than the main driver's, which improves voice interaction efficiency and, in turn, the control efficiency of the in-vehicle system's multiple split screens.

Description

Multi-screen control method and device and vehicle
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to natural language processing, computer vision, and speech technology, and specifically to a multi-screen control method, a multi-screen control device, and a vehicle.
Background
Currently, the in-vehicle system of a vehicle cabin may provide multiple split screens. The split screens can be placed at different positions to display different content, for example in front of the main driver's seat or in front of the front passenger's seat.
In current in-cabin voice interaction technology, when an occupant speaks, the in-vehicle system by default responds on the split screen in front of the main driver: it recognizes the voice content and operates that split screen accordingly. As a result, occupants find it difficult to interact by voice with the split screens at other positions, and voice interaction efficiency is poor.
Disclosure of Invention
The disclosure provides a multi-screen control method, a multi-screen control device and a vehicle.
According to an aspect of the present disclosure, there is provided a multi-screen control method, including: acquiring the voice to be processed in a vehicle cabin, together with the recognized text and sound source position corresponding to the voice; selecting a target split screen from multiple split screens of the in-vehicle system in the vehicle cabin according to the sound source position; acquiring a target window in the target split screen and the control information within the target window; determining, according to the recognized text and the control information, a target control in the target window and a target operation for the target control; and performing the target operation on the target control in the target window.
According to another aspect of the present disclosure, there is provided a multi-screen control apparatus, including: a first acquisition module for acquiring the voice to be processed in the vehicle cabin, together with the recognized text and sound source position corresponding to the voice; a selection module for selecting a target split screen from multiple split screens of the in-vehicle system in the vehicle cabin according to the sound source position; a second acquisition module for acquiring a target window in the target split screen and the control information within the target window; a first determining module for determining, according to the recognized text and the control information, a target control in the target window and a target operation for the target control; and a processing module for performing the target operation on the target control in the target window.
According to another aspect of the present disclosure, there is provided a vehicle including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the multi-screen control method set forth above in the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the multi-screen control method set forth above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the multi-screen control method set forth above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are provided for a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
fig. 5 is a block diagram of a vehicle for implementing a multi-screen control method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Currently, the in-vehicle system of a vehicle cabin may provide multiple split screens. The split screens can be placed at different positions to display different content, for example in front of the main driver's seat or in front of the front passenger's seat.
In current in-cabin voice interaction technology, when an occupant speaks, the in-vehicle system by default responds on the split screen in front of the main driver: it recognizes the voice content and operates that split screen accordingly. As a result, occupants find it difficult to interact by voice with the split screens at other positions, and voice interaction efficiency is poor.
In view of the above problems, the present disclosure provides a multi-screen control method, a multi-screen control apparatus, and a vehicle.
Fig. 1 is a schematic diagram of a first embodiment of the present disclosure. It should be noted that the multi-screen control method of this embodiment may be applied to a multi-screen control device, which may be the in-vehicle system in the cabin of a vehicle, or voice application software embedded in the in-vehicle system, so that the in-vehicle system can perform the multi-screen control function. The following embodiments are described taking the voice application software in the in-vehicle system as the execution subject.
The multi-screen control device may also be another electronic device in communication with the in-vehicle system. The electronic device may be any device with computing capability, for example a personal computer (PC), a mobile terminal, or a server; the mobile terminal may be, for example, a vehicle-mounted device, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or a smart speaker, that is, a hardware device with an operating system and a touch screen and/or display screen.
As shown in fig. 1, the multi-screen control method may include the steps of:
step 101, acquiring a voice to be processed in a vehicle cabin, and a recognition text and a sound source position corresponding to the voice to be processed.
In the embodiment of the present disclosure, the process of executing step 101 by the voice application software in the vehicle system may be, for example, acquiring the voice to be processed in the vehicle cabin; performing text recognition processing on the voice to be processed to obtain a recognition text corresponding to the voice to be processed; and performing sound source positioning processing on the voice to be processed to obtain a sound source position corresponding to the voice to be processed.
The manner in which the voice application software obtains the voice to be processed in the vehicle cabin may be, for example, that a microphone in the vehicle system collects the voice in the vehicle cabin; the voice application software interacts with control software of a microphone in the car machine system to acquire voice acquired by the microphone; and taking the voice acquired by the microphone as the voice to be processed.
The sound source of the voice to be processed may be an object in a vehicle cabin. Wherein objects in the vehicle cabin, such as a driver or a passenger, etc. The sound source position, i.e. the position where the driver is located, or the position where the passenger is located. The position of the driver and the position of the passenger can be represented by the seat position.
Step 102, selecting a target split screen from a plurality of split screens of a vehicle-mounted system in a vehicle cabin according to the sound source position.
In the embodiment of the present disclosure, the voice application software in the in-vehicle system may execute step 102 by, for example: determining the positions of the multiple split screens of the in-vehicle system; computing the distance between the sound source position and the position of each split screen; and selecting the target split screen from the multiple split screens according to those distances.
For example, the target split screen may be selected by taking the minimum of the computed distances and designating the split screen corresponding to that minimum distance as the target split screen.
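As a hedged illustration of this minimum-distance selection (the screen identifiers and cabin-plane coordinates below are invented for the sketch, not taken from the patent), the step might look like:

```python
import math

# Hypothetical cabin-plane coordinates for each split screen; a real
# system would take these from the vehicle's actual cabin layout.
SPLIT_SCREEN_POSITIONS = {
    "main_driver": (0.5, 0.0),       # in front of the driver's seat
    "front_passenger": (-0.5, 0.0),  # in front of the front passenger's seat
    "rear_left": (0.5, -1.2),
    "rear_right": (-0.5, -1.2),
}

def select_target_split_screen(sound_source_pos):
    """Pick the split screen whose position is nearest the sound source."""
    return min(
        SPLIT_SCREEN_POSITIONS,
        key=lambda sid: math.dist(sound_source_pos, SPLIT_SCREEN_POSITIONS[sid]),
    )
```

For a sound source localized just behind the steering wheel, say `(0.6, 0.1)`, the minimum distance is to the main driver split screen, so that screen is selected.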
In the embodiment of the disclosure, the split screens in the in-vehicle system may include the main driver split screen plus at least one of the following: the front passenger split screen and the rear seat split screen. The rear seat split screen may include a rear left split screen and/or a rear right split screen. The main driver split screen may be located in front of the driver's seat, the front passenger split screen in front of the front passenger's seat, and the rear seat split screens in front of the rear seats.
Step 103, acquiring the target window in the target split screen and the control information in the target window.
In embodiments of the present disclosure, there may be multiple windows in a split screen. A window is an application interface displayed after an application in the in-vehicle system is opened, or a system interface displayed by the in-vehicle system. A system interface is, for example, a clock interface in a split screen; application interfaces include, for example, a map navigation interface, a car music interface, or a dashcam interface.
The windows in one split screen may belong to different applications or to the same application. That is, for a given application in the in-vehicle system, the split screen may display none of its interfaces, one of them, or several of them. For a particular application, the multiple interfaces may be, for example, the application's main interface and a popup interface.
In the embodiment of the disclosure, the multiple windows in a split screen may be displayed in different areas of the split screen. The target window in the target split screen may be, for example, the window in the central area among the windows currently displayed in that split screen.
In the disclosed embodiments, for each window of each split screen in the in-vehicle system, the control information within that window may be stored in a different area of an accessibility node (AccessibilityNodeInfo) in the in-vehicle system. The control information of a window may include the identification of at least one control visible within the window. Visible controls may include, for example, at least one of: text, icons, slidable views, and editable input boxes. An icon is, for example, an application icon; text is, for example, the text below an application icon; a slidable view is, for example, the scroll bar on the right side of a page within the window.
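A minimal sketch of how per-window control information might be modelled in memory. The field names and sample controls are invented for illustration; the patent only specifies that this information lives in areas of AccessibilityNodeInfo:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlInfo:
    control_id: str  # identification of a visible control
    kind: str        # e.g. "text", "icon", "slidable_view", "edit_box"
    label: str       # visible text, e.g. the text below an application icon

@dataclass
class WindowInfo:
    window_id: str
    controls: List[ControlInfo] = field(default_factory=list)

# Example: a popup window exposing an "OK" and a "Cancel" control.
popup = WindowInfo("popup_1", controls=[
    ControlInfo("btn_ok", "icon", "OK"),
    ControlInfo("btn_cancel", "icon", "Cancel"),
])
```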
Step 104, determining a target control in the target window and a target operation for the target control according to the recognized text and the control information.
In the embodiment of the present disclosure, the voice application software in the in-vehicle system may execute step 104 by, for example: extracting the operation-related words from the recognized text; querying, according to those operation-related words, the operation words configured for each operation of each control in the control information; determining a first operation word that matches the operation-related words, together with the operation and control corresponding to that first operation word; designating the control corresponding to the first operation word as the target control; and designating the operation corresponding to the first operation word as the target operation for the target control.
In embodiments of the present disclosure, the control information within the target window may include the identification of at least one control visible within the target window. For each control within the target window, the in-vehicle system, or the voice application software in it, may be configured with operation words for at least one operation of the control, where an operation word is a word describing the operation.
Taking a popup window as the target window, suppose an "OK" control and a "Cancel" control are displayed in the popup interface. The operation of the "OK" control is, for example, a confirm operation, and that of the "Cancel" control a cancel operation. Operation words for the confirm operation are, for example, "confirm" or "agree"; operation words for the cancel operation are, for example, "cancel", "exit", or "return".
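The popup example above can be sketched as a lookup from operation words to a (control, operation) pair. The table contents are hypothetical and the matching is deliberately naive (set intersection on words); a production system would use real NLU:

```python
# Hypothetical operation-word table: for each (control, operation) pair,
# the words that describe that operation.
OPERATION_WORDS = {
    ("btn_ok", "confirm"): {"confirm", "ok", "agree", "determine"},
    ("btn_cancel", "cancel"): {"cancel", "exit", "return"},
}

def match_target(recognized_words):
    """Return (target_control, target_operation), or None when the speech
    is unrelated to the controls currently shown in the window."""
    related = set(recognized_words)
    for (control_id, operation), words in OPERATION_WORDS.items():
        if words & related:
            return control_id, operation
    return None
```

Returning `None` corresponds to the fallback path described later, where the recognized text is treated as a plain control instruction instead.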
By querying the operation words configured for each operation of each control in the control information against the operation-related words in the recognized text, the target control and the target operation for it can be determined accurately. The operation intention toward the target split screen of the occupant who produced the voice can thus be understood accurately, which improves the efficiency of voice interaction between cabin occupants and the split screens of the in-vehicle system, and in turn the control efficiency of its multiple split screens.
Step 105, performing the target operation on the target control in the target window.
In the embodiment of the present disclosure, the voice application software in the in-vehicle system may execute step 105 by, for example: obtaining the identification of the target control, the identification of the target window, and the identification of the target split screen; and sending to the in-vehicle system a processing request carrying the target operation and those three identifications, so that the in-vehicle system performs the target operation on the target control.
The in-vehicle system runs an operating system that performs control processing on the multiple split screens, for example controlling the content displayed on them. Since this operating system has operation authority over the split screens, the voice application software can call an interface provided by it, carry the target operation and the identifications of the target control, target window, and target split screen in the call to that interface, and thereby send the processing request to the in-vehicle system. The interface provided by the operating system may be, for example, an Android system interface.
After receiving the processing request, the in-vehicle system performs the target operation on the target control in the target window of the target split screen, and displays the processed page in that window.
By sending the in-vehicle system a processing request carrying the target operation and the identifications of the target control, target window, and target split screen, the voice application software achieves voice control of the controls within a split screen window, further improving voice interaction efficiency.
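The processing request described above can be sketched as a simple record carrying the target operation plus the three identifications. The dict layout and field names are illustrative only; a real head unit would define its own request interface:

```python
def build_processing_request(target_operation, control_id, window_id, screen_id):
    """Assemble the processing request that the voice application software
    sends to the in-vehicle system (field names are an assumption)."""
    return {
        "operation": target_operation,
        "control_id": control_id,
        "window_id": window_id,
        "split_screen_id": screen_id,
    }
```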
In the embodiment of the disclosure, it should be noted that the voice produced by a cabin occupant may or may not relate to the content currently displayed in the target split screen. When it does, the voice application software can obtain the target control and the target operation for it; when it does not, no target control or target operation is obtained.
Thus, the voice application software in the in-vehicle system may also perform the following process: when no target control can be determined from the recognized text and the control information, obtain the control instruction contained in the recognized text, and send a processing request carrying that control instruction to the in-vehicle system, so that the in-vehicle system controls the content displayed on the target split screen accordingly. After obtaining the control instruction, the software may judge whether the instruction is complete: if it is, the processing request carrying it is sent to the in-vehicle system; if not, the software can interact with the occupant to complete the instruction.
A complete control instruction is, for example, "play XX song"; an incomplete one is, for example, just "play".
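The complete/incomplete distinction above can be illustrated with a crude word-count check. This is purely a sketch (the verb list is invented and real completeness judgment would need proper language understanding):

```python
# Verbs that need an object to form a complete instruction; an assumption
# made for this sketch.
VERBS_NEEDING_OBJECT = {"play", "open", "navigate"}

def is_complete_instruction(text):
    """Judge whether a control instruction is complete: "play XX song" is
    complete, a bare "play" is not and needs a follow-up question."""
    words = text.split()
    if not words:
        return False
    if words[0] in VERBS_NEEDING_OBJECT:
        return len(words) > 1
    return True
```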
When no target control is determined from the recognized text and the control information, the control instruction in the recognized text can be obtained and used to control the target split screen, so that a cabin occupant can make the target split screen display the desired content. This further improves voice interaction efficiency and the control efficiency of the in-vehicle system's multiple split screens.
With the multi-screen control method of this embodiment, the voice to be processed in the vehicle cabin, together with its recognized text and sound source position, is acquired; a target split screen is selected from the multiple split screens of the in-vehicle system according to the sound source position; the target window in the target split screen and the control information within it are acquired; the target control in the target window and the target operation for it are determined from the recognized text and the control information; and the target operation is performed on the target control. In this way a cabin occupant can interact by voice with split screens at other positions, which improves voice interaction efficiency and, in turn, the control efficiency of the in-vehicle system's multiple split screens.
In order to accurately select the target split screen with which the occupant wants to interact by voice, and to avoid responding to the occupant's voice on the wrong split screen, a first configuration table can be set up in advance, and the target split screen matching the sound source position determined by querying it. Fig. 2 is a schematic diagram of a second embodiment of the present disclosure; the embodiment shown in Fig. 2 may include the following steps:
Step 201, acquiring a voice to be processed in a vehicle cabin, and a recognition text and a sound source position corresponding to the voice to be processed.
Step 202, querying the first configuration table according to the sound source position, and obtaining the first seat position in the table that matches the sound source position, together with the split screen identification of at least one split screen corresponding to that seat position.
In the embodiment of the present disclosure, before step 202 the voice application software in the in-vehicle system may determine the correspondence between seat positions and split screens according to the position of each seat in the cabin and the position of each split screen, and generate the first configuration table from it. This ensures that the selected target split screen is the one with which the occupant actually wants to interact, improving the accuracy of target split screen determination. Specifically, before step 202 the voice application software may: determine the seats in the vehicle cabin, together with their seat positions and surrounding areas; for each seat, determine the split screen identification of the first split screen in the cabin located in the surrounding area of that seat; establish the correspondence between the seat position and that split screen identification; and generate the first configuration table from these correspondences.
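The table-generation step above can be sketched as follows. Modelling each seat's surrounding area as an axis-aligned box, and the coordinates themselves, are assumptions made for the sketch:

```python
def build_first_config_table(seat_areas, screen_positions):
    """seat_areas: seat position -> surrounding area as a box (x0, y0, x1, y1);
    screen_positions: split screen identification -> (x, y).
    A screen corresponds to a seat when it lies inside the seat's area."""
    table = {}
    for seat, (x0, y0, x1, y1) in seat_areas.items():
        table[seat] = [
            sid for sid, (sx, sy) in screen_positions.items()
            if x0 <= sx <= x1 and y0 <= sy <= y1
        ]
    return table
```

At query time (step 202), the sound source position is matched to a seat position and the table yields the one or more split screen identifications for that seat.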
The seats in the vehicle cabin are, for example, the main driver's seat, the front passenger's seat, and the rear seats. A split screen in the in-vehicle system is generally positioned in front of a seat so that the occupant of that seat can conveniently view its content. The surrounding area of a seat may therefore be taken as the area an occupant of that seat can reach; it is not particularly limited here and may be set according to actual needs.
For a single seat position, the number of corresponding split screen identifications may be one or more.
Step 203, selecting a split screen identifier from at least one split screen identifier.
In an embodiment of the present disclosure, when the split screens in the in-vehicle system include the main driver split screen, the front passenger split screen, and the rear seat split screen, the voice application software may execute step 203 by, for example: selecting the split screen identification of the main driver split screen when it is present among the at least one split screen identification; selecting the identification of the front passenger split screen when the main driver identification is absent but the front passenger identification is present; and selecting the identification of the rear seat split screen when neither the main driver nor the front passenger identification is present.
That is, priorities are set for the main driver, front passenger, and rear seat split screens: the main driver split screen has higher priority than the front passenger split screen, which in turn has higher priority than the rear seat split screen. When the sound source position corresponds to more than one split screen identification, the highest-priority identification present is selected.
In the vehicle cabin, while the vehicle is being driven there is generally a driver in the main driver's seat, but not necessarily passengers in the front passenger or rear seats. To prevent split screens other than the main driver split screen from responding when the driver speaks, the identification of the main driver split screen can be selected preferentially when the sound source position corresponds to several identifications. This preserves the driver's experience with the main driver split screen, further improving the efficiency of voice interaction between occupants and the split screens, and in turn the control efficiency of the in-vehicle system's multiple split screens.
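The priority scheme above reduces to a fixed ordering over split screen identifications (the identifier strings are invented for the sketch):

```python
# Priority order from the text: main driver > front passenger > rear seats.
SCREEN_PRIORITY = ["main_driver", "front_passenger", "rear_left", "rear_right"]

def pick_split_screen_id(candidate_ids):
    """Among the split screen identifications matched to a sound source,
    pick the one with the highest priority; None if no candidate matches."""
    for sid in SCREEN_PRIORITY:
        if sid in candidate_ids:
            return sid
    return None
```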
Step 204, determining the split screen corresponding to the selected split screen identification as a target split screen.
Step 205, obtaining a target window in the target split screen and control information in the target window.
Step 206, determining a target control in the target window and a target operation for the target control according to the identification text and the control information.
Step 207, performing control processing on the target control in the target window according to the target operation.
It should be noted that, for details of step 201, step 205 to step 207, reference may be made to step 101, step 103 to step 105 in the embodiment shown in fig. 1, and detailed description thereof will not be given here.
According to the multi-screen control method of the embodiments of the present disclosure, the voice to be processed in the vehicle cabin, together with the identification text and the sound source position corresponding to the voice to be processed, is acquired; a first configuration table is queried according to the sound source position to acquire a first seat position in the first configuration table matching the sound source position, together with the split screen identification of at least one split screen corresponding to the first seat position; one split screen identification is selected from the at least one split screen identification; the split screen corresponding to the selected split screen identification is determined as a target split screen; a target window in the target split screen and control information in the target window are acquired; a target control in the target window and a target operation for the target control are determined according to the identification text and the control information; and the target control in the target window is processed according to the target operation. In this way, voice interaction between objects at various positions in the cabin and the corresponding split screens can be realized, the voice interaction efficiency is improved, and the control efficiency of the plurality of split screens of the vehicle-mounted system is further improved.
In order to accurately select the target window with which an object wants to interact by voice in the target split screen, and to prevent an incorrect window in the target split screen from responding to the voice to be processed, the target window may be selected by combining the window information of each window in the target split screen, improving the accuracy of target window selection. As shown in fig. 3, fig. 3 is a schematic diagram of a third embodiment according to the present disclosure, and the embodiment shown in fig. 3 may include the following steps:
step 301, acquiring a voice to be processed in a vehicle cabin, and a recognition text and a sound source position corresponding to the voice to be processed.
Step 302, selecting a target split screen from a plurality of split screens of a vehicle system in a vehicle cabin according to a sound source position.
Step 303, obtaining at least one window in the target split screen and window information of the at least one window; the window information includes at least one of: window type, whether the window is in an active state, whether the window acquires focus.
In the embodiments of the present disclosure, window types include, for example, a system type and an application type. A window of the system type may be a system window, and a window of the application type may be an application window.
In the embodiments of the present disclosure, the voice application software in the vehicle-mounted system may send a call request through an interface provided by the operating system of the vehicle-mounted system, so as to acquire the window information of the windows in any split screen of the vehicle-mounted system. The interface may be, for example, android.
The window information of the windows in each split screen of the vehicle-mounted system can be stored in the operating system of the vehicle-mounted system in the form of key value pairs. Wherein, the key (key) in the key value pair may be the identifier of the split screen, and the value (value) may be the window information of at least one window in the split screen.
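The key-value layout described above can be illustrated with a small sketch. This is an assumption-laden illustration only: the field names (`type`, `active`, `has_focus`) and identifier values are hypothetical, not the actual storage format of any operating system.

```python
# Illustrative sketch of the key-value layout described above: the key is a
# split screen identifier and the value is the window information of the
# windows in that split screen. Field names are illustrative assumptions.
window_store = {
    "main_driving": [
        {"type": "application", "active": True, "has_focus": True},
        {"type": "system", "active": False, "has_focus": False},
    ],
}

def get_window_info(split_screen_id):
    """Return the window information recorded for one split screen."""
    return window_store.get(split_screen_id, [])
```

Looking up an unknown split screen identifier simply yields an empty list, so callers can fall through to a default behavior.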
Step 304, selecting a target window from the at least one window according to the window information of the at least one window.
In the embodiments of the present disclosure, the voice application software in the vehicle-mounted system may execute step 304 as follows: acquiring, according to the window information of the at least one window, a first window in the at least one window and taking the first window as the target window, wherein the window type of the first window is the application type, the first window is in an activated state, and the first window has acquired focus; when the first window is not acquired, acquiring a second window in the at least one window and taking the second window as the target window, wherein the window type of the second window is the application type and the second window is in an activated state; and when neither the first window nor the second window is acquired, acquiring a third window in the at least one window and taking the third window as the target window, wherein the window type of the third window is the application type.
That is, a first window whose window type is the application type, which is in an activated state and which has acquired focus is selected preferentially as the target window; when no such first window exists, a second window whose window type is the application type and which is in an activated state is selected as the target window; and when no such second window exists, a third window whose window type is the application type is selected as the target window. This improves the probability that the selected target window is the window with which the object actually wants to interact by voice, and thereby further improves the efficiency of voice interaction between objects in the vehicle cabin and the split screens of the vehicle-mounted system.
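The three-tier fallback above can be sketched compactly. This is a hedged sketch assuming the window-information fields from the earlier illustration (`type`, `active`, `has_focus`), which are hypothetical names, not a real API.

```python
# Hedged sketch of the three-tier window selection: prefer an
# application-type window that is active and holds focus; failing that, an
# active application-type window; failing that, any application-type window.
def select_target_window(windows):
    tiers = [
        lambda w: w["type"] == "application" and w["active"] and w["has_focus"],
        lambda w: w["type"] == "application" and w["active"],
        lambda w: w["type"] == "application",
    ]
    for matches in tiers:
        for window in windows:
            if matches(window):
                return window
    return None  # no application-type window in this split screen
```

Checking the tiers in order guarantees that a lower tier is consulted only when every window fails the tier above it, which mirrors the "first window / second window / third window" wording of the embodiment.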
Step 305, acquiring control information in the target window.
Step 306, determining a target control in the target window and a target operation for the target control according to the identification text and the control information.
Step 307, performing control processing on the target control in the target window according to the target operation.
It should be noted that, for details of steps 301 to 302 and steps 306 to 307, reference may be made to steps 101 to 102 and steps 104 to 105 in the embodiment shown in fig. 1, and detailed description thereof will not be provided here.
According to the multi-screen control method of the embodiments of the present disclosure, the voice to be processed in the vehicle cabin, together with the identification text and the sound source position corresponding to the voice to be processed, is acquired; a target split screen is selected from a plurality of split screens of the vehicle-mounted system in the vehicle cabin according to the sound source position; at least one window in the target split screen and the window information of the at least one window are acquired, the window information including at least one of: window type, whether the window is in an activated state, and whether the window has acquired focus; the target window is selected from the at least one window according to the window information of the at least one window; control information in the target window is acquired; a target control in the target window and a target operation for the target control are determined according to the identification text and the control information; and the target control in the target window is processed according to the target operation. In this way, voice interaction between objects at various positions in the cabin and the corresponding split screens can be realized, the voice interaction efficiency is improved, and the control efficiency of the plurality of split screens of the vehicle-mounted system is further improved.
In order to achieve the above embodiments, the present disclosure further provides a multi-screen control device. As shown in fig. 4, fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. The multi-screen control device 40 may include: a first acquisition module 401, a selection module 402, a second acquisition module 403, a first determination module 404 and a processing module 405.
The first obtaining module 401 is configured to obtain a voice to be processed in a vehicle cabin, and an identification text and a sound source position corresponding to the voice to be processed;
a selection module 402, configured to select a target split screen from a plurality of split screens of the vehicle-mounted system in the vehicle cabin according to the sound source position;
a second obtaining module 403, configured to obtain a target window in the target split screen and control information in the target window;
a first determining module 404, configured to determine, according to the identification text and the control information, a target control in the target window, and a target operation for the target control;
and the processing module 405 is configured to perform control processing on the target control in the target window according to the target operation.
As a possible implementation manner of the embodiments of the present disclosure, the selecting module 402 is specifically configured to query a first configuration table according to the sound source position, obtain a first seat position matched with the sound source position in the first configuration table, and a split screen identifier of at least one split screen corresponding to the first seat position; selecting one split screen identifier from at least one split screen identifier; and determining the split screen corresponding to the selected split screen identification as the target split screen.
As one possible implementation manner of the embodiments of the present disclosure, the split screen includes: main driving split screen, auxiliary driving split screen and rear seat split screen; the selecting module 402 is specifically further configured to select a split screen identifier of the main driving split screen when the split screen identifier of the main driving split screen exists in at least one split screen identifier; selecting the split screen identification of the auxiliary driving split screen under the condition that the split screen identification of the main driving split screen does not exist in at least one split screen identification and the split screen identification of the auxiliary driving split screen exists; and selecting the split screen identification of the rear seat split screen under the condition that the split screen identification of the main driving split screen does not exist in at least one split screen identification and the split screen identification of the auxiliary driving split screen does not exist.
As one possible implementation manner of the embodiments of the present disclosure, the apparatus further includes: the system comprises a second determining module, a third determining module, an establishing module and a generating module; the second determining module is used for determining a plurality of seats in the vehicle cabin, and seat positions and surrounding areas of the plurality of seats; the third determining module is used for determining a split screen identification of a first split screen located in the peripheral area of the seat in the vehicle cabin for each seat; the establishing module is used for establishing a corresponding relation between the seat position of the seat and the split screen identification of the first split screen; the generating module is configured to generate the first configuration table according to the corresponding relationship.
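The construction of the first configuration table described above can be sketched as follows. This is an illustrative assumption: the seat names and the mapping from seats to nearby split screen identifiers are hypothetical examples, not data defined by the disclosure.

```python
# Illustrative sketch of building the first configuration table: for each
# seat position, record the identifiers of the split screens located in its
# peripheral area. Seat and screen names are hypothetical examples.
def build_first_config_table(seat_positions, screens_near_seat):
    """Map each seat position to the split screen identifiers around it."""
    return {
        seat: list(screens_near_seat.get(seat, []))
        for seat in seat_positions
    }

table = build_first_config_table(
    ["front_left", "front_right"],
    {"front_left": ["main_driving"], "front_right": ["auxiliary_driving"]},
)
```

Querying this table with a seat position matched from the sound source then yields the candidate split screen identifications used by the selection step.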
As one possible implementation manner of the embodiment of the present disclosure, the second obtaining module 403 is specifically configured to obtain at least one window in the target split screen and window information of at least one window; the window information includes at least one of: window type, whether the window is in an activated state, whether the window acquires focus; selecting the target window from at least one window according to the window information of at least one window; and acquiring control information in the target window.
As one possible implementation of the embodiments of the present disclosure, the window information includes: window type, whether the window is in an activated state, and whether the window acquires focus; the second obtaining module 403 is specifically further configured to obtain a first window of at least one window according to window information of the at least one window, and take the first window as the target window; the window type of the first window is an application type, the first window is in an activated state, and the first window acquires a focus; acquiring a second window in at least one window under the condition that the first window is not acquired, and taking the second window as the target window; the window type of the second window is an application type, and the second window is in an activated state; acquiring a third window in at least one window under the condition that the first window is not acquired and the second window is not acquired, and taking the third window as the target window; the window type of the third window is an application type.
As a possible implementation manner of the embodiment of the present disclosure, the first determining module 404 is specifically configured to extract the recognition text and obtain an operation related word in the recognition text; inquiring operation words corresponding to each operation of the control in the control information according to the operation related words, and determining a first operation word matched with the operation related words, and an operation corresponding to the first operation word and the control; determining a control corresponding to the first operation word as the target control; and determining the operation corresponding to the first operation word as the target operation aiming at the target control.
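The matching of an operation-related word against the operation words registered for each control can be sketched as below. The data layout (a dictionary from control identifier to its operations and operation words) and all names are assumptions for illustration; the disclosure does not specify a concrete structure.

```python
# Hedged sketch of matching an operation-related word extracted from the
# recognition text against the operation words of each control, returning
# the matched control and its operation. Names are illustrative assumptions.
def match_control(operation_word, control_info):
    """control_info: {control_id: {operation: [operation words]}}."""
    for control_id, operations in control_info.items():
        for operation, words in operations.items():
            if operation_word in words:
                return control_id, operation
    return None, None  # fall back to split-screen-level control instructions
```

When no control matches, both results are `None`, which corresponds to the fallback path in which a control instruction is extracted from the identification text instead.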
As one possible implementation manner of the embodiments of the present disclosure, the processing module 405 is specifically configured to obtain an identifier of the target control, an identifier of the target window, and an identifier of the target split screen; and sending a processing request carrying the target operation, the identification of the target control, the identification of the target window and the identification of the target split screen to the vehicle-mounted system, so that the vehicle-mounted system controls and processes the target control according to the target operation.
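The processing request described above, which carries the target operation together with the identifiers of the target control, target window and target split screen, might be assembled as in the sketch below. The dictionary layout is purely an assumption; a real implementation would hand this payload to an interface of the vehicle-mounted system rather than return it.

```python
# Illustrative sketch of assembling the processing request carrying the
# target operation and the identifications of the target control, target
# window and target split screen. The payload layout is an assumption.
def build_processing_request(operation, control_id, window_id, screen_id):
    return {
        "operation": operation,
        "control_id": control_id,
        "window_id": window_id,
        "split_screen_id": screen_id,
    }

request = build_processing_request("increase", "volume_slider",
                                   "media_window", "main_driving")
```

Carrying all three identifiers lets the vehicle-mounted system route the operation unambiguously to one control on one window of one split screen.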
As one possible implementation manner of the embodiments of the present disclosure, the apparatus further includes: the third acquisition module and the sending module; the third acquisition module is used for acquiring a control instruction in the identification text under the condition that the target control is not determined according to the identification text and the control information; the sending module is used for sending a processing request carrying the control instruction to the vehicle-mounted system so that the vehicle-mounted system can control and process the display content on the target split screen according to the control instruction.
According to the multi-screen control device of the embodiments of the present disclosure, the voice to be processed in the vehicle cabin, together with the identification text and the sound source position corresponding to the voice to be processed, is acquired; a target split screen is selected from a plurality of split screens of the vehicle-mounted system in the vehicle cabin according to the sound source position; a target window in the target split screen and control information in the target window are acquired; a target control in the target window and a target operation for the target control are determined according to the identification text and the control information; and the target control in the target window is processed according to the target operation. In this way, voice interaction between objects at various positions in the cabin and the corresponding split screens can be realized, the voice interaction efficiency is improved, and the control efficiency of the plurality of split screens of the vehicle-mounted system is further improved.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the personal information of users involved are all performed on the premise of obtaining the consent of the users, comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides a vehicle, a readable storage medium, and a computer program product.
FIG. 5 illustrates a schematic block diagram of an example vehicle 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the vehicle 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the vehicle 500 may also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the vehicle 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the vehicle 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as a multi-screen control method. For example, in some embodiments, the multi-screen control method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the vehicle 500 via the ROM 502 and/or the communication unit 509. When a computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of the multi-screen control method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the multi-screen control method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor, which may be a special purpose or general-purpose programmable processor, may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (21)

1. A multi-screen control method, the method comprising:
acquiring voice to be processed in a vehicle cabin, and identifying text and sound source positions corresponding to the voice to be processed;
selecting a target split screen from a plurality of split screens of a vehicle-mounted system in the vehicle cabin according to the sound source position;
acquiring a target window in the target split screen and control information in the target window;
determining a target control in the target window and target operation aiming at the target control according to the identification text and the control information;
and controlling the target control in the target window according to the target operation.
2. The method of claim 1, wherein the selecting a target split screen from a plurality of split screens of a vehicle-mounted system in the vehicle cabin according to the sound source position comprises:
inquiring a first configuration table according to the sound source position, and acquiring a first seat position matched with the sound source position in the first configuration table and a split screen identification of at least one split screen corresponding to the first seat position;
selecting one split screen identifier from at least one split screen identifier;
and determining the split screen corresponding to the selected split screen identification as the target split screen.
3. The method of claim 2, wherein the split screen comprises: main driving split screen, auxiliary driving split screen and rear seat split screen; the selecting a split screen identifier from at least one split screen identifier comprises the following steps:
selecting the screen division identifier of the main driving screen division under the condition that the screen division identifier of the main driving screen division exists in at least one screen division identifier;
selecting the split screen identification of the auxiliary driving split screen under the condition that the split screen identification of the main driving split screen does not exist in at least one split screen identification and the split screen identification of the auxiliary driving split screen exists;
and selecting the split screen identification of the rear seat split screen under the condition that the split screen identification of the main driving split screen does not exist in at least one split screen identification and the split screen identification of the auxiliary driving split screen does not exist.
4. The method of claim 2, wherein the method further comprises:
determining a plurality of seats in the vehicle cabin, and a plurality of seat positions and surrounding areas of the seats;
for each seat, determining a split screen identification of a first split screen in the vehicle cabin, which is located in a peripheral area of the seat;
establishing a corresponding relation between the seat position of the seat and the split screen identification of the first split screen;
and generating the first configuration table according to the corresponding relation.
5. The method of claim 1, wherein the obtaining the target window in the target split screen and the control information within the target window comprises:
acquiring at least one window in the target split screen and window information of at least one window; the window information includes at least one of: window type, whether the window is in an activated state, whether the window acquires focus;
selecting the target window from at least one window according to the window information of at least one window;
and acquiring control information in the target window.
6. The method of claim 5, wherein the window information comprises: window type, whether the window is in an activated state, and whether the window acquires focus; the selecting the target window from at least one window according to the window information of at least one window comprises:
according to the window information of at least one window, a first window in the at least one window is obtained, and the first window is used as the target window; the window type of the first window is an application type, the first window is in an activated state, and the first window acquires a focus;
acquiring a second window in at least one window under the condition that the first window is not acquired, and taking the second window as the target window; the window type of the second window is an application type, and the second window is in an activated state;
acquiring a third window in at least one window under the condition that the first window is not acquired and the second window is not acquired, and taking the third window as the target window; the window type of the third window is an application type.
7. The method of claim 1, wherein the determining a target control within the target window and a target operation for the target control according to the recognition text and the control information comprises:
extracting the identification text to obtain operation related words in the identification text;
inquiring operation words corresponding to each operation of the control in the control information according to the operation related words, and determining a first operation word matched with the operation related words, and an operation corresponding to the first operation word and the control;
determining a control corresponding to the first operation word as the target control;
and determining the operation corresponding to the first operation word as the target operation aiming at the target control.
8. The method of claim 1, wherein the controlling the target control within the target window according to the target operation comprises:
acquiring the identification of the target control, the identification of the target window and the identification of the target split screen;
and sending a processing request carrying the target operation, the identification of the target control, the identification of the target window and the identification of the target split screen to the vehicle-mounted system, so that the vehicle-mounted system controls and processes the target control according to the target operation.
9. The method of claim 1, wherein the method further comprises:
acquiring a control instruction from the identification text in a case where the target control is not determined according to the identification text and the control information;
and sending a processing request carrying the control instruction to the vehicle-mounted system, so that the vehicle-mounted system performs control processing on the display content on the target split screen according to the control instruction.
10. A multi-screen control device, the device comprising:
the first acquisition module is used for acquiring the voice to be processed in the vehicle cabin, and the identification text and the sound source position corresponding to the voice to be processed;
the selection module is used for selecting a target split screen from a plurality of split screens of a vehicle-mounted system in the vehicle cabin according to the sound source position;
the second acquisition module is used for acquiring a target window in the target split screen and control information in the target window;
the first determining module is used for determining a target control in the target window and target operation aiming at the target control according to the identification text and the control information;
and the processing module is used for controlling and processing the target control in the target window according to the target operation.
11. The apparatus of claim 10, wherein the selection module is configured to,
inquiring a first configuration table according to the sound source position, and acquiring a first seat position matched with the sound source position in the first configuration table and a split screen identification of at least one split screen corresponding to the first seat position;
selecting one split screen identifier from at least one split screen identifier;
and determining the split screen corresponding to the selected split screen identification as the target split screen.
12. The apparatus of claim 11, wherein the split screen comprises: a main driving split screen, an auxiliary driving split screen and a rear seat split screen; and the selection module is specifically further configured to,
selecting the split screen identification of the main driving split screen in a case where the split screen identification of the main driving split screen exists in the at least one split screen identification;
selecting the split screen identification of the auxiliary driving split screen in a case where the split screen identification of the main driving split screen does not exist in the at least one split screen identification and the split screen identification of the auxiliary driving split screen exists;
and selecting the split screen identification of the rear seat split screen in a case where neither the split screen identification of the main driving split screen nor the split screen identification of the auxiliary driving split screen exists in the at least one split screen identification.
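Claim 12 defines a strict priority order over candidate split-screen identifications: main driving, then auxiliary driving, then rear seat. A sketch, where the identifier strings are hypothetical constants standing in for whatever identifications the vehicle-mounted system actually uses:

```python
# Hypothetical split-screen identifications, in claim 12's priority order
PRIORITY = ("main_driving", "auxiliary_driving", "rear_seat")

def pick_split_screen(screen_ids):
    """Choose one split-screen identification by the priority of claim 12:
    main driving > auxiliary driving > rear seat.
    """
    for preferred in PRIORITY:
        if preferred in screen_ids:
            return preferred
    return None  # none of the known split screens is a candidate
```

Because the loop walks the priority tuple rather than the candidate list, the result is independent of the order in which candidates were collected.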
13. The apparatus of claim 11, wherein the apparatus further comprises: the system comprises a second determining module, a third determining module, an establishing module and a generating module;
the second determining module is used for determining a plurality of seats in the vehicle cabin, and seat positions and surrounding areas of the plurality of seats;
the third determining module is used for determining a split screen identification of a first split screen located in the peripheral area of the seat in the vehicle cabin for each seat;
the establishing module is used for establishing a corresponding relation between the seat position of the seat and the split screen identification of the first split screen;
the generating module is configured to generate the first configuration table according to the corresponding relationship.
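Claim 13's first configuration table maps each seat position to the identifications of the split screens lying in that seat's surrounding area, and claim 11 later queries it by the seat matched to the sound source position. A sketch under simplifying assumptions: 2-D cabin coordinates and a square surrounding area of hypothetical radius, neither of which the patent specifies.

```python
def build_first_config_table(seats, screens, radius=1.0):
    """Build the first configuration table of claim 13: seat position ->
    split-screen identifications of screens in the seat's surrounding area.

    seats:   {"driver": (x, y), ...}  seat positions (hypothetical coordinates)
    screens: {"main_driving": (x, y), ...}  screen positions keyed by identification
    radius:  hypothetical half-width of a seat's square surrounding area
    """
    table = {}
    for seat, (sx, sy) in seats.items():
        table[seat] = [sid for sid, (px, py) in screens.items()
                       if abs(px - sx) <= radius and abs(py - sy) <= radius]
    return table

def lookup_screens(table, matched_seat):
    """Query the table with the seat matched to the sound source position (claim 11)."""
    return table.get(matched_seat, [])
```

Precomputing the table keeps the per-utterance work to a dictionary lookup; only a change to the cabin layout requires regenerating it.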
14. The apparatus of claim 10, wherein the second acquisition module is configured to,
acquiring at least one window in the target split screen and window information of at least one window; the window information includes at least one of: window type, whether the window is in an activated state, whether the window acquires focus;
selecting the target window from at least one window according to the window information of at least one window;
and acquiring control information in the target window.
15. The apparatus of claim 14, wherein the window information comprises: window type, whether the window is in an activated state, and whether the window acquires focus; and the second acquisition module is specifically further configured to,
acquiring, according to the window information of the at least one window, a first window among the at least one window and taking the first window as the target window, wherein the window type of the first window is an application type, the first window is in an activated state, and the first window acquires a focus;
in a case where the first window is not acquired, acquiring a second window from the at least one window and taking the second window as the target window, wherein the window type of the second window is an application type and the second window is in an activated state;
in a case where neither the first window nor the second window is acquired, acquiring a third window from the at least one window and taking the third window as the target window, wherein the window type of the third window is an application type.
16. The apparatus of claim 10, wherein the first determining module is specifically configured to,
extracting operation-related words from the identification text;
querying, according to the operation-related words, the operation words corresponding to the operations of the controls in the control information, and determining a first operation word matching the operation-related words, as well as the operation and the control corresponding to the first operation word;
determining the control corresponding to the first operation word as the target control;
and determining the operation corresponding to the first operation word as the target operation for the target control.
17. The apparatus of claim 10, wherein the processing module is configured to,
acquiring the identification of the target control, the identification of the target window and the identification of the target split screen;
and sending a processing request carrying the target operation, the identification of the target control, the identification of the target window and the identification of the target split screen to the vehicle-mounted system, so that the vehicle-mounted system performs control processing on the target control according to the target operation.
18. The apparatus of claim 10, wherein the apparatus further comprises: the third acquisition module and the sending module;
the third acquisition module is configured to acquire a control instruction from the identification text in a case where the target control is not determined according to the identification text and the control information;
and the sending module is configured to send a processing request carrying the control instruction to the vehicle-mounted system, so that the vehicle-mounted system performs control processing on the display content on the target split screen according to the control instruction.
19. A vehicle, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 9.
20. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 9.
CN202310620427.0A 2023-05-29 2023-05-29 Multi-screen control method and device and vehicle Pending CN116521113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310620427.0A CN116521113A (en) 2023-05-29 2023-05-29 Multi-screen control method and device and vehicle


Publications (1)

Publication Number Publication Date
CN116521113A true CN116521113A (en) 2023-08-01

Family

ID=87408336


Country Status (1)

Country Link
CN (1) CN116521113A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination