CN116028009A - Man-machine interaction method, device, equipment and storage medium in projection display - Google Patents


Info

Publication number
CN116028009A
CN116028009A (application CN202310101724.4A)
Authority
CN
China
Prior art keywords
page element
application program
target
user terminal
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310101724.4A
Other languages
Chinese (zh)
Inventor
黄启立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202310101724.4A priority Critical patent/CN116028009A/en
Publication of CN116028009A publication Critical patent/CN116028009A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a man-machine interaction method, device, equipment and storage medium in projection display. It relates to the technical field of artificial intelligence, in particular to voice technology, Internet of Vehicles technology, man-machine interaction and intelligent driving, and can be applied to scenarios such as automatic driving and unmanned driving. The man-machine interaction method in projection display comprises the following steps: receiving a voice instruction, wherein the voice instruction instructs a target operation to be performed on a target page element; in response to the voice instruction, searching for the target page element among projected page elements, a projected page element being a page element that the user terminal projects and displays on the vehicle-mounted terminal; and, in the case that the projected page elements include the target page element, controlling the user terminal to perform the target operation on the target page element according to attribute information of the target page element. In this way, "what you see is what you can say" is realized in projection display, the convenience of man-machine interaction is improved, and the safety of man-machine interaction in driving scenarios is improved.

Description

Man-machine interaction method, device, equipment and storage medium in projection display
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to voice technology, Internet of Vehicles technology, man-machine interaction and intelligent driving; it can be applied to scenarios such as automatic driving and unmanned driving, and in particular provides a man-machine interaction method, device, equipment and storage medium in projection display.
Background
In a vehicle-mounted scenario, the vehicle-mounted terminal can be interconnected with a user terminal and project the user terminal's display screen for display, so as to enrich the application ecosystem of the vehicle-mounted terminal.
In the related art, after the display screen of a user terminal is projected onto the vehicle-mounted terminal, a user who wants to open or control an application program on the user terminal must perform manual operations such as clicking or sliding the screen of the vehicle-mounted terminal.
However, while the vehicle is running, such manual operation of the vehicle-mounted terminal to control an application program on the user terminal poses a considerable potential safety hazard.
Disclosure of Invention
The disclosure provides a man-machine interaction method, device, equipment and storage medium in projection display, for improving the safety of man-machine interaction in driving scenarios.
According to a first aspect of the present disclosure, there is provided a human-computer interaction method in projection display, comprising:
receiving a voice instruction, wherein the voice instruction indicates a target operation on a target page element;
in response to the voice instruction, searching for the target page element among projected page elements, wherein a projected page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal; and
in the case that the projected page elements include the target page element, controlling the user terminal to perform the target operation on the target page element according to attribute information of the target page element.
According to a second aspect of the present disclosure, there is provided a human-machine interaction device in a projection display, comprising:
the receiving unit is used for receiving a voice instruction, wherein the voice instruction indicates target operation on a target page element;
the searching unit is used for responding to the voice instruction and searching the target page element in a projection page element, wherein the projection page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal;
and the control unit is used for controlling the user terminal to execute the target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of human-machine interaction in a projection display as described in the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of man-machine interaction in a projection display according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which it can be read by at least one processor of an electronic device, the at least one processor executing the computer program causing the electronic device to perform the method of man-machine interaction in a projection display as described in the first aspect.
According to a sixth aspect of the present disclosure, there is provided an intelligent driving vehicle comprising an electronic device as described in the third aspect.
According to the technical solution provided by the disclosure, in response to a voice instruction indicating that a target operation is to be performed on a target page element, the target page element is searched for among the page elements that the user terminal projects and displays on the vehicle-mounted terminal; if it is found there, the user terminal is controlled to perform the target operation on the target page element based on the element's attribute information. In this way, "what you see is what you can say" is realized in the projection display of the vehicle-mounted terminal: the user can operate page elements projected from the user terminal by voice, without manual operation on either the user terminal or the vehicle-mounted terminal. This improves the convenience of man-machine interaction, the safety of man-machine interaction in driving scenarios, and the degree of intelligence of the vehicle.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an application scenario to which embodiments of the present disclosure are applicable;
FIG. 2 is a first flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 3 is a second flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 4 is a third flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 5 is a fourth flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 6 is a fifth flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 7 is a sixth flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 8 is a seventh flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure;
FIG. 9 is a first schematic structural diagram of a human-machine interaction device in projection display according to an embodiment of the disclosure;
FIG. 10 is a second schematic structural diagram of a human-machine interaction device in projection display according to an embodiment of the disclosure;
FIG. 11 is a schematic block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In a vehicle-mounted scenario, if the user wants to operate an application program projected onto the vehicle from a mobile phone, for example to open an application program, click a key in it, or slide a page, the user typically performs click and slide operations on the screen of the vehicle or of the mobile phone. This mode of man-machine interaction is inconvenient and poses a certain potential safety hazard while the vehicle is running.
To address the above defect, the present disclosure provides a man-machine interaction method, device, equipment and storage medium in projection display, applied to the technical field of artificial intelligence, specifically to voice technology, Internet of Vehicles technology, man-machine interaction and intelligent driving, and applicable to scenarios such as automatic driving and unmanned driving. In this man-machine interaction method, a voice instruction is received that indicates a target operation on a target page element; in the case that the target page element is among the page elements of the user terminal that are projected and displayed on the vehicle-mounted terminal, the user terminal is controlled to perform the target operation on the target page element according to the element's attribute information. In this way, "what you see is what you can say" is realized in projection display, the convenience of man-machine interaction in projection display is improved, and driving safety and the vehicle's degree of intelligence are improved.
Fig. 1 is a schematic diagram of an application scenario to which an embodiment of the present disclosure is applicable. In the application scenario, the related devices include a vehicle 110 and a user terminal 120, where the vehicle 110 is provided with a vehicle-mounted terminal 111, and a voice application (also called a voice assistant) may be installed on the vehicle-mounted terminal 111, and a voice interaction function may be implemented through the voice application. The user terminal 120 may be connected to the in-vehicle terminal 111 by wireless or wired communication, and may project its display screen onto the display screen of the in-vehicle terminal 111 to display. In the man-machine interaction process, the vehicle-mounted terminal 111 may receive the user voice, and control the user terminal 120 to operate the page element projected and displayed on the vehicle-mounted terminal 111 according to the user voice.
Alternatively, as shown in fig. 1, a communication interconnection application may be installed on the vehicle-mounted terminal 111 and another on the user terminal 120 (the latter is not shown in fig. 1), so that the vehicle-mounted terminal 111 and the user terminal 120 communicate wirelessly through their respective communication interconnection applications.
The user terminal may be a personal digital assistant (PDA), a handheld device with wireless communication capability (e.g., a smartphone or tablet computer), a computing device (e.g., a personal computer (PC)), a wearable device (e.g., a smart watch or smart bracelet), or a smart home device (e.g., a smart speaker or smart display device).
The following describes the technical scheme of the present disclosure and how the technical scheme of the present disclosure solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart illustrating a human-computer interaction method in projection display according to an embodiment of the disclosure. As shown in fig. 2, the human-computer interaction method in the projection display includes:
S201, receiving a voice instruction, wherein the voice instruction indicates target operation on a target page element.
The target page element may be a layout element supporting interaction on an application page, or an icon of an application program. For example, when the user instructs, through a voice instruction, a click operation on a layout element such as an input box or a key on an application page, that layout element is the target page element. As another example, when the user opens or closes an application program through a voice instruction, the target page element is the icon of that application program. There may be one or more target page elements: when the user addresses multiple page elements through voice, there are multiple target page elements.
Wherein the target operation may include one or more interactive operations. For example, the target operation may include one or more of a click operation, an input operation, a long press operation, a slide operation, a switch operation, a start operation, a close operation, and an exit operation.
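The element and operation model described above can be sketched in Python. All names here (`PageElement`, `supported_ops`, the sample labels) are illustrative assumptions, not part of the patent's actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical model of a page element projected from the user terminal.
@dataclass
class PageElement:
    element_id: str     # identification information
    label: str          # visible text, e.g. a button caption or app name
    position: tuple     # (x, y) position information on the projected page
    supported_ops: set = field(default_factory=set)  # e.g. {"click", "input"}

    def supports(self, operation: str) -> bool:
        """A target operation is valid only if the element supports it."""
        return operation in self.supported_ops

# An input box typically supports click and input, but not slide.
search_box = PageElement("el_1", "Destination", (120, 40), {"click", "input"})
print(search_box.supports("input"))
print(search_box.supports("slide"))
```

Checking the supported operations before dispatching would let the terminal reject a voice instruction that asks an element for an operation it cannot perform.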
In this embodiment, the vehicle-mounted terminal may receive a voice instruction from the user when the voice application is in the wake-up state. For example, after waking up the voice application on the vehicle-mounted terminal, the user speaks a voice instruction indicating that a destination be input in the navigation application, and the vehicle-mounted terminal receives this voice instruction.
S202, searching a target page element in a projection page element in response to a voice instruction, wherein the projection page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal.
After the user terminal and the vehicle-mounted terminal establish a communication connection, the user terminal can send its display content to the vehicle-mounted terminal, and the vehicle-mounted terminal displays a projection picture based on that content. The projection picture comprises a plurality of page elements, namely the page elements in the user terminal's display content, i.e., those the user terminal projects and displays on the vehicle-mounted terminal. For simplicity of description and ease of distinction, the page elements in the projection picture are referred to as projected page elements.
In this embodiment, after receiving the voice instruction, the vehicle-mounted terminal may perform voice recognition on it to obtain a recognition result, and determine from that result that the voice instruction indicates a target operation on a target page element. It then searches for the target page element among the projected page elements, to determine whether they include it, i.e., whether the target page element belongs to the page elements the user terminal projects and displays on the vehicle-mounted terminal.
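Under a simple assumption that matching is done on the element's visible label, the search in S202 reduces to a lookup with two outcomes. This sketch uses hypothetical names and is not the patent's actual matching logic:

```python
# Hypothetical lookup of the target page element among projected elements,
# keyed on the label text recognized from the voice instruction.
def find_target(projected_elements, target_label):
    for element in projected_elements:
        if element["label"] == target_label:
            return element      # projected elements include the target: go to S203
    return None                 # target not currently projected

projected = [
    {"id": "el_1", "label": "Navigate"},
    {"id": "el_2", "label": "Music"},
]
print(find_target(projected, "Music"))
print(find_target(projected, "Settings"))
```

A real implementation would likely match more loosely (synonyms, fuzzy text), but the found/not-found branch is the decision S203 depends on.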
S203, controlling the user terminal to execute target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
The attribute information of the target page element may include at least one of identification information of the target page element, location information of the target page element, and interaction operations supported by the target page element.
In this embodiment, the vehicle-mounted terminal may obtain the attribute information of the projected page element through communication connection between the vehicle-mounted terminal and the user terminal. In the case that the projected page element includes the target page element, the vehicle-mounted terminal may acquire attribute information of the target page element from attribute information of the projected page element, and send the attribute information of the target page element and the target operation to the user terminal, so that the user terminal performs the target operation on the target page element according to the attribute information of the target page element.
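One plausible shape for the control step in S203 is a small serialized message carrying the element's attribute information and the target operation, sent over the existing interconnection channel. The field names here are assumptions, not the patent's protocol:

```python
import json

# Hypothetical control message from the in-vehicle terminal to the user
# terminal: the target element's attribute information plus the operation.
def build_control_message(element, operation):
    return json.dumps({
        "element_id": element["id"],
        "position": element["position"],
        "operation": operation,
    })

target = {"id": "el_7", "position": [320, 880]}
msg = build_control_message(target, "click")
print(msg)
```

On receipt, the user terminal would resolve the element by its identification or position information and perform the operation locally.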
In the embodiment of the disclosure, in response to a voice instruction, the target page element on which the instruction indicates an operation is searched for among the projected page elements, and in the case that the projected page elements include the target page element, the user terminal is controlled to perform the target operation on it based on its attribute information. Voice interaction with projected page elements is thereby realized, achieving "what you see is what you can say" in projection display scenarios; the convenience of human-machine interaction with projected page elements is improved, the potential safety hazard that such interaction poses to driving is reduced, and the vehicle's degree of intelligence is improved.
Fig. 3 is a second flowchart of a man-machine interaction method in projection display according to an embodiment of the disclosure. As shown in fig. 3, the human-computer interaction method in the projection display includes:
s301, receiving a voice instruction, wherein the voice instruction indicates target operation on a target page element.
The implementation principle and technical effect of S301 may refer to the foregoing embodiments, and will not be described herein.
S302, responding to the voice instruction, and determining whether a target application program to which the target page element belongs is an application program on a user terminal or an application program on a vehicle-mounted terminal.
The application program on the user terminal refers to an application program installed on the user terminal, and the application program on the vehicle-mounted terminal refers to an application program installed on the vehicle-mounted terminal. For the projected page element, the application to which the projected page element belongs is an application installed on the user terminal.
In this embodiment, after receiving the voice instruction and determining that it indicates a target operation on a target page element, the vehicle-mounted terminal may determine the application program to which the target page element belongs; for simplicity of description, this is referred to as the target application program. The vehicle-mounted terminal may then determine whether the target application program is an application program on the user terminal or on the vehicle-mounted terminal, with any of the following results: the target application program is on the vehicle-mounted terminal but not on the user terminal; it is on both the vehicle-mounted terminal and the user terminal; or it is on the user terminal but not on the vehicle-mounted terminal. If the target application program is on the user terminal and not on the vehicle-mounted terminal, execution continues with S303. If it is on the vehicle-mounted terminal, whether or not it is also on the user terminal, the subsequent steps need not be performed.
Alternatively, if the target application program is an application program on the vehicle-mounted terminal, whether or not it is also on the user terminal, the vehicle-mounted terminal may itself perform the target operation on the target page element, or may prompt the user that the target application program is not an application program on the user terminal.
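The three possible judgment results above can be sketched as a single classification function. The return labels and application identifiers are hypothetical, chosen only to make the branches explicit:

```python
# Hypothetical three-way classification of the target application's locality,
# matching the branches described above.
def classify_app(app_id, vehicle_apps, user_apps):
    if app_id in vehicle_apps:
        # On the vehicle-mounted terminal (whether or not also on the phone):
        # handle locally, no projection steps needed.
        return "handle_on_vehicle_terminal"
    if app_id in user_apps:
        return "search_projected_elements"   # continue with S303
    return "prompt_app_not_found"

vehicle_apps = {"nav", "radio"}
user_apps = {"nav", "music"}
print(classify_app("nav", vehicle_apps, user_apps))
print(classify_app("music", vehicle_apps, user_apps))
print(classify_app("game", vehicle_apps, user_apps))
```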
S303, if the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, searching the target page element in the projected page element.
S304, controlling the user terminal to execute target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
In the present embodiment, in the case where the target application is an application on the user terminal and is not an application on the in-vehicle terminal, S303 and S304 are executed. The implementation principles and technical effects of S303 and S304 may refer to the foregoing embodiments, and are not described herein.
In the embodiment of the disclosure, it is first judged whether the target application program to which the target page element belongs is an application program on the user terminal rather than on the vehicle-mounted terminal. If so, it is then judged whether the target page element indicated by the voice instruction is projected and displayed on the display screen of the vehicle-mounted terminal; if it is, the user terminal is controlled, over the communication connection between the vehicle-mounted terminal and the user terminal, to perform the target operation on the target page element according to the element's attribute information. In this way, voice interaction in projection display is realized, and the accuracy of voice interaction is improved along with the convenience and safety of man-machine interaction.
In some embodiments, one possible implementation of determining whether the target application program to which the target page element belongs is an application program on the user terminal or on the vehicle-mounted terminal includes: determining whether the target application program is an application program on the vehicle-mounted terminal; if not, acquiring the application program list corresponding to the user terminal; determining whether the application program list contains attribute information of the target application program; and, if it does, determining that the target application program is an application program on the user terminal, otherwise determining that it is not.
The application program list corresponding to the user terminal includes attribute information of a plurality of application programs on the user terminal. The attribute information of an application program can include its identification information, which in turn can include the application program's icon, its application name, and the names of its corresponding program packages (such as the installation package name and the configuration data package name).
In this embodiment, the identification information of the target application program may be matched with the identification information of the application program on the vehicle-mounted terminal, if the identification information of the target application program exists in the identification information of the application program on the vehicle-mounted terminal, it is determined that the target application program is the application program on the vehicle-mounted terminal, otherwise, it is determined that the target application program is not the application program on the vehicle-mounted terminal. And judging whether the target application program is the application program on the user terminal or not under the condition that the target application program is not the application program on the vehicle-mounted terminal.
The vehicle-mounted terminal can obtain the application program list corresponding to the user terminal from the user terminal in advance, over their communication connection, and store it; after determining that the target application program is not an application program on the vehicle-mounted terminal, it retrieves the list from its storage space. Alternatively, the vehicle-mounted terminal may fetch the list from the user terminal only after making that determination. The vehicle-mounted terminal then searches the application program list for attribute information of the target application program, and in particular for its identification information. If the list contains that attribute information, the target application program is determined to be an application program on the user terminal; otherwise it is not. Basing the judgment on the application program list thus improves both the accuracy and the efficiency of determining whether the target application program is an application program on the user terminal.
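The manifest check described above can be sketched as a match against each entry's identification information (here, app name or package name; the entries are invented for illustration):

```python
# Hypothetical manifest lookup: the application list maps each user-terminal
# app's identification information (app name, package name) to its attributes.
def is_on_user_terminal(target_name, app_list):
    for entry in app_list:
        if target_name in (entry["app_name"], entry["package_name"]):
            return True
    return False

app_list = [
    {"app_name": "MapGo", "package_name": "com.example.mapgo"},
    {"app_name": "TunePlay", "package_name": "com.example.tuneplay"},
]
print(is_on_user_terminal("MapGo", app_list))
print(is_on_user_terminal("ChatNow", app_list))
```

Matching on several identification fields makes the lookup robust to whether the voice pipeline resolved the app to a display name or a package name.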
Further, the vehicle-mounted terminal can acquire an application program list corresponding to the user terminal from the memory space corresponding to the voice application program; the voice application program is used for executing a voice interaction function on the vehicle-mounted terminal. Specifically, the vehicle-mounted terminal may acquire an application program list corresponding to the user terminal from the user terminal, store the application program list in a memory space corresponding to the voice application program, and acquire the application program list corresponding to the user terminal from the memory space corresponding to the voice application program in the process of determining whether the target application program is the application program on the user terminal. Therefore, the acquisition efficiency of the application program list is improved, and the man-machine interaction efficiency in projection display is improved.
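The caching scheme above (fetch the list from the user terminal once, then serve later lookups from the voice application's own memory space) can be sketched as follows; the class and function names are assumptions:

```python
# Hypothetical in-memory cache held by the voice application.
class ManifestCache:
    def __init__(self, fetch_fn):
        self._fetch = fetch_fn      # fetches the list over the interconnection
        self._manifest = None

    def get(self):
        if self._manifest is None:  # first use: fetch from the user terminal
            self._manifest = self._fetch()
        return self._manifest       # later uses: in-memory copy, no round trip

calls = []
def fetch_from_phone():
    calls.append(1)
    return [{"app_name": "MapGo"}]

cache = ManifestCache(fetch_from_phone)
cache.get()
cache.get()
print(len(calls))   # the user terminal was queried only once
```

Keeping the list in the voice application's memory avoids a round trip to the user terminal on every voice instruction, which is the efficiency gain the text describes.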
Based on the scheme that the application program list is stored in the memory space corresponding to the voice application program, fig. 4 is a flowchart of a human-computer interaction method in projection display according to an embodiment of the disclosure.
As shown in fig. 4, the human-computer interaction method in the projection display includes:
s401, after the communication connection between the vehicle-mounted terminal and the user terminal is established, an application program list is obtained from the user terminal, wherein the application program list contains attribute information of a plurality of application programs on the user terminal.
In this embodiment, after the vehicle-mounted terminal and the user terminal establish communication connection, the display screen of the user terminal may be projected and displayed as a projection screen on the vehicle-mounted terminal, and the vehicle-mounted terminal may request the user terminal to obtain the application program list, and receive the application program list returned by the user terminal.
In one possible implementation, after the vehicle-mounted terminal establishes a communication connection with the user terminal, the vehicle-mounted terminal may send an application acquisition request to the user terminal; and the vehicle-mounted terminal receives an application program list returned by the user terminal in response to the application program acquisition request. The application program acquisition request is used for requesting to acquire attribute information of an application program with the application program type being the target type, and the application program list comprises the attribute information of the application program with the application program type being the target type on the user terminal.
The target type may include an application type that the vehicle-mounted terminal supports projection display. For example, the target type includes a picture play type, an audio/video play type, a navigation type, and a map type.
In this implementation manner, after the vehicle-mounted terminal and the user terminal establish communication connection, the vehicle-mounted terminal may send an application program acquisition request to the user terminal, where the application program acquisition request may include a target type to request to acquire attribute information of an application program with the application program type being the target type; after receiving the application acquisition request, the user terminal can respond to the application acquisition request to acquire the attribute information of the application program with the application program type being the target type from the attribute information of the application program installed by the user terminal, and return the attribute information of the application program with the application program type being the target type to the vehicle-mounted terminal. Therefore, in the process of acquiring the application program list, the types of the application programs supported by the vehicle-mounted terminal for projection display are considered, the accuracy of the application program list is improved, and the accuracy of judging whether the target application program is the application program on the user terminal is further improved.
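The user-terminal side of this exchange — returning only the attribute information of applications whose type is a target type — can be sketched as a filter; the record fields (`package`, `type`) and the concrete type names are illustrative assumptions.

```python
# Hypothetical attribute records for applications installed on the
# user terminal; the field names are assumptions for this sketch.
installed_apps = [
    {"package": "com.example.map", "type": "navigation"},
    {"package": "com.example.music", "type": "audio_video"},
    {"package": "com.example.mail", "type": "email"},
]

# Target types the vehicle-mounted terminal supports for projection display
TARGET_TYPES = {"picture", "audio_video", "navigation", "map"}

def build_app_list(apps, target_types):
    """Return only the attribute information of applications whose
    application type is one of the requested target types."""
    return [app for app in apps if app["type"] in target_types]

app_list = build_app_list(installed_apps, TARGET_TYPES)
# The email application is filtered out; only projectable types remain
```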
Further, after the vehicle-mounted terminal and the user terminal establish communication connection, the vehicle-mounted terminal can send an application program acquisition request to the communication interconnection application program on the user terminal through the communication interconnection application program on the vehicle-mounted terminal; the user terminal may also send attribute information of an application whose application type is a target type to the in-vehicle terminal through a communication interconnect application on the user terminal.
S402, storing the application program list into a memory space corresponding to the voice application program.
In this embodiment, after receiving the application program list, the vehicle-mounted terminal stores the application program list in a memory space corresponding to the voice application program, so as to improve efficiency of acquiring the application program list in the voice interaction process and improve efficiency of voice interaction.
S403, receiving a voice instruction, wherein the voice instruction indicates performing a target operation on the target page element.
S404, in response to the voice instruction, determining whether the target application program to which the target page element belongs is an application program on the vehicle-mounted terminal.
S405, if the target application program is not the application program on the vehicle-mounted terminal, acquiring an application program list corresponding to the user terminal, and determining whether the application program list contains attribute information of the target application program.
S406, if the application program list contains attribute information of the target application program, determining that the target application program is the application program on the user terminal, otherwise, determining that the target application program is not the application program on the user terminal.
S407, if the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, searching the target page element in the projected page element.
S408, controlling the user terminal to execute target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
The implementation principles and technical effects of S403 to S408 may refer to the foregoing embodiments, and are not repeated.
In the embodiment of the disclosure, after the vehicle-mounted terminal and the user terminal establish a communication connection, an application program list is obtained from the user terminal and stored in the memory space corresponding to the voice application program. After a voice command is received, whether the target application program, to which the target page element that the voice command indicates operating on belongs, is an application program on the user terminal can be judged based on the application program list, which improves the efficiency and accuracy of the application program judgment. In the case where the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, if the projected page element includes the target page element, the user terminal is controlled to perform the target operation on the target page element according to the attribute information of the target page element. Therefore, voice interaction in projection display is realized, and the accuracy and efficiency of voice interaction in projection display are improved while the convenience and safety of man-machine interaction are improved.
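The decision flow of S404 to S406 can be sketched as a small pure function; the function name `locate_target_app` and the `id` attribute key are illustrative assumptions, not terms from the disclosure.

```python
def locate_target_app(target_app_id, vehicle_apps, user_app_list):
    """Sketch of steps S404-S406: decide where the target application
    program lives. Returns 'vehicle', 'user', or 'unknown'."""
    if target_app_id in vehicle_apps:                      # S404
        return "vehicle"
    # S405: consult the cached application list of the user terminal
    for attrs in user_app_list:
        if attrs.get("id") == target_app_id:               # S406
            return "user"
    return "unknown"

vehicle_apps = {"radio"}
user_app_list = [{"id": "map"}, {"id": "music"}]
```

Only when the result is `"user"` does the flow proceed to search for the target page element among the projected page elements (S407).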
In some embodiments, based on any of the preceding embodiments, one possible implementation of looking up the target page element in the projected page element includes: acquiring page element information, wherein the page element information comprises attribute information of projection page elements, and the attribute information of the projection page elements comprises identification information of the projection page elements; and matching the identification information of the target page element with the identification information of the projected page element in the page element information, and determining whether the projected page element comprises the target page element. Therefore, based on the page element information, the accuracy and efficiency of judging whether the target page element is projected and displayed on the vehicle-mounted terminal are improved.
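The matching step can be sketched as follows; the dictionary layout of the attribute information and the rule that any single identifier (unique number or element name) suffices for a match are assumptions of this sketch, not disclosed matching rules.

```python
def find_target_element(target_id, page_element_info):
    """Match the target page element's identification information against
    the identification information of each projected page element; return
    the matching element's attribute information, or None if absent."""
    for attrs in page_element_info:
        ident = attrs.get("identification", {})
        # A match on any one identifier (unique number or element name)
        # is treated as sufficient in this sketch.
        if target_id in (ident.get("number"), ident.get("name")):
            return attrs
    return None

projected = [
    {"identification": {"number": 7, "name": "play_button"}, "pos": (40, 80)},
    {"identification": {"number": 8, "name": "next_button"}, "pos": (90, 80)},
]
```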
Fig. 5 is a flowchart illustrating a human-computer interaction method in projection display according to an embodiment of the disclosure. As shown in fig. 5, the human-computer interaction method in the projection display includes:
S501, receiving a voice instruction, wherein the voice instruction indicates performing a target operation on a target page element.
S502, responding to the voice instruction, and determining whether the target application program to which the target page element belongs is an application program on the user terminal or an application program on the vehicle-mounted terminal.
The implementation principles and technical effects of S501 to S502 may refer to the foregoing embodiments, and are not repeated.
S503, if the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, acquiring page element information, wherein the page element information comprises attribute information of a projection page element, and the attribute information of the projection page element comprises identification information of the projection page element.
Wherein the identification information of the projected page element may include at least one of a unique number of the projected page element, an element name of the projected page element, and a pattern of the projected page element.
In this embodiment, after the projection page elements are projected and displayed, the vehicle-mounted terminal may obtain the page element information from the user terminal in advance through the communication connection between the vehicle-mounted terminal and the user terminal and store it, and then, after determining that the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, obtain the page element information from the storage space. Alternatively, the vehicle-mounted terminal may acquire the page element information from the user terminal after determining that the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal.
S504, the identification information of the target page element is matched with the identification information of the projection page element in the page element information, and whether the projection page element comprises the target page element is determined.
In this embodiment, the identification information of the target page element is matched with the identification information of the projected page elements in the page element information; if identification information matching that of the target page element exists in the page element information, it may be determined that the projected page elements contain the target page element; otherwise, it may be determined that the projected page elements do not contain the target page element.
S505, controlling the user terminal to execute target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
The implementation principle and technical effect of S505 may refer to the foregoing embodiments, and will not be described herein.
In the embodiment of the disclosure, under the condition that the target application program, to which the target page element that the voice instruction indicates operating on belongs, is determined to be an application program on the user terminal and not an application program on the vehicle-mounted terminal, the vehicle-mounted terminal determines whether the projected page elements contain the target page element based on the page element information containing the attribute information of the projected page elements, so that the accuracy and efficiency of judging whether the projected page elements contain the target page element are improved. If the projected page elements include the target page element, the user terminal is controlled to execute the target operation on the target page element according to the attribute information of the target page element. Therefore, voice interaction in projection display is realized, and the accuracy and efficiency of voice interaction in projection display are improved while the convenience and safety of man-machine interaction are improved.
In some embodiments, after determining that the target application is an application on the user terminal and not an application on the vehicle-mounted terminal, the vehicle-mounted terminal may acquire page element information from a memory space corresponding to the voice application. Specifically, after projection display, the vehicle-mounted terminal may acquire page element information from the user terminal, store the page element information into a memory space corresponding to the voice application program, and acquire the page element information from the memory space corresponding to the voice application program after determining that the target application program is the application program on the user terminal and is not the application program on the vehicle-mounted terminal. Therefore, the acquisition efficiency of the page element information is improved, and the efficiency of man-machine interaction in projection display is improved.
In some embodiments, when the voice application program is in the awake state, the vehicle-mounted terminal may acquire updated page element information from the user terminal according to an update period and store the updated page element information. This fully accounts for the fact that the projection picture on the vehicle-mounted terminal may change; for example, a navigation picture projected to the vehicle-mounted terminal by the user terminal may change in real time while the vehicle is running. Updating the page element information periodically in the voice awake state improves the accuracy of the page element information and, in turn, the accuracy of judging whether the projected page elements contain the target page element.
The user can wake up the voice application program by speaking the wake-up word; or, a period of time during which the voice application is in the awake state may be preset, and during this period of time, the vehicle-mounted terminal opens and runs the voice application so that the voice application is in the awake state.
In this embodiment, the vehicle-mounted terminal may acquire updated page element information from the user terminal every other update period under the condition that the voice application is in the awake state, store the updated page element information, and update the page element information in time.
Furthermore, the vehicle-mounted terminal can store the updated page element information into the memory space of the voice application program, so that the efficiency of acquiring the page element information in the voice interaction process is improved.
Further, the vehicle-mounted terminal can acquire updated page element information from the user terminal through the communication interconnection application program according to the update period under the condition that the voice application program is in the wake-up state, and the updated page element information is stored.
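The refresh policy described above — pull fresh page element information only while the voice application is awake, once per update period — can be sketched as a pure function so that it is testable without real timers; the function name and signature are assumptions of this sketch.

```python
def should_refresh(awake, now, last_refresh, update_period):
    """Decide whether the vehicle-mounted terminal should pull fresh page
    element information from the user terminal: only while the voice
    application is in the awake state, and only once the update period
    has elapsed since the last refresh."""
    return awake and (now - last_refresh) >= update_period

# While awake, a refresh fires once the update period has elapsed;
# while asleep, no refresh is scheduled regardless of elapsed time.
refresh_now = should_refresh(True, now=10.0, last_refresh=8.0, update_period=2.0)
```

A real implementation would drive this from a periodic timer in the voice application and write the result into its memory space, as described above.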
In some embodiments, the projected page elements are page elements displayed on a virtual screen of the user terminal, rather than page elements displayed on a real display screen of the user terminal. After the communication connection between the vehicle-mounted terminal and the user terminal is established, the user terminal can generate a virtual screen corresponding to the vehicle-mounted terminal, display content on the virtual screen is projected and displayed to the vehicle-mounted terminal, the virtual screen is special for projection, an application program opened through the vehicle-mounted terminal can be displayed on the virtual screen, display content of a real display screen of the user terminal is not affected, and user experience in projection display is improved.
In some embodiments, the vehicle-mounted terminal may send a page element acquisition request to the user terminal according to the update period when the voice application program is in the awake state, where the page element acquisition request indicates that the page elements displayed on the virtual screen of the user terminal are to be acquired; after receiving the page element acquisition request, the user terminal can acquire the attribute information of the page elements on its virtual screen, package the attribute information to obtain page element package data, and send the page element package data to the vehicle-mounted terminal; the vehicle-mounted terminal receives the page element package data returned by the user terminal and performs data processing on it to obtain the updated page element information. Therefore, when the voice application program is in the awake state, the attribute information of the page elements displayed on the virtual screen of the user terminal is periodically polled from the user terminal, improving the accuracy of the page element information.
In this embodiment, when the voice application program is in the awake state, the vehicle-mounted terminal sends a page element acquisition request to the user terminal according to the update period. After receiving the request, the user terminal, considering that its operating system may differ from that of the vehicle-mounted terminal, packages the attribute information of the page elements into a format the vehicle-mounted terminal's operating system can parse, obtains page element package data, and sends the page element package data to the vehicle-mounted terminal. The vehicle-mounted terminal receives the page element package data returned by the user terminal and performs data processing such as format analysis and data reading on it to obtain the updated page element information.
Further, the vehicle-mounted terminal can send a page element acquisition request to the user terminal through a communication interconnection application program on the vehicle-mounted terminal; the user terminal may send the page element package data to the vehicle terminal via a communication interconnect application on the user terminal.
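The package-and-parse exchange can be sketched with a cross-platform wire format; JSON is an illustrative choice here, since the disclosure does not name the encapsulation format.

```python
import json

def package_page_elements(elements):
    """User-terminal side: encapsulate the page element attribute
    information into bytes that the vehicle-mounted terminal's operating
    system can parse (JSON over UTF-8 is an assumption of this sketch)."""
    return json.dumps(elements).encode("utf-8")

def unpack_page_elements(payload):
    """Vehicle-terminal side: the 'format analysis and data reading'
    step that recovers the attribute information."""
    return json.loads(payload.decode("utf-8"))

elements = [{"name": "destination_box", "pos": [120, 40], "size": [200, 30]}]
payload = package_page_elements(elements)
restored = unpack_page_elements(payload)
```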
In some embodiments, in the page element information, the attribute information of the projected page element may include position information and size information of the projected page element. Based on the above, the position information of the projection picture where the projection page element is located can be updated according to the updated page element information. Specifically, the position information of the projection picture where the projection page element is located can be determined and updated on the vehicle-mounted terminal according to the position information and the size information of the projection page element in the updated page element information, so that the position accuracy of the projection picture is improved.
In this embodiment, the position information of a projected page element may be its position coordinates on the display screen of the user terminal, and the size information may be its size on the display screen of the user terminal. The position coordinates of the projection picture on the vehicle-mounted terminal can be calculated from the position information and the size information of the projected page elements. For example, the position information and size information of a projected page element on the vehicle-mounted terminal may have a certain proportional relationship with its position information and size information on the display screen of the user terminal; according to this proportional relationship, the position information and size information of the projected page element on the vehicle-mounted terminal can be calculated first, and from these, the position coordinates of the projection picture on the vehicle-mounted terminal can be obtained.
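The proportional mapping can be sketched as follows; a single uniform scale factor is an illustrative assumption — the disclosure only states that "a certain proportional relationship" exists.

```python
def map_to_projection(pos, size, scale):
    """Map a page element's position and size from the user terminal's
    screen into the projection picture on the vehicle-mounted terminal,
    assuming a uniform proportional relationship."""
    x, y = pos
    w, h = size
    return (x * scale, y * scale), (w * scale, h * scale)

# A 100x50 element at (200, 100) on the phone, projected at scale 1.5
proj_pos, proj_size = map_to_projection((200, 100), (100, 50), 1.5)
```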
Based on the scheme that the voice command indicates clicking operation is performed on the target page element and the attribute information of the target page element includes the position information of the target page element, fig. 6 is a flowchart of a man-machine interaction method in projection display according to an embodiment of the disclosure. As shown in fig. 6, the human-computer interaction method in the projection display includes:
S601, receiving a voice instruction, wherein the voice instruction indicates a clicking operation on the target page element.
S602, searching a target page element in a projection page element in response to a voice instruction, wherein the projection page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal.
The implementation principles and technical effects of S601 to S602 may refer to the foregoing embodiments, and are not repeated.
S603, sending the position information of the target page element to the user terminal to instruct the user terminal to perform clicking operation on the target page element based on the position information of the target page element when the projected page element comprises the target page element.
The position information of the target page element may be a position coordinate of a projection screen of the target page element on the vehicle terminal, or may be a position coordinate of a display screen of the target page element on the user terminal.
In this embodiment, when the projected page elements include the target page element, the vehicle-mounted terminal may acquire the attribute information of the target page element from the attribute information of the projected page elements, and acquire the position information of the target page element from that attribute information. The position information of the target page element may be sent to the user terminal via the communication interconnection application program so that the user terminal performs a click operation on the target page element based on the position information. Thus, a simulated click on the target page element is realized on the vehicle-mounted terminal, and the page element in the projection display is clicked by voice.
As an example, after receiving a voice command from the user, the vehicle-mounted terminal may search for the identification information of the target page element among the identification information of the projected page elements. If the identification information of a projected page element matches that of the target page element, the center point coordinate of the matched projected page element can be obtained and sent to the communication interconnection application program of the vehicle-mounted terminal, which initiates a simulated click operation: the communication interconnection application program sends the center point coordinate to the user terminal, and the user terminal performs the click operation on the target page element based on the center point coordinate.
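The center-point computation and hand-off can be sketched as follows; the `send` callable stands in for the communication interconnection channel and is an assumption of this sketch.

```python
def center_point(pos, size):
    """Center point coordinate of a matched projected page element,
    from its top-left position and its size."""
    x, y = pos
    w, h = size
    return (x + w / 2, y + h / 2)

def simulated_click(element, send):
    """Hand the center point to the communication interconnection
    application program, which forwards it to the user terminal so the
    click can be performed there."""
    send(center_point(element["pos"], element["size"]))

sent = []
simulated_click({"pos": (40, 80), "size": (20, 10)}, sent.append)
# sent now holds the coordinate on which the user terminal would click
```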
In the embodiment of the disclosure, when the voice command indicates to perform the clicking operation on the target page element and the attribute information of the target page element includes the position information of the target page element, the vehicle-mounted terminal may perform the simulated clicking operation on the target page element by transmitting the position information of the target page element to the user terminal. Therefore, clicking on the page element is realized in a voice interaction mode, and convenience and safety of man-machine interaction for clicking on the page element in projection display by a user are improved.
Based on the scheme that the voice command indicates that the content input operation is performed on the target page element and the attribute information of the target page element includes the identification information of the target page element, fig. 7 is a flowchart of a man-machine interaction method in the projection display according to the embodiment of the disclosure. As shown in fig. 7, the human-computer interaction method in the projection display includes:
S701, receiving a voice instruction, wherein the voice instruction indicates a content input operation on a target page element.
S702, searching a target page element in a projection page element in response to a voice instruction, wherein the projection page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal.
The implementation principles and technical effects of S701 to S702 may refer to the foregoing embodiments, and are not repeated.
S703, acquiring the content to be input from the voice command.
In this embodiment, the vehicle-mounted terminal may parse the voice command to obtain the content to be input.
S704, sending the content to be input and the identification information of the target page element to the user terminal so as to instruct the user terminal to input the content to be input to the target page element.
The target page element may be a control supporting input content, such as an input box and a selection box, and the identification information of the target page element may include at least one of a unique number of the target page element, an element name of the target page element, and a pattern of the target page element.
In this embodiment, the vehicle-mounted terminal may send the content to be input and the identification information of the target page element to the user terminal. After receiving the content to be input and the identification information of the target page element, the user terminal can search the target page element in the multiple page elements displayed by the user terminal according to the identification information of the target page element, and then input the content to be input into the target page element.
As an example, if the voice instruction is "input destination A city", it may be determined that the voice instruction indicates that "A city" is to be input into the destination input box: the target page element is the destination input box, and the content to be input is "A city". The vehicle-mounted terminal may transmit the identification information of the destination input box and the input content "A city" to the user terminal so that the user terminal inputs "A city" into the destination input box.
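The example above can be sketched as a toy parse; the prefix-based parsing and the field names (`element_id`, `content`) are illustrative assumptions, not the disclosed voice-understanding method.

```python
def build_input_command(voice_text, prefix="input destination "):
    """Toy parse of a content-input instruction such as
    'input destination A city': recover the content to be input and pair
    it with the identification information of the target page element."""
    if not voice_text.startswith(prefix):
        return None
    return {
        "element_id": "destination_input_box",  # hypothetical identifier
        "content": voice_text[len(prefix):],    # content to be input
    }

cmd = build_input_command("input destination A city")
# cmd is the message the vehicle-mounted terminal would send to the
# user terminal, which then inputs the content into the element
```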
In the embodiment of the disclosure, when the voice command indicates that the content input operation is performed on the target page element and the attribute information of the target page element includes the identification information of the target page element, the vehicle-mounted terminal may transmit the content to be input and the identification information of the target page element to the user terminal, and the user terminal inputs the content to be input to the target page element. Therefore, content input is realized in a voice interaction mode, so that the man-machine interaction convenience and user experience of inputting content to page elements in projection display by a user are improved, and driving safety is improved.
Fig. 8 is a flowchart of a man-machine interaction method in projection display according to an embodiment of the disclosure. As shown in fig. 8, the human-computer interaction method in the projection display includes:
S801, receiving a voice instruction, wherein the voice instruction indicates a target operation on a target page element.
S802, responding to a voice instruction, and searching a target page element in a projection page element, wherein the projection page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal.
The implementation principles and technical effects of S801 to S802 may refer to the foregoing embodiments, and are not repeated.
S803, sending an application program starting instruction to the user terminal when the projected page element does not comprise the target page element, wherein the application program starting instruction indicates the user terminal to start the target application program to which the target page element belongs.
In this embodiment, the projected page element does not include the target page element, which indicates that the target page element is not projected and displayed on the vehicle-mounted terminal, and the target application program to which the target page element belongs is not started. Therefore, under the condition that the projected page element does not comprise the target page element, the vehicle-mounted terminal sends an application program starting instruction to the user terminal, wherein the application program starting instruction comprises identification information of the target application program; after receiving the application program starting instruction, the user terminal can acquire the identification information of the target application program from the application program starting instruction, and start the target application program according to the identification information of the target application program.
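The branch described above — operate on the element if it is projected, otherwise request that the user terminal start the target application — can be sketched as follows; the message shape and the `send` callable are assumptions of this sketch.

```python
def handle_voice_target(target_element_id, projected_ids, target_app_id, send):
    """If the target page element is among the projected page elements,
    proceed with the target operation; otherwise send an application
    start instruction carrying the target application's identification
    information to the user terminal."""
    if target_element_id in projected_ids:
        return "operate"
    send({"type": "start_app", "app_id": target_app_id})
    return "start_requested"

messages = []
result = handle_voice_target("route_button", {"play_button"}, "map_app",
                             messages.append)
```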
In the embodiment of the disclosure, under the condition that the target page element of the voice instruction indicating operation is not projected and displayed on the vehicle-mounted terminal, the vehicle-mounted terminal can control the user terminal to start the target application program to which the target page element belongs, the starting of the application program in the projected and displayed process is realized in a voice interaction mode, the man-machine interaction convenience of the starting of the application program is improved, and the driving safety is improved.
Fig. 9 is a schematic structural diagram of a man-machine interaction device in projection display according to an embodiment of the disclosure. As shown in fig. 9, the human-computer interaction device 900 in the projection display includes:
the receiving unit 901 is configured to receive a voice instruction, where the voice instruction indicates a target operation on a target page element;
the searching unit 902 is configured to respond to a voice instruction, and search a target page element in a projected page element, where the projected page element is a page element that is projected by the user terminal and displayed on the vehicle-mounted terminal;
the control unit 903 is configured to control the user terminal to perform a target operation on the target page element according to attribute information of the target page element, where the projected page element includes the target page element.
Fig. 10 is a schematic structural diagram of a man-machine interaction device in projection display according to an embodiment of the disclosure. As shown in fig. 10, the human-computer interaction device 1000 in projection display includes:
a receiving unit 1001, configured to receive a voice instruction, where the voice instruction indicates performing a target operation on a target page element;
the searching unit 1002 is configured to respond to a voice instruction, and search a target page element in a projected page element, where the projected page element is a page element projected and displayed on the vehicle-mounted terminal by the user terminal;
And a control unit 1003, configured to control the user terminal to perform a target operation on the target page element according to the attribute information of the target page element, in a case where the projected page element includes the target page element.
In some embodiments, the lookup unit 1002 includes: an application judging module 10021, configured to determine whether the target application to which the target page element belongs is an application on the user terminal or an application on the vehicle-mounted terminal; the page element searching module 10022 is configured to search the projected page element for the target page element if the target application is an application on the user terminal and not an application on the vehicle-mounted terminal.
In some embodiments, the application determination module 10021 includes: a first judging sub-module (not shown in the figure) for determining whether the target application is an application on the vehicle-mounted terminal; a list obtaining sub-module (not shown in the figure) for obtaining an application list corresponding to the user terminal if the target application is not an application on the vehicle-mounted terminal, wherein the application list contains attribute information of a plurality of applications on the user terminal; a second judging sub-module (not shown in the figure) for determining whether the application program list contains attribute information of the target application program; an application determining sub-module (not shown in the figure) for determining that the target application is an application on the user terminal if the application list contains attribute information of the target application, and otherwise determining that the target application is not an application on the user terminal.
In some embodiments, the manifest acquisition submodule is specifically configured to: acquiring an application program list from a memory space corresponding to a voice application program; the voice application program is used for executing a voice interaction function on the vehicle-mounted terminal.
In some embodiments, the human-machine interaction device in the projection display further comprises: a manifest acquiring unit 1004, configured to acquire an application manifest from a user terminal after the vehicle-mounted terminal establishes a communication connection with the user terminal; the list storage unit 1005 is configured to store the application list in a memory space corresponding to the voice application.
In some embodiments, the manifest acquisition unit 1004 includes: an application acquisition request sending module (not shown in the figure) for sending an application acquisition request to the user terminal, where the application acquisition request is used for requesting to acquire attribute information of an application whose application type is a target type; and the list receiving module (not shown in the figure) is used for receiving an application list returned by the user terminal in response to the application acquisition request, wherein the application list comprises attribute information of an application with the type of the application being the target type on the user terminal.
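As a non-limiting sketch of the manifest exchange above, the user terminal's response to the application acquisition request might filter its installed applications by the requested target type; the attribute keys `id`, `type`, and `name` are assumptions of this sketch, since the disclosure only speaks of "attribute information":

```python
def build_app_manifest(installed_apps, target_type):
    """Return attribute information for applications whose type matches
    the target type named in the application acquisition request.

    installed_apps: iterable of dicts, each describing one application on
        the user terminal (illustrative keys: "id", "type", "name").
    """
    # Keep only applications of the requested type, e.g. "media" or "map".
    return [app for app in installed_apps if app.get("type") == target_type]
```

The resulting manifest is what the list receiving module would hand to the list storage unit 1005 for caching.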
In some embodiments, the lookup unit 1002 includes: the page element obtaining module 10023 is configured to obtain page element information, where the page element information includes attribute information of a projected page element, and the attribute information of the projected page element includes identification information of the projected page element; the page element matching module 10024 is configured to match the identification information of the target page element with the identification information of the projected page element in the page element information, and determine whether the projected page element includes the target page element.
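The identification-matching step above can be sketched as a linear scan of the cached page element information; the `"id"` key is an assumption of this sketch (the disclosure says only that the attribute information includes identification information):

```python
def find_target_element(target_id, page_element_info):
    """Match the target page element's identification information against
    the identification information of the projected page elements.

    page_element_info: list of dicts, each holding attribute information of
        one projected page element, including an "id" field (assumed name).
    Returns the matching element's attribute information, or None when the
    projected page elements do not include the target page element.
    """
    for element in page_element_info:
        if element.get("id") == target_id:
            return element
    return None
```

A `None` result would correspond to the case where an application start instruction is sent instead (see the application starting unit 1009).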
In some embodiments, the page element acquisition module 10023 includes: a page element obtaining sub-module (not shown in the figure) for obtaining page element information from the memory space corresponding to the voice application program; the voice application program is used for executing a voice interaction function on the vehicle-mounted terminal.
In some embodiments, the human-machine interaction device in the projection display further comprises: a page element updating unit 1006, configured to acquire updated page element information from the user terminal according to an update period when the speech application is in an awake state; the page element storage unit 1007 is configured to store updated page element information into the memory space.
In some embodiments, the page element update unit 1006 includes: an element acquisition request sending module (not shown in the figure), configured to send a page element acquisition request to the user terminal according to an update period when the voice application program is in an awake state, where the page element acquisition request indicates acquiring a page element displayed on a virtual screen of the user terminal; an element receiving module (not shown in the figure), configured to receive page element encapsulation data returned by the user terminal; and a data processing module (not shown in the figure), configured to perform data processing on the page element encapsulation data to obtain updated page element information.
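The request/receive/process cycle above may be sketched as a periodic poll; the JSON encapsulation is an assumption of this sketch, since the disclosure does not fix the format of the page element encapsulation data:

```python
import json
import time

def poll_page_elements(fetch_encapsulated, period_s, is_awake):
    """Fetch and decode page element information from the user terminal once
    per update period, for as long as the voice application stays awake.

    fetch_encapsulated: callable standing in for the page element acquisition
        request; assumed here to return a JSON string.
    is_awake: callable reporting whether the voice application program is in
        an awake state.
    Yields the updated page element information after each fetch.
    """
    while is_awake():
        raw = fetch_encapsulated()   # page element acquisition request
        yield json.loads(raw)        # data processing on the encapsulated data
        time.sleep(period_s)         # wait one update period
```

Each yielded result is what the page element storage unit 1007 would write into the memory space of the voice application.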
In some embodiments, the human-machine interaction device in the projection display further comprises: and the location updating unit 1008 is configured to update location information of a projection screen where the projected page element is located according to the updated page element information.
In some embodiments, the voice instruction indicates a click operation on a target page element, the attribute information of the target page element includes location information of the target page element, and the control unit 1003 includes: the location sending module 10031 is configured to send location information of the target page element to the user terminal, so as to instruct the user terminal to perform a click operation on the target page element based on the location information of the target page element.
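As an illustrative sketch of the click path above (the command shape and the `"x"`/`"y"` keys are assumptions; the disclosure says only that position information is sent), the location sending module's behavior might look like:

```python
def perform_click(send_to_user_terminal, element):
    """Instruct the user terminal to click the target page element based on
    the element's position information.

    send_to_user_terminal: transport callable (e.g. over the projection
        link); element: attribute information including assumed "x"/"y" keys.
    """
    command = {"op": "click", "x": element["x"], "y": element["y"]}
    send_to_user_terminal(command)   # the user terminal performs the click
    return command
```

The user terminal would then dispatch the click at those coordinates on its own screen, and the result appears in the projected picture.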
In some embodiments, the voice instruction indicates a content input operation on a target page element, attribute information of the target page element includes identification information of the target page element, and the control unit 1003 includes: an input content acquisition module 10032, configured to acquire content to be input from the voice command; the content identifier sending module 10033 is configured to send the content to be input and the identifier information of the target page element to the user terminal, so as to instruct the user terminal to input the content to be input to the target page element.
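The content input path above may be sketched as follows; the toy "input ... into ..." parsing is purely an assumption of this sketch, as a real system would take the content to be input from the semantic parse of the voice instruction:

```python
def perform_input(send_to_user_terminal, element_id, voice_text):
    """Extract the content to be input from a recognized voice instruction
    and forward it, with the target page element's identification
    information, to the user terminal.
    """
    # Toy extraction, e.g. "input Beijing into the search box" -> "Beijing".
    content = voice_text.split("input ", 1)[1].split(" into", 1)[0]
    command = {"op": "input", "element_id": element_id, "content": content}
    send_to_user_terminal(command)   # the user terminal fills the element
    return command
```

On receipt, the user terminal would locate the element by its identification information and enter the content.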
In some embodiments, the human-machine interaction device in the projection display further comprises: the application starting unit 1009 is configured to send an application starting instruction to the user terminal when the projected page element does not include the target page element, where the application starting instruction instructs the user terminal to start the target application to which the target page element belongs.
The human-machine interaction device in projection display provided in fig. 9 to 10 can execute the corresponding method embodiments described above; its implementation principles and technical effects are similar and are not repeated here.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the solution provided by any one of the embodiments described above.
According to an embodiment of the present disclosure, there is further provided an autonomous vehicle, the autonomous vehicle including the electronic device provided in the foregoing embodiment, and a processor in the electronic device in the autonomous vehicle being capable of executing the solution provided in any one of the foregoing embodiments.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the solution provided by any one of the above embodiments.
According to an embodiment of the present disclosure, the present disclosure further provides a computer program product, including a computer program stored in a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device performs the solution provided by any one of the embodiments described above.
Fig. 11 is a schematic block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 can also store various programs and data required for the operation of the electronic device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in the electronic device 1100 are connected to the I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, etc.; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108, such as a magnetic disk, optical disk, etc.; and a communication unit 1109 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 1101 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the various methods and processes described above, such as the human-machine interaction method in projection display. For example, in some embodiments, the human-machine interaction method in projection display may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the human-machine interaction method in projection display described above can be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the human-machine interaction method in projection display by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN) and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in cloud computing service systems and overcomes the defects of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system or a server combined with a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (32)

1. A human-machine interaction method in projection display, comprising:
receiving a voice instruction, wherein the voice instruction indicates a target operation on a target page element;
responding to the voice instruction, searching for the target page element in a projected page element, wherein the projected page element is a page element projected and displayed on a vehicle-mounted terminal by a user terminal;
and controlling the user terminal to execute the target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
2. The human-machine interaction method in projection display according to claim 1, wherein the searching for the target page element in the projected page elements comprises:
determining whether a target application program to which the target page element belongs is an application program on the user terminal or an application program on the vehicle-mounted terminal;
and if the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal, searching for the target page element in the projected page element.
3. The human-machine interaction method in projection display according to claim 2, wherein the determining whether the target application program to which the target page element belongs is an application program on the user terminal or an application program on the vehicle-mounted terminal comprises:
determining whether the target application program is an application program on the vehicle-mounted terminal;
if the target application program is not the application program on the vehicle-mounted terminal, acquiring an application program list corresponding to the user terminal, wherein the application program list contains attribute information of a plurality of application programs on the user terminal;
determining whether the application program list contains attribute information of the target application program;
And if the application program list contains the attribute information of the target application program, determining that the target application program is the application program on the user terminal, otherwise, determining that the target application program is not the application program on the user terminal.
4. The human-machine interaction method in projection display according to claim 3, wherein the obtaining an application program list corresponding to the user terminal comprises:
acquiring an application program list from a memory space corresponding to a voice application program;
the voice application program is used for executing the voice interaction function on the vehicle-mounted terminal.
5. The human-machine interaction method in projection display according to claim 4, further comprising:
after the communication connection between the vehicle-mounted terminal and the user terminal is established, acquiring the application program list from the user terminal;
and storing the application program list into a memory space corresponding to the voice application program.
6. The human-machine interaction method in projection display according to claim 5, wherein the obtaining the application program list from the user terminal comprises:
an application program acquisition request is sent to the user terminal, and the application program acquisition request is used for requesting to acquire attribute information of an application program with the application program type being a target type;
And receiving the application program list returned by the user terminal in response to the application program acquisition request, wherein the application program list comprises attribute information of an application program with the application program type of the target type on the user terminal.
7. The human-machine interaction method in projection display according to any one of claims 1 to 6, wherein the searching for the target page element in the projected page element comprises:
acquiring page element information, wherein the page element information comprises attribute information of the projection page element, and the attribute information of the projection page element comprises identification information of the projection page element;
and matching the identification information of the target page element with the identification information of the projection page element in the page element information, and determining whether the projection page element comprises the target page element.
8. The human-machine interaction method in projection display according to claim 7, wherein the acquiring page element information comprises:
acquiring the page element information from a memory space corresponding to a voice application program;
the voice application program is used for executing the voice interaction function on the vehicle-mounted terminal.
9. The human-machine interaction method in projection display according to claim 8, further comprising:
acquiring updated page element information from the user terminal according to an updating period under the condition that the voice application program is in an awake state;
and storing the updated page element information into the memory space.
10. The human-machine interaction method in projection display according to claim 9, wherein the acquiring updated page element information from the user terminal according to an update period in a case where the voice application program is in an awake state comprises:
sending a page element acquisition request to the user terminal according to an update period under the condition that the voice application program is in an awake state, wherein the page element acquisition request indicates to acquire a page element displayed on a virtual screen of the user terminal;
receiving page element encapsulation data returned by the user terminal;
and carrying out data processing on the page element encapsulation data to obtain updated page element information.
11. The human-machine interaction method in projection display according to claim 9, further comprising:
and updating the position information of the projection picture where the projection page element is positioned according to the updated page element information.
12. The human-machine interaction method in projection display according to any one of claims 1 to 6, wherein the voice instruction indicates a click operation on the target page element, the attribute information of the target page element includes position information of the target page element, and the controlling the user terminal to perform the target operation on the target page element according to the attribute information of the target page element comprises:
and sending the position information of the target page element to the user terminal so as to instruct the user terminal to perform clicking operation on the target page element based on the position information of the target page element.
13. The human-machine interaction method in projection display according to any one of claims 1 to 6, wherein the voice instruction indicates a content input operation on the target page element, the attribute information of the target page element includes identification information of the target page element, and the controlling the user terminal to perform the target operation on the target page element according to the attribute information of the target page element comprises:
acquiring content to be input from the voice instruction;
And sending the content to be input and the identification information of the target page element to the user terminal so as to instruct the user terminal to input the content to be input to the target page element.
14. The human-machine interaction method in projection display according to any one of claims 1 to 6, further comprising:
and sending an application program starting instruction to the user terminal under the condition that the projected page element does not comprise the target page element, wherein the application program starting instruction indicates the user terminal to start a target application program to which the target page element belongs.
15. A human-machine interaction device in a projection display, comprising:
the receiving unit is used for receiving a voice instruction, wherein the voice instruction indicates a target operation on a target page element;
the searching unit is used for responding to the voice instruction and searching for the target page element in a projected page element, wherein the projected page element is a page element projected and displayed on a vehicle-mounted terminal by a user terminal;
and the control unit is used for controlling the user terminal to execute the target operation on the target page element according to the attribute information of the target page element under the condition that the projected page element comprises the target page element.
16. The human-machine interaction device in a projection display of claim 15, wherein the lookup unit comprises:
the application program judging module is used for determining whether the target application program to which the target page element belongs is an application program on the user terminal or an application program on the vehicle-mounted terminal;
and the page element searching module is used for searching the target page element in the projection page element if the target application program is an application program on the user terminal and is not an application program on the vehicle-mounted terminal.
17. The human-machine interaction device in a projection display of claim 16, wherein the application judgment module comprises:
the first judging sub-module is used for determining whether the target application program is an application program on the vehicle-mounted terminal;
a list obtaining sub-module, configured to obtain an application list corresponding to the user terminal if the target application is not an application on the vehicle-mounted terminal, where the application list includes attribute information of a plurality of applications on the user terminal;
a second judging sub-module, configured to determine whether the application manifest includes attribute information of the target application;
And the application program determining submodule is used for determining that the target application program is the application program on the user terminal if the application program list contains the attribute information of the target application program, otherwise, determining that the target application program is not the application program on the user terminal.
18. The human-machine interaction device in a projection display of claim 17, wherein the manifest retrieval submodule is specifically configured to:
acquiring an application program list from a memory space corresponding to a voice application program;
the voice application program is used for executing the voice interaction function on the vehicle-mounted terminal.
19. The human-machine interaction device in a projection display of claim 18, further comprising:
the list acquisition unit is used for acquiring the application program list from the user terminal after the vehicle-mounted terminal and the user terminal are in communication connection;
and the list storage unit is used for storing the application program list into a memory space corresponding to the voice application program.
20. The human-machine interaction device in a projection display according to claim 19, wherein the manifest acquisition unit includes:
An application acquisition request sending module, configured to send an application acquisition request to the user terminal, where the application acquisition request is used to request to acquire attribute information of an application whose application type is a target type;
and the list receiving module is used for receiving the application list returned by the user terminal in response to the application acquisition request, wherein the application list comprises attribute information of the application with the application type of the target type on the user terminal.
21. The human-machine interaction device in a projection display of any of claims 15 to 20, wherein the lookup unit comprises:
the page element acquisition module is used for acquiring page element information, wherein the page element information comprises attribute information of the projection page element, and the attribute information of the projection page element comprises identification information of the projection page element;
and the page element matching module is used for matching the identification information of the target page element with the identification information of the projection page element in the page element information and determining whether the projection page element comprises the target page element.
22. The human-machine interaction device in a projection display of claim 21, wherein the page element acquisition module comprises:
the page element acquisition sub-module is used for acquiring the page element information from the memory space corresponding to the voice application program;
the voice application program is used for executing the voice interaction function on the vehicle-mounted terminal.
23. The human-machine interaction device in a projection display of claim 22, further comprising:
the page element updating unit is used for acquiring updated page element information from the user terminal according to an updating period under the condition that the voice application program is in an awakening state;
and the page element storage unit is used for storing the updated page element information into the memory space.
24. The human-machine interaction device in a projection display of claim 23, wherein the page element updating unit comprises:
the element acquisition request sending module is used for sending a page element acquisition request to the user terminal according to an update period under the condition that the voice application program is in an awake state, wherein the page element acquisition request indicates to acquire a page element displayed on a virtual screen of the user terminal;
The element receiving module is used for receiving page element encapsulation data returned by the user terminal;
and the data processing module is used for carrying out data processing on the page element encapsulation data to obtain updated page element information.
25. The human-machine interaction device in a projection display of claim 24, further comprising:
and the position updating unit is used for updating the position information of the projection picture where the projection page element is positioned according to the updated page element information.
26. The man-machine interaction device in a projection display according to any one of claims 15 to 20, wherein the voice instruction instructs a click operation on the target page element, attribute information of the target page element includes position information of the target page element, and the control unit includes:
and the position sending module is used for sending the position information of the target page element to the user terminal so as to instruct the user terminal to perform clicking operation on the target page element based on the position information of the target page element.
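The click path of claim 26 forwards only the target element's position; the user terminal then injects the tap itself. A minimal sketch, in which the message schema and `send_to_user_terminal` are invented for illustration:

```python
# Illustrative sketch of claim 26: the voice instruction maps to a
# click, so the vehicle-mounted terminal sends the target element's
# position to the user terminal, which performs the click there.

def build_click_message(element: dict) -> dict:
    """Compose the message carrying the element's position information."""
    return {"action": "click", "x": element["x"], "y": element["y"]}

sent_messages = []

def send_to_user_terminal(message: dict) -> None:
    sent_messages.append(message)  # placeholder for the projection link

send_to_user_terminal(build_click_message({"id": "play_button", "x": 120, "y": 340}))
```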
27. The man-machine interaction device in a projection display according to any one of claims 15 to 20, wherein the voice instruction instructs a content input operation to the target page element, attribute information of the target page element includes identification information of the target page element, and the control unit includes:
the input content acquisition module is used for acquiring the content to be input from the voice instruction;
and the content identification sending module is used for sending the identification information of the content to be input and the target page element to the user terminal so as to instruct the user terminal to input the content to be input to the target page element.
28. The human-machine interaction device in a projection display of any of claims 15 to 20, further comprising:
and the application starting unit is used for sending an application program starting instruction to the user terminal under the condition that the projected page element does not comprise the target page element, and the application program starting instruction indicates the user terminal to start the target application program to which the target page element belongs.
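The fallback in claim 28 fires only when the target element is absent from the projection. A sketch under the assumption that the vehicle-mounted terminal can map a target element to the application program it belongs to (the mapping table and package name below are invented):

```python
# Illustrative sketch of claim 28: if the projected page elements do
# not include the target, send an application program starting
# instruction so the user terminal launches the target application.

ELEMENT_TO_APP = {"play_button": "com.example.music"}  # assumed mapping

def handle_missing_element(target_id: str, projected_ids: set):
    """Return an app-start instruction when the target is not projected."""
    if target_id in projected_ids:
        return None  # element is visible; the normal control path applies
    app = ELEMENT_TO_APP.get(target_id)
    return {"action": "start_app", "package": app} if app else None

instruction = handle_missing_element("play_button", {"search_box"})
```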
29. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of human-machine interaction in a projection display as claimed in any one of claims 1 to 14.
30. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the human-machine interaction method in a projection display of any one of claims 1 to 14.
31. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the human-machine interaction method in a projection display as claimed in any one of claims 1 to 14.
32. An intelligent driving vehicle comprising the electronic device of claim 29.
CN202310101724.4A 2023-01-29 2023-01-29 Man-machine interaction method, device, equipment and storage medium in projection display Pending CN116028009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310101724.4A CN116028009A (en) 2023-01-29 2023-01-29 Man-machine interaction method, device, equipment and storage medium in projection display


Publications (1)

Publication Number Publication Date
CN116028009A true CN116028009A (en) 2023-04-28

Family

ID=86072370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310101724.4A Pending CN116028009A (en) 2023-01-29 2023-01-29 Man-machine interaction method, device, equipment and storage medium in projection display

Country Status (1)

Country Link
CN (1) CN116028009A (en)

Similar Documents

Publication Publication Date Title
EP4011674A2 (en) Method and apparatus for controlling display in a screen projection scenario, device and program product
WO2022089594A1 (en) Information display method and apparatus, and electronic device
EP3865996A2 (en) Method and apparatus for testing response speed of on-board equipment, device and storage medium
CN111797184A (en) Information display method, device, equipment and medium
WO2022022729A1 (en) Rendering control method, device and system
CN112506465B (en) Method and device for switching scenes in panoramic roaming
KR20210042272A (en) Intelligent response method and device, equipment, storage medium and computer product
CN113448668B (en) Method and device for skipping popup window and electronic equipment
CN115762503A (en) Vehicle-mounted voice system, vehicle-mounted voice autonomous learning method, device and medium
CN116028009A (en) Man-machine interaction method, device, equipment and storage medium in projection display
CN113641439B (en) Text recognition and display method, device, electronic equipment and medium
CN115797660A (en) Image detection method, image detection device, electronic equipment and storage medium
CN114428917A (en) Map-based information sharing method, map-based information sharing device, electronic equipment and medium
CN112492381B (en) Information display method and device and electronic equipment
CN112671970B (en) Control method and control device for mobile equipment and cloud mobile phone, electronic equipment, mobile equipment, cloud server and medium
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN110019583B (en) Environment display method and system based on virtual reality
CN117742862A (en) Application page display method and device, electronic equipment and storage medium
CN113051491A (en) Method, apparatus, storage medium, and program product for map data processing
CN116631396A (en) Control display method and device, electronic equipment and medium
CN116521113A (en) Multi-screen control method and device and vehicle
CN114237479A (en) Application program control method and device and electronic equipment
CN115761093A (en) Rendering method, rendering device, electronic equipment and storage medium
CN114432698A (en) Game operation execution method and device based on NFC and electronic equipment
CN116978375A (en) User interface control method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination