WO2023103948A1 - Display method and electronic device - Google Patents

Display method and electronic device

Info

Publication number
WO2023103948A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
electronic device
interface
service
instruction
Prior art date
Application number
PCT/CN2022/136529
Other languages
English (en)
French (fr)
Inventor
Tang Ming (汤明)
Zhang Teng (张腾)
Kang Yu (康禹)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Priority claimed from CN202210093485.8A
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023103948A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • the present application relates to the field of computer technology, and in particular, to a display method and an electronic device.
  • the embodiment of the present application discloses a display method and an electronic device, which can simplify the interaction mode, reduce user operations, and improve efficiency in a multi-device interconnection scenario.
  • an embodiment of the present application provides a display method, which is applied to a first device, and the first device is connected to a second device.
  • the method includes: displaying a first interface, where the first interface includes first information and the first information is related to a first service; receiving a first user operation; in response to the first user operation, identifying the first interface to determine intent information, where the intent information indicates execution of a first instruction and the first instruction is used to implement the first service; and sending the intent information to the second device, where the intent information is used by the second device to execute the first instruction and generate second information, and the second information is used by the second device to display a second interface.
  • in some embodiments, the first instruction is an instruction obtained by parsing the intent information; in other embodiments, the first instruction is an instruction included in the intent information.
  • in some embodiments, the second information is used by the second device to display the second interface and play first audio; in other embodiments, the second information is used by the second device to play the first audio, and the second device does not display the second interface.
  • in this way, when the first device receives the first user operation, it can identify the user's intent based on the currently displayed first interface and have the second device execute the first instruction, where the first instruction is used to realize the first service corresponding to the recognized intent information. The user does not need to manually operate the first device or the second device to trigger the realization of the first service, which reduces user operations and makes interaction in multi-device interconnection scenarios more efficient and convenient.
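As a non-authoritative sketch of this flow on the first device (the application defines no code-level API, so the helper names, intent fields, and transport below are all assumptions):

```python
import json

def identify_interface(interface_text: str) -> dict:
    """Toy stand-in for interface recognition plus intent recognition.

    A real implementation would analyze the layer structure and text of the
    displayed interface (e.g., with NLU); this stub only shows the data flow.
    """
    if "Beijing Station" in interface_text:
        return {"intent": "NAVIGATION", "destination": "Beijing Station"}
    return {"intent": "NONE"}

def on_first_user_operation(interface_text: str, send_to_second_device) -> None:
    """React to the first user operation (e.g., a shake) on the first device."""
    intent_info = identify_interface(interface_text)   # determine intent info
    send_to_second_device(json.dumps(intent_info).encode("utf-8"))

# Example: on_first_user_operation("see here [Beijing Station]", print)
```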
  • in a possible implementation, the first interface further includes third information, and the third information is related to a second service; identifying the first interface to determine the intent information includes: identifying the first information to determine fourth information and identifying the third information to determine fifth information, where the fourth information indicates execution of the first instruction, the fifth information indicates execution of a second instruction, and the second instruction is used to realize the second service; and determining, based on a first preset rule, the intent information as the fourth information from among the fourth information and the fifth information, where the first preset rule includes at least one of the following: the device type of the second device is a preset device type, the services supported by the second device include the first service, and the priority of the first service is higher than that of the second service.
  • the first information and the third information are instant messaging messages, and the first preset rule includes that the receiving time of the first information is later than the receiving time of the third information.
  • the first device may also use the first preset rule to determine the intent information that is more in line with the user's needs in the current scene, so as to further improve the accuracy of the interaction and improve the user experience.
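For illustration only, the first preset rule could be applied as in the following sketch (all field names are assumptions; the receive-time clause applies to the instant-messaging case described above):

```python
def select_intent(fourth: dict, fifth: dict, device: dict, preset_types: set) -> dict:
    """Pick the fourth information over the fifth when any clause of the
    (assumed) first preset rule holds."""
    rule_holds = (
        device["type"] in preset_types                         # preset device type
        or fourth["service"] in device["supported_services"]   # service supported
        or fourth["priority"] > fifth["priority"]              # higher priority
        # Instant messaging case: prefer the later-received message's intent.
        or fourth.get("received_at", 0) > fifth.get("received_at", 0)
    )
    return fourth if rule_holds else fifth
```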
  • in a possible implementation, the first information is location information, the first service is a navigation service, the second service is different from the first service, the first preset rule includes that the device type of the second device is the preset device type, and the preset device type is a vehicle-mounted device.
  • in a possible implementation, the first information is video information, the first service is a video playback service, the second service is different from the first service, the first preset rule includes that the device type of the second device is the preset device type, and the preset device type includes a smart TV and a smart screen.
  • in a possible implementation, the first information is information indicating a first location, the first service is a navigation service, and the second information is display information generated by performing a navigation operation on the first location.
  • in a possible implementation, the first device is a smart phone and the second device is a vehicle-mounted device. In this way, the navigation service for the location information can be implemented through the second device, without the user manually inputting the location information on the second device or manually triggering the navigation operation, making interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the first information is information indicating a first video, the first service is a video playback service, and the second information is display information generated by playing the first video.
  • in a possible implementation, the first device is a smart phone and the second device is a smart TV. In this way, the service of playing the video information can be realized through the second device, without the user manually searching for the video information on the second device or manually triggering the video playback service, making interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the first information is information indicating a first recipe, the first service is a cooking service, and the second information is display information generated by realizing the cooking service corresponding to the first recipe.
  • in a possible implementation, the first device is a smart phone and the second device is a smart cooking machine. In this way, the cooking service corresponding to the recipe information can be realized through the second device, without the user manually searching for the recipe information on the second device or manually triggering the cooking service, making interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the first information is information indicating a first question and the answer to the first question, the first service is a test paper generation service, and the second interface includes the first question but does not include the answer to the first question.
  • in a possible implementation, the first device is a smart phone and the second device is a tablet computer or a learning machine. In this way, when the first device displays the first interface including questions and answers and receives the first user operation, the second device can display the questions but not the answers, so that a child can use the second device to practice the questions without a parent manually searching for the questions on the second device and manually triggering the test paper generation service. The interaction method is convenient and accurate, and can well meet the needs of parents and children.
  • the above-mentioned first user operation is a shake operation, a swing operation, a knuckle tapping operation, a knuckle sliding operation, a multi-finger tapping operation, or a multi-finger sliding operation.
  • in this way, the first user operation is simple and convenient; the user no longer needs to perform cumbersome operations to trigger the realization of the first service, so the interaction threshold is low and the method is easier to use.
  • the present application provides yet another display method, which is applied to a first device, and the first device is connected to a second device.
  • the method includes: displaying a first interface, where the first interface includes first information and the first information is related to a first service; receiving a first user operation; in response to the first user operation, identifying the first interface to determine intent information; executing a first instruction according to the intent information to generate second information, where the first instruction is used to realize the first service; and sending the second information to the second device, where the second information is used by the second device to display a second interface.
  • in some embodiments, the first instruction is an instruction obtained by parsing the intent information; in other embodiments, the first instruction is an instruction included in the intent information.
  • in some embodiments, the second information is used by the second device to display the second interface and play first audio; in other embodiments, the second information is used by the second device to play the first audio, and the second device does not display the second interface.
  • in this way, when the first device receives the first user operation, it can identify the user's intent based on the currently displayed first interface, execute the first instruction indicated by the identified intent information, and output, through the second device, the multimedia data generated by executing the first instruction. This can be understood as realizing the first service corresponding to the first instruction through the second device, without the user manually operating the first device or the second device to trigger the realization of the first service, which reduces user operations and makes interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the first interface further includes third information, and the third information is related to a second service; identifying the first interface to determine the intent information includes: identifying the first information to determine fourth information and identifying the third information to determine fifth information, where the fourth information indicates execution of the first instruction, the fifth information indicates execution of a second instruction, and the second instruction is used to realize the second service; and determining, based on a first preset rule, the intent information as the fourth information from among the fourth information and the fifth information, where the first preset rule includes that the device type of the second device is a preset device type and/or that the priority of the first service is higher than the priority of the second service.
  • the first information and the third information are instant messaging messages, and the first preset rule includes that the receiving time of the first information is later than the receiving time of the third information.
  • the first device may also use the first preset rule to determine the intent information that is more in line with the user's needs in the current scene, so as to further improve the accuracy of the interaction and improve the user experience.
  • in a possible implementation, the first information is location information, the first service is a navigation service, the second service is different from the first service, the first preset rule includes that the device type of the second device is the preset device type, and the preset device type is a vehicle-mounted device.
  • in a possible implementation, the first information is information indicating a first location, the first service is a navigation service, and the second information is display information generated by performing a navigation operation on the first location.
  • in a possible implementation, the first information is information indicating a first video, the first service is a video playback service, and the second information is display information generated by playing the first video.
  • in a possible implementation, the first information is information indicating a first recipe, the first service is a cooking service, and the second information is display information generated by realizing the cooking service corresponding to the first recipe.
  • in a possible implementation, the first information is information indicating a first question and the answer to the first question, the first service is a test paper generation service, and the second interface includes the first question but does not include the answer to the first question.
  • the above-mentioned first user operation is a shake operation, a swing operation, a knuckle tapping operation, a knuckle sliding operation, a multi-finger tapping operation, or a multi-finger sliding operation.
  • the present application provides yet another display method, which is applied to a second device, and the second device is connected to the first device.
  • the method includes: receiving intent information sent by the first device, where the intent information is determined by the first device by identifying a displayed first interface when the first device receives a first user operation, the first interface includes first information, and the first information is related to a first service; executing a first instruction according to the intent information to generate second information, where the first instruction is used to realize the first service; and displaying a second interface according to the second information.
  • in some embodiments, the first instruction is an instruction obtained by parsing the intent information; in other embodiments, the first instruction is an instruction included in the intent information.
  • in some embodiments, the second information is used by the second device to display the second interface and play first audio; in other embodiments, the second information is used by the second device to play the first audio, and the second device does not display the second interface.
  • in this way, when the first device receives the first user operation, it can identify the user's intent based on the currently displayed first interface and send the identified intent information to the second device. The second device can execute the first instruction indicated by the intent information to realize the first service, without the user manually operating the first device or the second device to trigger the realization of the first service, which reduces user operations and makes interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the first information is information indicating a first location, the first service is a navigation service, and the second information is display information generated by performing a navigation operation on the first location.
  • in a possible implementation, the first information is information indicating a first video, the first service is a video playback service, and the second information is display information generated by playing the first video.
  • in a possible implementation, the first information is information indicating a first recipe, the first service is a cooking service, and the second information is display information generated by realizing the cooking service corresponding to the first recipe.
  • in a possible implementation, the first information is information indicating a first question and the answer to the first question, the first service is a test paper generation service, and the second interface includes the first question but does not include the answer to the first question.
  • the above-mentioned first user operation is a shake operation, a swing operation, a knuckle tapping operation, a knuckle sliding operation, a multi-finger tapping operation, or a multi-finger sliding operation.
  • the present application provides yet another display method, which is applied to a second device, and the second device is connected to the first device.
  • the method includes: receiving first information sent by the first device, where the first information is information generated by executing a first instruction, the first instruction is used to realize a first service, the first instruction is an instruction whose execution is indicated by intent information, and the intent information is determined by the first device by identifying a displayed first interface when the first device receives a first user operation, the first interface includes second information, and the second information is related to the first service; and displaying a second interface according to the first information.
  • in some embodiments, the first instruction is an instruction obtained by parsing the intent information; in other embodiments, the first instruction is an instruction included in the intent information.
  • in some embodiments, the second information is used by the second device to display the second interface and play first audio; in other embodiments, the second information is used by the second device to play the first audio, and the second device does not display the second interface.
  • in this way, when the first device receives the first user operation, it can identify the user's intent based on the currently displayed first interface, execute the first instruction indicated by the identified intent information, and output, through the second device, the multimedia data generated by executing the first instruction. This can be understood as realizing the first service corresponding to the first instruction through the second device, without the user manually operating the first device or the second device to trigger the realization of the first service, which reduces user operations and makes interaction in multi-device interconnection scenarios more efficient and convenient.
  • in a possible implementation, the second information is information indicating a first location, the first service is a navigation service, and the first information is display information generated by performing a navigation operation on the first location.
  • in a possible implementation, the second information is information indicating a first video, the first service is a video playback service, and the first information is display information generated by playing the first video.
  • in a possible implementation, the second information is information indicating a first recipe, the first service is a cooking service, and the first information is display information generated by realizing the cooking service corresponding to the first recipe.
  • in a possible implementation, the second information is information indicating a first question and the answer to the first question, the first service is a test paper generation service, and the second interface includes the first question but does not include the answer to the first question.
  • the above-mentioned first user operation is a shake operation, a swing operation, a knuckle tapping operation, a knuckle sliding operation, a multi-finger tapping operation, or a multi-finger sliding operation.
  • the embodiment of the present application provides an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled with the one or more processors and are used to store computer program code, where the computer program code includes computer instructions; when the one or more processors execute the computer instructions, the electronic device is caused to perform the display method in any possible implementation manner of any one of the above aspects.
  • the embodiment of the present application provides a computer storage medium storing a computer program; when the computer program is executed by a processor, the display method in any possible implementation manner of any one of the above aspects is implemented.
  • an embodiment of the present application provides a computer program product, which, when running on an electronic device, causes the electronic device to execute the display method in any possible implementation manner of any one of the above aspects.
  • the embodiments of the present application provide an electronic device, where the electronic device includes an apparatus for executing the method introduced in any embodiment of the present application.
  • the aforementioned electronic device is, for example, a chip.
  • FIG. 1A is a schematic structural diagram of a communication system 10 provided by an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of another communication system 10 provided by an embodiment of the present application.
  • FIG. 1C is a schematic structural diagram of another communication system 10 provided by an embodiment of the present application.
  • FIG. 2A is a schematic diagram of a hardware structure of an electronic device 100 provided in an embodiment of the present application.
  • FIG. 2B is a schematic diagram of a hardware structure of an electronic device 200 provided in an embodiment of the present application.
  • FIG. 2C is a schematic diagram of a hardware structure of a network device 300 provided in an embodiment of the present application.
  • FIG. 2D is a schematic diagram of a software architecture of an electronic device 100 provided in an embodiment of the present application.
  • FIG. 3A to FIG. 3C are schematic diagrams of some user interface embodiments provided by embodiments of the present application.
  • FIG. 4A and FIG. 4B are schematic diagrams of some other user interface embodiments provided by embodiments of the present application.
  • FIG. 4C is a schematic diagram of another user interface embodiment provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another user interface embodiment provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another user interface embodiment provided by an embodiment of the present application.
  • FIG. 7A is a schematic diagram of another user interface embodiment provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of another user interface embodiment provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a user operation provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a display method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another display method provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of another display method provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • the embodiments of the present application may be applied to a scenario where multiple devices are connected and communicate with each other, for example, a distributed scenario.
  • users can use multiple devices at the same time.
  • the services of these multiple devices can be connected, such as casting a video on a smartphone to a smart TV for playback.
  • electronic devices in this scenario lack simple and efficient interaction methods, and user operations are cumbersome and complicated. Specific examples are as follows:
  • Example 1: In a scenario where a smartphone is connected to an on-board computer, if the user receives a communication message including location information on the smartphone, the user needs to open the map application on the on-board computer and set the destination to the location indicated by the location information in order to navigate based on the location information. This is cumbersome, and if the user is driving it also affects driving safety, so the user experience is poor.
  • Example 2: In a scenario where a smartphone is connected to a smart TV, if the user sees information about a movie (such as an introduction or a movie review) on the smartphone and wants to watch it on the smart TV, the user needs to search for the movie on the smart TV to play it, or the user needs to first open the video application and the movie's playback interface on the smartphone, then operate the screen-projection control and select the device to project to (that is, the smart TV) in order to project the movie to the smart TV for watching. The operation is cumbersome and the interaction efficiency is low.
  • Example 3: In a scenario where a smartphone is connected to a smart cooking machine, if the user sees information about a certain recipe on the smartphone and wants to use the smart cooking machine to cook the corresponding dish, the user needs to search for the recipe on the smart cooking machine before cooking. The operation is cumbersome and the interaction efficiency is low.
  • Example 4: In a scenario where a smartphone is connected to a tablet computer, a child can use the tablet computer or a learning machine to study while a parent uses the smartphone to find related exercises; finding those exercises again on the tablet computer or learning machine is cumbersome and the interaction efficiency is low.
  • the embodiment of the present application provides a display method.
  • in response to a user operation, the first device can identify the currently displayed first interface and determine intent information, and the first device can implement the service indicated by the intent information through the second device. The user does not need to manually trigger the second device to realize the service indicated by the intent information, which provides an efficient and convenient interaction method for multi-device interconnection scenarios, reduces user operations, and improves user experience.
  • for example, a smart phone can, in response to a shake operation (the user operation), identify a chat interface (the first interface) including a location card (a message showing a geographic location in the form of a card) and determine intent information, where the intent information indicates a navigation service for navigating to the place represented by the location card; the intent information may be obtained based on the location card. The smart phone can then instruct the on-board computer to execute the navigation service based on the intent information, optionally by performing the operation of setting the location represented by the location card as the destination in the map application and navigating.
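If the intent information in this example were encoded in json format (one of the formats the description allows), it might look like the following Python literal; the field names are assumptions for illustration, not a format defined by the application:

```python
# Hypothetical encoding of the navigation intent information.
intent_info = {
    "intent": "NAVIGATION",
    "slots": {"destination": "Beijing Station"},  # taken from the location card
    "operation": "set destination in the map application and navigate",
}
```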
  • a communication system 10 involved in the embodiment of the present application is introduced below.
  • FIG. 1A exemplarily shows a schematic structural diagram of a communication system 10 provided by an embodiment of the present application.
  • the communication system 10 may include an electronic device 100, an electronic device 200, and a network device 300, wherein:
  • the electronic device 100 can be connected to at least one electronic device 200 in a wired and/or wireless manner.
  • the wired manner includes, for example, high definition multimedia interface (HDMI), universal serial bus (USB), coaxial cable, optical fiber, etc. Wireless manners include, for example, Bluetooth, wireless fidelity (Wi-Fi), near field communication (NFC), ultra-wideband (UWB), and the like.
  • in some embodiments, the electronic device 100 and the electronic device 200 can communicate directly over the established connection (such as Bluetooth or Wi-Fi). In this case, the rate of information transmission between the electronic device 100 and the electronic device 200 is relatively high, and more information can be transmitted.
  • the electronic device 100 may be connected to the network device 300 in a wired and/or wireless manner, and the network device 300 may be connected to at least one electronic device 200 in a wired and/or wireless manner.
  • the electronic device 100 can communicate with the electronic device 200 through the network device 300.
  • for example, the electronic device 100 is a smart phone, the electronic device 200 is a car, and the network device 300 is a cloud server providing the HUAWEI HiCar function; the electronic device 100 and the electronic device 200 can connect and project screens through the HUAWEI HiCar function.
  • in some embodiments, although the electronic device 100 is not currently connected to the electronic device 200, it can establish a connection with the electronic device 200 and then communicate; that is, the electronic device 200 can be understood as an electronic device that is not connected to the electronic device 100 but is capable of communicating with it.
  • the electronic device 100 may store connection information (such as a Bluetooth address and password, or a Wi-Fi name and password) of at least one electronic device 200, and connect to at least one electronic device 200 through the connection information (for example, to the electronic device corresponding to the Bluetooth address). Optionally, the connection information of the electronic device 200 may have been obtained when the electronic device 100 previously connected to the electronic device 200; optionally, the electronic device 100 may obtain the connection information of the electronic device 200 through the network device 300. For example, after the electronic device 100 logs in to a certain account, it can obtain the connection information of electronic devices 200 that previously logged in to the same account. This is not limited here.
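A minimal sketch of storing and reusing such connection information (the record layout and the connect behavior are assumptions for illustration):

```python
# Stored connection records for previously paired devices (illustrative only).
stored_devices = [
    {"name": "car", "bluetooth_addr": "AA:BB:CC:DD:EE:FF", "password": "****"},
    {"name": "tv", "wifi_ssid": "HomeAP", "password": "****"},
]

def reconnect(device: dict) -> None:
    """Reconnect using whichever credentials were stored for the device."""
    if "bluetooth_addr" in device:
        print(f"connecting via Bluetooth to {device['bluetooth_addr']}")
    elif "wifi_ssid" in device:
        print(f"joining Wi-Fi network {device['wifi_ssid']}")
```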
  • the electronic device and the network device shown in FIG. 1A are only examples, and the specific device form is not limited.
  • the electronic device 100 may be a mobile terminal such as a mobile phone, a tablet computer, a handheld computer, or a personal digital assistant (PDA); a smart home device such as a smart TV, a smart camera, or a smart cooking machine; a wearable device such as a smart bracelet, a smart watch, or smart glasses; or another device such as a desktop computer, a laptop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a smart screen, or a learning machine.
  • the network device 300 may include at least one server.
  • any server may be a hardware server.
  • any server may be a cloud server.
  • FIG. 1B exemplarily shows a schematic structural diagram of another communication system 10 provided by an embodiment of the present application.
  • the electronic device 100 in the communication system 10 may include an interface analysis module, an intent analysis module, and an intent trigger module, and the electronic device 200 in the communication system 10 may include an output module, wherein:
  • when the electronic device 100 detects a user operation, for example through the sensor module 180 shown in FIG. 2A, it may report an event corresponding to the user operation (which may be called a trigger event) to the interface analysis module.
  • when the interface analysis module of the electronic device 100 receives the trigger event, it can identify the user interface displayed by the electronic device 100 and obtain an interface recognition result. In some embodiments, the interface analysis module can use keyword extraction, natural language understanding (NLU), and other means to identify and analyze the layer structure and text of the current interface.
  • the interface recognition result includes, for example, text information, and structural information representing structures in the user interface.
  • the interface recognition result is, for example, data in xml format, data in json format, or data in other existing formats, but is not limited thereto, and may also be data in a custom format.
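As a purely illustrative example of the json variant, an interface recognition result carrying both structure information and text information might look like the following; nothing in this layout is mandated by the application:

```python
# Hypothetical interface recognition result for a chat interface containing a
# location card and a text box (the example discussed later for FIG. 1B).
interface_recognition_result = {
    "structure": [
        {"type": "location_card", "text": "Beijing Station"},
        {"type": "text_box", "text": "see here"},
    ],
}
```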
  • the interface parsing module may send the interface recognition result to the intent parsing module.
  • the interface parsing module can identify some pages in the displayed user interface, and obtain an interface identification result.
  • for example, the user interface displayed by the electronic device 100 is a split-screen interface. It is assumed that the split-screen interface includes a page of a first application and a page of a second application, and that the application most recently operated by the user is the first application; the interface parsing module can then identify the page of the first application and obtain a corresponding interface recognition result. Not limited thereto, the interface parsing module may instead identify the page of an application selected by the user, etc.; the present application does not limit the manner of determining the information to be identified in the user interface.
  • the intent analysis module of the electronic device 100 may perform intent recognition based on the interface recognition result to obtain intent information, where the intent information may be specific data obtained by performing interface recognition and intent recognition on the user interface displayed by the electronic device 100. The intent information is, for example, data in xml format, data in json format, or data in another existing format, but is not limited thereto and may also be data in a custom format.
  • the intention information indicates a goal to be achieved.
  • optionally, the service whose realization is indicated by the intent information corresponds to part of the service information in the user interface displayed by the electronic device 100.
  • for example, the interface recognition result includes first structure information and first text information; the intent parsing module can identify the first structure information to determine the interface structure it indicates, and then obtain the intent information based on the first text information and the determined interface structure. For example, the intent parsing module recognizes the interface structure of a location card and the interface structure of a text box; based on the interface structure of the location card, it determines that the type of the text information "Beijing Station" included in the location card is address information, and based on the interface structure of the text box, it determines that the type of the text information "see here" included in the text box is chat information; it then obtains, based on the address information "Beijing Station" and the chat information "see here", intent information indicating navigation to the geographic location "Beijing Station".
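The derivation just described could be sketched as follows, continuing the hypothetical recognition-result layout from above (none of these names come from the application):

```python
def parse_intent(recognition_result: dict):
    """Derive intent information from an interface recognition result.

    A location card yields address information and a text box yields chat
    information; together they yield a navigation intent.
    """
    address = chat = None
    for element in recognition_result["structure"]:
        if element["type"] == "location_card":
            address = element["text"]   # e.g. "Beijing Station" -> address info
        elif element["type"] == "text_box":
            chat = element["text"]      # e.g. "see here" -> chat info
    if address and chat:
        return {"intent": "NAVIGATION", "destination": address}
    return None
```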
  • the intent parsing module can send intent information to the intent triggering module.
  • in some embodiments, the intent parsing module can further determine whether the intent information is valid, and only when the intent information is confirmed to be valid does the intent parsing module send the intent information to the intent triggering module. For example, when the intent information indicates navigation to the geographic location "Beijing Station", it is judged whether the address information "Beijing Station" in the intent information corresponds to a real and valid geographic location on the map, and the intent information is sent only if the judgment result is yes. For another example, when the intent information indicates playing a movie named "movie 1", it is judged whether the video information "movie 1" in the intent information corresponds to a real video that can be played; if the judgment result is yes, the intent parsing module sends the intent information to the intent triggering module.
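A sketch of such a validity check, with the lookup services stubbed out (both lookups and the intent fields are assumptions):

```python
def intent_is_valid(intent_info: dict, map_lookup, video_lookup) -> bool:
    """Forward intent information only when its key slot can be verified."""
    if intent_info["intent"] == "NAVIGATION":
        # Does the address correspond to a real, valid location on the map?
        return map_lookup(intent_info["destination"]) is not None
    if intent_info["intent"] == "PLAY_VIDEO":
        # Does the title correspond to a real video that can be played?
        return video_lookup(intent_info["title"]) is not None
    return False
```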
  • the intent triggering module of the electronic device 100 may perform the intent operation based on the intent information.
  • the intent triggering module may analyze the intent information to obtain a specific instruction, and invoke the instruction to execute the intent operation.
  • in some embodiments, the intent information indicates the goal to be achieved, and the intent operation may correspond to the user operations that the user would otherwise need to perform to achieve that goal; that is, without this method the user would have to perform multiple user operations to control the electronic device 100 to perform the intent operation.
  • the intent triggering module can invoke the corresponding business module to perform the intended operation.
  • for example, the intent triggering module can invoke the navigation module of the map application to execute the intent operation: set the destination to the geographic location "Beijing Station" and navigate.
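Dispatching from intent information to a business module could be sketched as a simple registry (again, all names here are illustrative assumptions):

```python
def navigate(destination: str) -> str:
    """Stand-in for the map application's navigation module."""
    return f"map app: destination set to {destination}, navigation started"

# Registry mapping intent types to the business modules that realize them.
BUSINESS_MODULES = {
    "NAVIGATION": lambda info: navigate(info["destination"]),
}

def trigger_intent(intent_info: dict) -> str:
    """Invoke the business module corresponding to the intent information."""
    return BUSINESS_MODULES[intent_info["intent"]](intent_info)
```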
  • after the intent triggering module executes the intent operation, it can send the corresponding multimedia data (such as the audio stream and video stream corresponding to the navigation service) to the output module of the electronic device 200.
  • after the output module of the electronic device 200 receives the multimedia data sent by the intent triggering module of the electronic device 100, it can output the multimedia data, for example, play the audio stream corresponding to the navigation service and display the video stream corresponding to the navigation service.
  • the interface analysis module of the electronic device 100 may include an interface analysis model, and use the interface analysis model to identify a displayed user interface and obtain an interface identification result.
  • for example, the interface analysis module may use the content of the user interface displayed by the electronic device 100 as the input of the interface analysis model to obtain the output interface recognition result; for instance, interface content including address information in text form is used as input to obtain the output text structure and/or the address information, or interface content including address information in the form of a card (such as the above-mentioned location card) is used as input to obtain the output card structure and/or the address information.
  • the intent parsing module of the electronic device 100 may include an intent parsing model, and use the intent parsing model to perform intent recognition.
  • the intent parsing module may use the interface recognition result as an input of the intent parsing model to obtain output intent information.
  • the interface parsing module and the intent parsing module of the electronic device 100 may be set in the same fusion module, the fusion module may include a fusion model, and the intent information is determined based on the displayed user interface through the fusion model.
  • the fusion module may use the displayed interface content as an input of the fusion model to obtain the output intent information.
  • the interface content including address information is used as an input of the fusion model to obtain output intention information, and the intention information indicates that navigation is performed on the place represented by the address information.
  • the electronic device 100 can train the interface parsing model and/or the intent parsing model by itself, or the electronic device 100 can train the fusion model by itself.
  • in some embodiments, the network device 300 in the communication system 10 can train the interface parsing model and/or the intent parsing model and send it to the electronic device 100, or the network device 300 can train the fusion model and send it to the electronic device 100.
  • this application does not limit the manner in which the network device 300 sends the interface parsing model and/or intent parsing model, or the fusion model, to the electronic device 100. For example, the electronic device 100 may send a request message to the network device 300 after receiving a user operation; for another example, the network device 300 may send the above-mentioned model to the electronic device 100 every preset period of time, such as once a week; for another example, when the version of the model is updated, the network device 300 may send the updated model to the electronic device 100.
  • in some embodiments, the electronic device 100 or the network device 300 can use the content of the user interface as input and the structure and text included in the user interface as output to train the interface parsing model; the examples of input and output are similar to the above example of using the interface parsing model to identify the displayed user interface, and will not be repeated here.
  • the electronic device 100 or the network device 300 may use the interface recognition result as an input and the corresponding intention operation and/or intention information as an output to train the intention parsing model.
  • the electronic device 100 or the network device 300 may use the content of the user interface as input, and use the corresponding intention operation and/or intention information as output to train the fusion model.
  • the content of the user interface including address information is used as input, and the intention operation (that is, setting the place represented by the address information as a destination and performing navigation) is used as output to train the fusion model.
  • the content of the user interface that does not include address information is used as input, and the corresponding user operation (such as the operation performed by the user when the electronic device 100 displays the user interface) is used as output to train the fusion model.
  • optionally, the fusion model is trained with the content of a user interface that does not include address information as input and information indicating no navigation intent as output.
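Putting the three training cases just described together, the training pairs for the fusion model might be represented as follows (the string representations are purely illustrative):

```python
# (input: user interface content, output: label) pairs for the fusion model.
training_data = [
    # UI content containing address information -> the intent operation.
    ("chat page with location card 'Beijing Station'",
     "set 'Beijing Station' as destination and navigate"),
    # UI content without address information -> the observed user operation.
    ("settings page", "user opened the Bluetooth submenu"),
    # UI content without address information -> an explicit no-intent label.
    ("news article page", "NO_NAVIGATION_INTENT"),
]
```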
  • in some embodiments, at least one of the interface analysis module, the intent analysis module, and the intent triggering module may not be a module included in the electronic device 100 but a module included in the electronic device 200; for example, the intent triggering module is a module included in the electronic device 200.
  • as shown in FIG. 1C, the intent triggering module of the electronic device 200 executes the intent operation according to the intent information and sends the multimedia data corresponding to the intent operation to the output module, and the output module outputs the multimedia data. Other descriptions are similar to those in FIG. 1B and will not be repeated here.
  • the electronic device 100, the electronic device 200, and the network device 300 involved in the embodiment of the present application are introduced below.
  • FIG. 2A exemplarily shows a schematic diagram of a hardware structure of the electronic device 100 .
  • the electronic device 100 shown in FIG. 2A is only an example; the electronic device 100 may have more or fewer components than those shown in FIG. 2A, two or more components may be combined, or a different component configuration may be used.
  • the various components shown in FIG. 2A may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • in some embodiments, the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves and radiate them through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
• the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, etc.
• the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
• the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, quantum dot light emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, etc.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element can be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals.
  • Video codecs are used to compress or decompress digital video.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
• Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
• Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
• Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
• the receiver 170B can be placed close to the human ear to receive the voice.
• the microphone 170C, also called a "mike" or a "mic", is used to convert sound signals into electrical signals.
  • the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C.
  • the earphone interface 170D is used for connecting wired earphones.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
• the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can also be used for image stabilization, navigation, and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the bone conduction sensor 180M can acquire vibration signals.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
• the electronic device 100 can detect a user operation through the sensor module 180, and the processor 110 can respond to the user operation by performing intent identification based on the user interface displayed on the display screen 194, and send instruction information to the electronic device 200 through the mobile communication module 150 and/or the wireless communication module 160 based on the identified intent information; after receiving the instruction information, the electronic device 200 may output multimedia data corresponding to the intent information, for example, display a navigation interface corresponding to the navigation intent.
• the electronic device 100 detects the user's touch operation on the electronic device 100 through the pressure sensor 180A and/or the touch sensor 180K, such as tapping the display screen 194 with a knuckle, or sliding a knuckle, two fingers, or three fingers on the display screen 194, etc.
  • the electronic device 100 detects the user's shaking operation and hand-waving operation through the gyro sensor 180B and/or the acceleration sensor 180E.
  • the electronic device 100 detects the user's gesture operation through the camera 193 , and this application does not limit the module for detecting the user's operation.
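• For illustration only, the following is a minimal sketch of how a shake operation might be detected through the accelerometer on an Android device; the threshold value and callback wiring are assumptions, and the actual detection logic of the electronic device 100 is not limited by this example.

    // Java (Android): detecting a shake via the accelerometer, one possible
    // realization of "detecting the user's shaking operation" described above.
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class ShakeDetector implements SensorEventListener {
        private static final float SHAKE_THRESHOLD_G = 2.7f; // assumed threshold, in g
        private final Runnable onShake;

        public ShakeDetector(Runnable onShake) { this.onShake = onShake; }

        public void start(SensorManager sensorManager) {
            Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float gX = event.values[0] / SensorManager.GRAVITY_EARTH;
            float gY = event.values[1] / SensorManager.GRAVITY_EARTH;
            float gZ = event.values[2] / SensorManager.GRAVITY_EARTH;
            // The magnitude is about 1 g at rest; a clear spike suggests a shake.
            double gForce = Math.sqrt(gX * gX + gY * gY + gZ * gZ);
            if (gForce > SHAKE_THRESHOLD_G) {
                onShake.run(); // e.g., trigger intent recognition of the current interface
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }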
  • FIG. 2B exemplarily shows a schematic diagram of a hardware structure of the electronic device 200 .
• the embodiment will be specifically described by taking the electronic device 200 as an example. It should be understood that the electronic device 200 shown in FIG. 2B is only one example, and that the electronic device 200 may have more or fewer components than those shown in FIG. 2B; two or more components may be combined, or different component configurations may be used.
  • the electronic device 200 may include a processor 201 , a memory 202 , a wireless communication module 203 , an antenna 204 and a display screen 205 .
• the electronic device 200 may further include a wired communication module (not shown). Wherein:
  • Processor 201 may be used to read and execute computer readable instructions.
  • the processor 201 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
• the arithmetic unit is mainly responsible for performing operations, and the register is mainly responsible for saving the register operands and intermediate operation results temporarily stored during the execution of instructions.
  • the hardware architecture of the processor 201 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, or an NP architecture, and so on.
  • the processor 201 may also be configured to generate signals sent by the wireless communication module 203 , such as Bluetooth broadcast signals and beacon signals.
  • the memory 202 is coupled with the processor 201 for storing various software programs and/or sets of instructions.
  • the memory 202 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices or other non-volatile solid-state storage devices.
  • the memory 202 can store operating systems, such as embedded operating systems such as uCOS, VxWorks, and RTLinux.
  • the memory 202 can also store a communication program, which can be used to communicate with the electronic device 100, or other devices.
  • the wireless communication module 203 may include one or more items of a WLAN communication module 203A and a Bluetooth communication module 203B.
  • the Bluetooth communication module 203B can be integrated with other communication modules (for example, the WLAN communication module 203A).
• one or more of the WLAN communication module 203A and the Bluetooth communication module 203B can listen to signals transmitted by other devices, such as measurement signals and scanning signals, and can send response signals, such as measurement responses and scanning responses, so that other devices can discover the electronic device 200 and establish a wireless communication connection with it through one or more of Bluetooth, WLAN, or other short-range wireless communication technologies to perform data transmission.
• the WLAN communication module 203A can transmit signals, such as broadcast detection signals and beacon signals, so that the router can discover the electronic device 200, and establish a wireless communication connection with the router through the WLAN to connect to the electronic device 100 and the network device 300.
  • the wired communication module (not shown) can be used to establish a connection with devices such as a router through a network cable, and connect the electronic device 100 and the network device 300 through the router.
  • the antenna 204 can be used to transmit and receive electromagnetic wave signals.
  • the antennas of different communication modules can be multiplexed or independent of each other, so as to improve the utilization rate of the antennas.
  • the antenna of the Bluetooth communication module 203B can be multiplexed as the antenna of the WLAN communication module 203A.
  • the display screen 205 can be used to display images, videos, and the like.
  • the display screen 205 includes a display panel.
• the display panel can be a liquid crystal display, an organic light-emitting diode, an active-matrix organic light-emitting diode, a flexible light-emitting diode, a quantum dot light-emitting diode, or the like.
  • the electronic device 200 may include 1 or N display screens 205 , where N is a positive integer greater than 1.
• the electronic device 200 may further include a sensor. For a specific example, refer to the sensor module 180 shown in FIG. 2A above, which will not be repeated here.
• the electronic device 200 can receive the instruction information sent by the electronic device 100 through the wireless communication module 203 and/or the wired communication module (not shown), and based on the instruction information, the processor 201 can display the user interface corresponding to the intent information through the display screen 205, for example, display the navigation interface corresponding to the navigation intent.
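• For illustration only, the following is a minimal sketch of how the electronic device 200 might parse received instruction information and display the corresponding interface; the JSON wire format used here (an object with "intent" and "payload" fields) is an assumption for the example and is not defined by this application.

    // Java (Android): handling instruction information on the electronic device 200.
    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;
    import org.json.JSONObject;

    public class InstructionHandler {
        public void onInstructionReceived(Context context, String message) throws Exception {
            JSONObject instruction = new JSONObject(message);
            String intentType = instruction.getString("intent");   // assumed field
            String payload = instruction.getString("payload");     // assumed field
            if ("navigation".equals(intentType)) {
                // Hand the destination to a map application via a standard geo: URI.
                Uri uri = Uri.parse("geo:0,0?q=" + Uri.encode(payload));
                context.startActivity(new Intent(Intent.ACTION_VIEW, uri)
                        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK));
            } else if ("video".equals(intentType)) {
                // e.g., search for the video named by the payload and start playback.
            }
        }
    }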
  • FIG. 2C exemplarily shows a schematic diagram of a hardware structure of the network device 300 .
• the network device 300 may include one or more processors 301, communication interfaces 302, and memories 303, wherein the processors 301, the communication interfaces 302, and the memories 303 may be connected through a bus or in other ways; the embodiment of the present application takes connection through the bus 304 as an example. Wherein:
  • Processor 301 may be composed of one or more general-purpose processors, such as CPUs.
  • the processor 301 can be used to run related program codes of the device control method.
  • the communication interface 302 may be a wired interface (such as an Ethernet interface) or a wireless interface (such as a cellular network interface or a wireless local area network interface), and is used for communicating with other nodes. In the embodiment of the present application, the communication interface 302 can specifically be used to communicate with the electronic device 100 and the electronic device 200 .
• the memory 303 may include a volatile memory, such as a RAM; the memory may also include a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • the memory 303 may also include a combination of the above-mentioned kinds of memories.
  • the memory 303 may be used to store a set of program codes, so that the processor 301 calls the program codes stored in the memory 303 to implement the method implemented on the server in the embodiment of the present application.
• the memory 303 may also be a storage array, or the like.
  • the network device 300 may include multiple servers, for example, a web server, a background server, a download server, etc.
  • the hardware structure of these multiple servers may refer to the hardware structure of the network device 300 shown in FIG. 2C .
  • the network device 300 shown in FIG. 2C is only an implementation manner of the embodiment of the present application. In practical applications, the network device 300 may include more or fewer components, which is not limited here.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the software system of the layered architecture may be an Android system, a Huawei mobile services (HMS) system, or other software systems.
  • the embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2D exemplarily shows a schematic diagram of a software architecture of the electronic device 100 .
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
• the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can consist of a series of application packages.
  • the application package may include application programs such as camera, map, HiCar, music, chat application, entertainment application, home application, and learning application.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, an intent transfer service, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the intent transfer service may perform intent recognition based on an application program at the application layer. In some embodiments, the intent transfer service may perform intent recognition based on a user interface of the application program displayed on the electronic device 100 .
• the electronic device 100 can implement the recognized intent through the electronic device 200. In one case, the service flow used to realize the intent on the electronic device 100 can be transferred to the electronic device 200; in another case, the electronic device 100 can send the identified intent to the electronic device 200 for implementation by the electronic device 200.
• the intent transfer service may provide services for system applications at the application layer, so as to implement intent recognition for third-party applications at the application layer. For example, the system application is a HiCar application, and the third-party application is a map application, a chat application, an entertainment application, a home application, a learning application, or the like.
  • the intent transfer service may be a built-in service of an application at the application layer, for example, a server corresponding to the application (abbreviated as an application server) may provide the intent transfer service for the application.
• the electronic device 100 may send the content of the currently displayed user interface to the above-mentioned application server for intent recognition, and the recognized intent information may then be implemented by the electronic device 100 and/or the electronic device 200.
• the intent transfer service may correspond to the intent parsing module shown in FIG. 1B, and optionally to the page parsing module and the intent triggering module.
  • the application program at the application program layer may correspond to the intent trigger module shown in FIG. 1B . In some embodiments, the application program of the application program layer may correspond to the display module shown in FIG. 1B .
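• For illustration only, the following is a minimal sketch of rule-based intent recognition over the text of a displayed interface; the pattern and service label are assumptions, and the actual recognition method of the intent transfer service (for example, a trained model) is not limited by this example.

    // Java: recognizing a candidate navigation intent from on-screen text.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public final class IntentParser {
        // Assumed pattern for street addresses such as
        // "No. 13, Maojiawan Hutong, Dongcheng District, Beijing".
        private static final Pattern ADDRESS =
                Pattern.compile("No\\.\\s*\\d+(?:,\\s*[^,]+)+");

        /** Returns intent information, or null if no known service is recognized. */
        public static String parse(String screenText) {
            Matcher m = ADDRESS.matcher(screenText);
            if (m.find()) {
                // Intent information: perform navigation for the matched destination.
                return "navigation:" + m.group();
            }
            return null;
        }
    }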
  • Android runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
• the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the sensor driver may correspond to the detection module shown in FIG. 1B .
  • the display screen 194 displays the user interface of the chat application, and the user interface is used to display the address information of the place 1 .
• when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
• the kernel layer processes the touch operation into a raw input event (including information such as touch coordinates and the time stamp of the touch operation). The raw input event is stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event.
• the interface of the application framework layer is called to start the map application, and then the display driver is started by calling the kernel layer, so that a navigation interface is displayed through the display screen 194, and the destination in the navigation interface is the above-mentioned place 1.
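• For illustration only, on a standard Android device the last step above can be realized with the framework's intent mechanism; the following minimal sketch assumes the address of place 1 has already been extracted, and the concrete interfaces used by the map application in this embodiment are not limited by it.

    // Java (Android): asking a map application to navigate to place 1.
    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;

    public final class NavigationLauncher {
        public static void navigateTo(Context context, String address) {
            // A geo: URI query lets the map application resolve the destination.
            Uri destination = Uri.parse("geo:0,0?q=" + Uri.encode(address));
            Intent mapIntent = new Intent(Intent.ACTION_VIEW, destination);
            if (mapIntent.resolveActivity(context.getPackageManager()) != null) {
                context.startActivity(mapIntent); // the display then shows the navigation interface
            }
        }
    }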
  • the software architecture of the electronic device 200 is similar to the software architecture of the electronic device 100 , and a specific example can be referred to FIG. 2D .
• Scenario 1: the electronic device 100 is a smart phone, and the electronic device 200 is a vehicle-mounted computer.
• when the electronic device 100 displays a user interface including address information and receives a user operation (such as a shake operation), the intended operation can be performed: set the place represented by the address information as the destination and perform navigation, and send the corresponding audio streams and video streams to the electronic device 200 for output.
  • the user does not need to manually input the address information in the map application on the electronic device 200 and operate the navigation control, that is, the user does not need to manually trigger the execution of the intended operation, and the interaction is more efficient and convenient.
• FIGS. 3A-3C exemplarily show an embodiment of a user interface in an application scenario (such as the above-mentioned scenario 1).
  • the electronic device 100 may display a user interface 310 of a chat application, and the user interface 310 may include a session name 311 and a chat window 312 .
• the conversation name 311 may include the name of the chat partner "Xiao Wang", but is not limited thereto; if the current conversation is a multi-person conversation, the conversation name 311 may include the name of the current conversation, such as a group name.
• the chat window 312 can be used to display the chat record of the current session, such as the message 3121 and the message 3122 sent by the chat partner. The message 3121 includes the text "see here"; the message 3122 includes the place name 3122A (including the text "Beijing Station") and the location information 3122B of "Beijing Station" (including the text "No. 13, Maojiawan Hutong, Dongcheng District, Beijing"), and the message 3122 is a location card representing the geographic location "Beijing Station".
  • the electronic device 100 can be connected with the electronic device 200, for example, through the HUAWEI HiCar function.
  • the electronic device 200 may display a desktop 320, and the desktop 320 may include one or more application icons, such as a map application icon, a call application icon, a music application icon, a radio application icon, a driving recorder application icon, and a setting application icon.
  • Desktop 320 may also include a main menu control that may be used to return to desktop 320 .
  • the electronic device 100 may receive a user operation (such as shaking the electronic device 100), and in response to the user operation, identify the currently displayed user interface 310.
• the electronic device 100 recognizes the message 3122 to obtain the location information of the geographic location "Beijing Station", and determines the intent information based on this: perform navigation for the geographic location "Beijing Station".
• the electronic device 100 may also combine the message 3121 to determine that the user wants to go to the geographic location "Beijing Station", and determine the above intent information based on this.
  • the intention information corresponds to the navigation service, or it can be understood that the intention information corresponds to the message 3122 (location card).
• the electronic device 100 can execute the intent operation corresponding to the intent information based on the obtained intent information: set the destination as the location information of the geographic location "Beijing Station" and perform navigation, and then send the audio and video stream corresponding to the intent operation to the electronic device 200 for output.
  • the intended operation is used to realize the navigation service, and it can also be understood that the intended operation corresponds to the message 3122 (location card).
  • the electronic device 200 may display a user interface 330 of a map application, and the user interface 330 is used to display relevant information of a navigation service.
• the user interface 330 may include a map window 331, a route window 332, and a prompt box 333. Wherein:
  • the map window 331 is used to display a schematic diagram of the selected navigation route on the map.
  • Route window 332 includes navigation information 332A, route 332B, route 332C, and navigation controls 332D.
  • the navigation information 332A includes the text "Go to Maojia, Dongcheng District, Beijing", which is used to characterize the location information of the destination of the navigation.
  • the navigation information 332A only shows part of the location information of the destination.
• the electronic device 200 can display all the location information of the destination in response to a touch operation (for example, a click operation) acting on the navigation information 332A.
• Route 332B and route 332C can represent two navigation routes.
• the route 332B is highlighted (for example, the text of the route 332B is bolded and highlighted, but the text of the route 332C is not), representing that the currently selected navigation route is the navigation route indicated by the route 332B.
  • the map window 331 is used to display a schematic diagram of the navigation route indicated by the route 332B on the map.
• the electronic device 200 may, in response to a touch operation (such as a click operation) acting on the route 332C, cancel the highlighting of the route 332B and highlight the route 332C; in this case, the selected navigation route is the navigation route indicated by the route 332C, and the map window 331 will display a schematic diagram of the navigation route indicated by the route 332C on the map.
• the navigation control 332D can be used to start the navigation function, and the electronic device 200 can perform navigation based on the currently selected route (the navigation route indicated by the route 332B in the user interface 330) in response to a touch operation (such as a click operation) acting on the navigation control 332D.
  • the prompt box 333 is used for prompting the information of the current navigation service.
• the prompt box 333 includes the text "Navigating for you to No. 13, Maojiawan Hutong, Dongcheng District, Beijing, which is in the chat with Xiao Wang", which can indicate the detailed location information of the destination of the navigation service, that the navigation service is triggered by the chat session with the chat partner "Xiao Wang" in the chat application, and that the detailed location information of the destination is obtained from the chat session.
• the address information (that is, the message 3122) included in the user interface 310 displayed by the electronic device 100 is in the form of a card, but is not limited thereto; in other examples, it may also be address information in the form of text. For a specific example, refer to FIG. 3B below, which is not limited in the present application.
  • the electronic device 100 can display a user interface 340 of a chat application.
• the user interface 340 is similar to the user interface 310 shown in the upper figure of FIG. 3A, except that the chat window includes a message 341 and a message 342, where the message 341 includes the text "see you there", and the message 342 includes the text "see you at Beijing Station".
  • the electronic device 100 can be connected to the electronic device 200, and the electronic device 200 can display the desktop 320 shown in the upper diagram of FIG. 3A.
  • the electronic device 100 may receive a user operation (for example, shake the electronic device 100), and in response to the user operation, identify the currently displayed user interface 340.
• the electronic device 100 identifies the message 342 to obtain the place "Beijing Station" where the user wants to go, and determines the intent information based on this: perform navigation for the geographic location "Beijing Station".
  • the intention information corresponds to the message 342 .
• the electronic device 100 can execute the intent operation corresponding to the intent information based on the obtained intent information: set the destination as the location information of the geographic location "Beijing Station" and perform navigation, and then send the audio and video stream corresponding to the intent operation to the electronic device 200 for output; refer to the lower figure of FIG. 3A for details.
• the address information displayed by the electronic device 100 is in the form of a chat message (that is, the message 3122 in the user interface 310, and the message 342 in the user interface 340), but is not limited thereto; a specific example in another form can be found in FIG. 3C below, which is not limited in this application.
  • the electronic device 100 may display a user interface 350 of an entertainment application.
  • User interface 350 includes place name 351 and location control 352 .
• the location name 351 includes the text "Capital Museum", which is the name of the location displayed on the user interface 350.
• the location control 352 includes the text "No. 16, Fuxingmenwai Street, Xicheng District", which is the location information of the location displayed on the user interface 350 and can represent the address information of the location "Capital Museum".
  • the electronic device 100 can be connected to the electronic device 200, and the electronic device 200 can display the desktop 320 shown in the upper diagram of FIG. 3A.
  • the electronic device 100 may receive a user operation (for example, shake the electronic device 100), and in response to the user operation, identify the currently displayed user interface 350 to obtain the location information of a place named "Capital Museum", and determine the intention information based on this: Navigate to the location "Capital Museum”.
  • the intention information corresponds to the position control 352 .
• the electronic device 100 can send instruction information to the electronic device 200 based on the obtained intent information, and the electronic device 200 can perform the intent operation corresponding to the intent information based on the instruction information: set the destination as the location information of the location "Capital Museum" and perform navigation; the specific example is similar to the lower figure of FIG. 3A, except that the destination is different, so the navigation route is also different.
• the intent operation corresponds to the position control 352.
• the electronic device 100 executes the intent operation corresponding to the intent information by itself, and then sends the audio and video stream corresponding to the intent operation to the electronic device 200 for output; this can be understood as projecting the content on the electronic device 100 to the electronic device 200 for output, and what is actually triggered is the service on the electronic device 100.
  • the electronic device 100 instructs the electronic device 200 to perform the intended operation corresponding to the intention information, and what is actually triggered is the service on the electronic device 200 .
• the embodiment shown in FIGS. 3A-3B may also trigger a service on the electronic device 200, and the embodiment shown in FIG. 3C may also trigger a service on the electronic device 100.
  • the following embodiments are described by taking triggering a service on the electronic device 200 as an example, but there is no limitation in specific implementation.
  • the service type corresponding to the intention information determined by the electronic device 100 is related to the device type of the electronic device 200 connected to the electronic device 100 .
• Scenario 2: the electronic device 100 is a smart phone, and when the electronic device 100 displays a user interface including address information and video information, a user operation (such as a shake operation) is received. If the electronic device 200 connected to the electronic device 100 is a vehicle-mounted computer, the electronic device 100 sends instruction information to the electronic device 200 in response to the user operation, and the electronic device 200 can perform the intended operation corresponding to the navigation service based on the instruction information: set the place represented by the above address information as the destination and perform navigation; a specific example can be found in FIG. 4A.
• if the electronic device 200 connected to the electronic device 100 is a smart TV, the electronic device 100 sends instruction information to the electronic device 200 in response to the user operation, and the electronic device 200 can perform the intended operation corresponding to the video service based on the instruction information: play the video represented by the above video information; see FIG. 4B for a specific example. This is more in line with the needs of users in actual application scenarios, and further improves the accuracy of interaction.
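• For illustration only, the preset correspondence between the connected device type and the prioritized service described above can be kept in a simple table; the entries below are assumptions for the example.

    // Java: choosing which recognized service to act on for a given device type.
    import java.util.Map;

    public final class ServiceSelector {
        private static final Map<String, String> PREFERRED_SERVICE = Map.of(
                "vehicle-mounted computer", "navigation",
                "smart TV", "video",
                "smart cooking machine", "recipe",
                "tablet computer", "learning");

        /** Returns the service type corresponding to the intent information. */
        public static String pick(String deviceType) {
            // Returns null when no correspondence is preset, in which case
            // the user may be asked to choose (see FIG. 4C below).
            return PREFERRED_SERVICE.get(deviceType);
        }
    }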
• FIGS. 4A-4B exemplarily show an embodiment of a user interface in another application scenario (such as the above-mentioned scenario 2).
  • the electronic device 100 can display a user interface 410 of a chat application.
• the user interface 410 is similar to the user interface 310 shown in the upper diagram of FIG. 3A, and may include a message 411, a message 412, a message 413, and a message 414.
  • the message 411 and the message 412 are respectively the message 3121 and the message 3122 in the user interface 310 shown in the upper diagram of FIG. 3A , which will not be repeated here.
• the message 413 includes the text "Watch this", and the message 414 is a message displaying a video in the form of a card; the message 414 includes the text "My Day", which is the name of the displayed video.
  • the electronic device 100 can be connected with the electronic device 200 (vehicle computer), and the electronic device 200 (vehicle computer) can display the desktop 320 shown in the upper diagram of FIG. 3A .
  • the electronic device 100 may receive a user operation (such as shaking the electronic device 100), and in response to the user operation, identify the currently displayed user interface 410.
• the electronic device 100 identifies the user interface 410 and obtains the message 412 corresponding to the navigation service and the message 414 corresponding to the video service. The electronic device 100 can determine the corresponding navigation service according to the device type (vehicle-mounted computer) of the connected device, for example, the correspondence between the vehicle-mounted computer and the navigation service is preset, so the electronic device 100 recognizes the message 412 and determines the intent information corresponding to the navigation service: perform navigation for the geographic location "Beijing Station".
• the electronic device 100 can send instruction information to the electronic device 200 (vehicle-mounted computer) based on the obtained intent information, and the electronic device 200 (vehicle-mounted computer) can perform the intent operation corresponding to the intent information based on the instruction information: set the destination as the location information of the geographic location "Beijing Station" and perform navigation. For details, please refer to the lower diagram of FIG. 4A.
  • the user interface displayed by the electronic device 200 in the lower diagram of FIG. 4A is consistent with the user interface displayed by the electronic device 200 in the lower diagram of FIG. 3A .
  • the electronic device 100 may display the user interface 410 shown in the upper diagram of FIG. 4A .
• the electronic device 100 can be connected with the electronic device 200 (smart TV), and the electronic device 200 (smart TV) can display a desktop 420, and the desktop 420 can include one or more categories, such as a TV drama category, a movie category, an animation category, a children's category, and a game category.
  • the electronic device 100 may receive a user operation (such as shaking the electronic device 100), and in response to the user operation, identify the currently displayed user interface 410.
• the electronic device 100 identifies the user interface 410 and obtains the message 412 corresponding to the navigation service and the message 414 corresponding to the video service. The electronic device 100 can determine the corresponding video service according to the device type (smart TV) of the connected device, for example, the correspondence between the smart TV and the video service is preset, so the electronic device 100 recognizes the message 414 and determines the intent information corresponding to the video service: play the video named "My Day".
  • the electronic device 100 can send instruction information to the electronic device 200 (smart TV) based on the obtained intention information, and the electronic device 200 (smart TV) can perform the intention operation corresponding to the intention information based on the instruction information: play a video called "My Day"
  • the electronic device 200 may display a user interface 430, the user interface 430 includes a title 431, and the title 431 includes the text "My Day", which is the name of the currently playing video.
• the user may select the service information to be identified, and the electronic device 100 may determine the service type corresponding to the intent information according to the user's selection.
  • the electronic device 100 may display the user interface 410 shown in the upper diagram of FIG. 4A .
  • the electronic device 100 may receive a user operation (eg, shake the electronic device 100 ), and in response to the user operation, display the user interface 440 shown in the right diagram of FIG. 4C .
  • the user interface 440 may include prompt information 441, prompt box 442, and prompt box 443.
• the prompt information 441 includes the text "Please select the service to be transferred", which is used to prompt the user to select the service information to be identified.
• the prompt box 442 includes a service name 442A and service information 442B.
• the service name 442A includes the text "map navigation", and the service information 442B is the message 412 in the user interface 410 shown in the left diagram of FIG. 4C.
• the electronic device 100 may respond to a touch operation (such as a click operation) acting on the prompt box 442, determine that the service information to be recognized is the message 412 in the user interface 410, and identify the message 412 to obtain the intent information corresponding to the navigation service: perform navigation for the geographic location "Beijing Station".
  • the electronic device 100 can send indication information to the connected electronic device 200 based on the obtained intention information, and the electronic device 200 can perform the intention operation corresponding to the intention information based on the indication information.
• the interface examples before and after the electronic device 200 receives the indication information can be respectively referred to: the user interface 320 shown in the upper diagram of FIG. 4A, and the user interface 330 shown in the lower diagram of FIG. 4A.
  • the prompt box 443 includes a service name 443A and service information 443B.
• the service name 443A includes the text "play video", and the service information 443B is the message 414 in the user interface 410 shown in the left diagram of FIG. 4C.
• the electronic device 100 may respond to a touch operation (such as a click operation) acting on the prompt box 443, determine that the service information to be identified is the message 414 in the user interface 410, and identify the message 414 to obtain the intent information corresponding to the video service: play the video named "My Day".
  • the electronic device 100 can send indication information to the connected electronic device 200 based on the obtained intention information, and the electronic device 200 can perform the intention operation corresponding to the intention information based on the indication information.
• for an interface example of the electronic device 200 after performing the intent operation, refer to FIG. 4B.
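• For illustration only, the user selection described above can be presented as a simple list dialog; sendIndicationInfo below is a hypothetical helper, and the dialog styling and wiring are assumptions for the example.

    // Java (Android): letting the user choose among the recognized services,
    // analogous to the prompt boxes 442 and 443 in the user interface 440.
    import android.app.Activity;
    import android.app.AlertDialog;

    public final class ServicePicker {
        public static void show(Activity activity, String[] candidates) {
            // candidates, e.g.: {"map navigation: Beijing Station", "play video: My Day"}
            new AlertDialog.Builder(activity)
                    .setTitle("Please select the service to be transferred")
                    .setItems(candidates, (dialog, which) ->
                            sendIndicationInfo(candidates[which])) // hypothetical helper
                    .show();
        }

        private static void sendIndicationInfo(String selectedService) {
            // Send the corresponding indication information to the electronic device 200.
        }
    }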
• Scenario 3: the electronic device 100 is a smart phone, and the electronic device 200 is a smart TV.
• when the electronic device 100 displays a user interface including video information and receives a user operation (such as a shake operation), instruction information may be sent to the electronic device 200.
• the electronic device 200 may perform an intended operation based on the indication information: play the video represented by the video information. In this way, the user does not need to manually trigger the execution of the intended operation, and the interaction is more efficient and convenient.
  • Fig. 5 exemplarily shows an embodiment of a user interface in another application scenario (such as the above-mentioned scenario 3).
  • the electronic device 100 may display a user interface 510 of an entertainment application.
• the user interface 510 includes a name 521, and the name 521 includes the text "Movie 1", which is the name of the movie displayed on the user interface 510.
• the user interface 510 displays detailed information about "Movie 1", such as related videos, stills, and reviews.
• the electronic device 100 can be connected to the electronic device 200, and the electronic device 200 can display the desktop 420 shown in the upper figure of FIG. 5, where the user can search for the video to view and play it.
• the electronic device 100 may receive a user operation (such as shaking the electronic device 100), and in response to the user operation, recognize the currently displayed user interface 510 to obtain information about the movie named "Movie 1", and determine the intent information based on this: play the movie named "Movie 1".
• the electronic device 100 can send instruction information to the electronic device 200 based on the obtained intent information, and the electronic device 200 can perform the intent operation corresponding to the intent information based on the instruction information: play the movie named "Movie 1"; for details, please refer to the lower diagram of FIG. 5.
  • the electronic device 200 may display a user interface 520, and the user interface 520 includes a title 521, and the title 521 includes text "Movie 1", which is the name of the currently playing video.
• the electronic device 100 can obtain the video stream of "Movie 1" through the video application, and continuously send the video stream to the electronic device 200 for playback, which can be understood as casting the video on the electronic device 100 to the electronic device 200 for playback, so that the user does not need to open the video application and the video playback interface on the electronic device 100 (smart phone), operate the screen projection control, and select the device (smart TV) to be projected to.
• in other embodiments, after the electronic device 200 receives the instruction information, it searches for the video and plays it, which can be understood as playing the video on the electronic device 200 itself, so that the user does not need to manually search for the video on the electronic device 200 (smart TV) (e.g., through the search control 421 in the user interface 420 shown in the upper diagram of FIG. 5). Therefore, user operations are simplified and interaction efficiency is greatly improved.
  • the video information displayed by the electronic device 100 is displayed in the movie introduction, but it is not limited thereto.
• the video information may also be in the form of a chat message, such as the message 414 in the user interface 410 shown in FIG. 4B; see FIG. 4B for a specific scenario example, which is not limited in this application.
• Scenario 4: the electronic device 100 is a smart phone, and the electronic device 200 is a smart cooking machine.
• when the electronic device 100 displays a user interface including recipe information and receives a user operation (such as a shake operation), instruction information may be sent to the electronic device 200.
  • the electronic device 200 may perform an intended operation based on the indication information: perform work according to the recipe information. In this way, the user does not need to search for the recipe on the smart cooking machine for cooking, that is, the user does not need to manually trigger the execution of the intended operation, and the interaction is more efficient and convenient.
  • Fig. 6 exemplarily shows an embodiment of a user interface in another application scenario (such as the above-mentioned scenario 4).
• the electronic device 100 may display a user interface 610 of the home application, the user interface 610 includes a title 611, and the title 611 includes the text "Crispy Pork Belly", which is the name of the recipe displayed on the user interface 610.
  • the user interface 610 is used to display the detailed information of the recipe named "Crispy Pork Belly", such as ingredients information 612 and preparation steps 613 .
  • the electronic device 100 can be connected with the electronic device 200, and the electronic device 200 can display a homepage 620, and the homepage 620 can include one or more categories, such as daily recipe category, Chinese category and Western category.
  • the electronic device 100 may receive a user operation (for example, shake the electronic device 100), and in response to the user operation, identify the currently displayed user interface 610 to obtain the recipe information named "crispy pork belly", and determine the intention information based on this: Cook the dish corresponding to this recipe.
  • the electronic device 100 can send instruction information to the electronic device 200 based on the obtained intention information, and the electronic device 200 can perform the intention operation corresponding to the intention information based on the instruction information: work according to the recipe, refer to the lower figure of FIG. 6 for details.
  • the electronic device 200 may display a user interface 630 including a title 631 and step information 632 .
  • Heading 631 includes the text "Crispy Pork Belly", the name of the recipe currently in use.
  • the step information 632 is the preparation step of the currently used recipe, which corresponds to the preparation step 613 in the user interface 610 shown in the upper diagram of FIG. 6 .
  • the user interface 630 may represent that the electronic device 200 is currently working on a recipe named "Crispy Pork Belly".
• In other embodiments, when the electronic device 100 identifies the currently displayed user interface 610, it may only recognize the dish name "crispy pork belly" on the recipe and determine the intent information based on this: cook the dish named "crispy pork belly". After receiving the indication information, the electronic device 200 may perform the intent operation corresponding to the intent information: search for the dish name to obtain the corresponding recipe, and work according to the found recipe.
• Scenario 5: The electronic device 100 is a smartphone used by parents, and the electronic device 200 is a tablet computer used by children to study.
• When the electronic device 100 displays a user interface including learning information, instruction information may be sent to the electronic device 200. The electronic device 200 may perform an intended operation based on the indication information: display all or part of the learning information. In this way, parents do not need to search for the learning information on the tablet computer for their children to learn and use; that is, there is no need for the user to manually trigger the execution of the intended operation, and the interaction is more efficient and convenient.
  • FIG. 7A exemplarily shows an embodiment of a user interface in another application scenario (such as the above-mentioned scenario 5).
• As shown in the upper diagram of FIG. 7A, the electronic device 100 may display a user interface 710 of a learning application. The user interface 710 includes a title 711, and the title 711 includes the text "English test paper", indicating that the user interface 710 is used to display the details of the test paper named "English test paper".
  • the user interface 710 also includes detailed information of a plurality of exercises, such as an exercise 712 and an exercise 713.
  • the exercise 712 includes a topic 712A and an answer 712B
  • the exercise 713 includes a topic 713A and an answer 713B.
  • the user interface 710 also includes an exam control 714, which is used to provide a simulated exam function for the current exam paper.
  • the electronic device 100 can be connected with the electronic device 200, and the electronic device 200 can display a desktop 720, and the desktop 720 can include one or more application icons, such as a clock application icon, a calendar application icon, a gallery application icon, and a setting application icon.
• The electronic device 100 may receive a user operation (such as shaking the electronic device 100), and in response to the user operation, identify the currently displayed user interface 710 to obtain the information of the test paper named "English test paper" and determine the intent information based on this: conduct a mock exam for this test paper.
  • the electronic device 100 can send instruction information to the electronic device 200 based on the obtained intention information, and the electronic device 200 can perform the intention operation corresponding to the intention information based on the instruction information: enable the mock exam function of the test paper, for details, please refer to the lower diagram of FIG. 7A .
  • the electronic device 200 can display a user interface 730 , and the user interface 730 includes a title 731 , a submission control 732 , topic information 733 and a switching control 734 .
  • the title 731 includes the text "English test paper", which is the name of the test paper of the mock test currently in progress.
  • the submit control 732 is used to end the current mock exam and display the result of the mock exam.
  • Topic information 733 displays the information of the topic currently being viewed, and the switch control 734 is used to switch the information of the topic currently being viewed.
• The user interface 730 may indicate that the mock exam function for the test paper named "English test paper" is currently enabled.
• The electronic device 200 can respond to a touch operation (such as a click operation) acting on the submission control 732 by displaying the results of the mock exam and sending the results of the mock exam to the electronic device 100, so that parents can efficiently, conveniently, and accurately grasp the child's learning situation.
  • the user interface 710 displayed by the electronic device 100 includes questions and answers, but the user interface 730 displayed after the electronic device 200 receives the instruction information only includes questions and does not include answers.
• Parents neither need to find the corresponding exercises on the electronic device 200 nor need to manually open the mock exam function, which further reduces user operations and improves interaction efficiency.
• In other embodiments, the electronic device 100 may respond to the operation of shaking the electronic device 100 by identifying the exercise 712 and/or the exercise 713 in the currently displayed user interface 710 and determining the intent information based on this: practice the exercise 712 and/or the exercise 713.
• After receiving the indication information, the electronic device 200 can perform the corresponding intent operation: display the topic 712A in the exercise 712 and/or the topic 713A in the exercise 713 for children to practice; the specific example is similar to the user interface 730 shown in the lower diagram of FIG. 7A.
• In some embodiments, the user can select the service information to be identified, and the service content corresponding to the intent information is determined according to the user's selection.
  • the electronic device 100 may display the user interface 710 shown in the upper diagram of FIG. 7A .
• The electronic device 100 may receive a user operation (e.g., shaking the electronic device 100), and in response to the user operation, display the user interface 740 shown in the right diagram of FIG. 7B.
  • the user interface 740 may include a prompt message 741, a prompt box 742, and a prompt box 743.
• The prompt message 741 includes the text "Please select the content to be transferred", which is used to prompt the user to select the business information to be identified.
• The prompt box 742 corresponds to the topic 712A of the exercise 712 in the user interface 710 shown in the left diagram of FIG. 7B.
• The electronic device 100 may respond to a touch operation (such as a click operation) acting on the prompt box 742 by determining that the business information to be identified is the exercise 712, and identify the exercise 712 to obtain the intent information: practice the exercise 712. After the electronic device 200 receives the indication information sent by the electronic device 100 based on the intent information, it can perform the corresponding intent operation: display the topic 712A in the exercise 712; the specific example is similar to the user interface 730 shown in the lower figure of FIG. 7A.
• The prompt box 743 corresponds to the topic 713A of the exercise 713 in the user interface 710 shown in the left diagram of FIG. 7B. The electronic device 100 may respond to a touch operation (such as a click operation) acting on the prompt box 743 by determining that the business information to be identified is the exercise 713, and identify the exercise 713 to obtain the intent information: practice the exercise 713. After the electronic device 200 receives the indication information sent by the electronic device 100 based on the intent information, it can perform the corresponding intent operation: display the topic 713A in the exercise 713; the specific example is similar to the user interface 730 shown in the lower figure of FIG. 7A.
  • the electronic device 200 may also be a device such as a learning machine.
• In the above examples, the user operation that triggers the intent flow is a shake operation, but it is not limited thereto.
• In some embodiments, the trigger operation can also be a knuckle sliding operation, as shown in (A) of FIG. 8.
• In some embodiments, the trigger operation can also be a two-finger sliding operation, as shown in (B) of FIG. 8.
• In some embodiments, the trigger operation can also be a gesture operation, see (C) of FIG. 8, but it is not limited thereto and may also be, for example, knuckle tapping, hand shaking, etc.
• This application does not limit the specific type of the trigger operation.
  • a display method provided in the embodiment of the present application is introduced below.
  • FIG. 9 exemplarily shows a schematic flowchart of a display method provided in the embodiment of the present application.
  • the display method can be applied to the above-mentioned communication system 10 , and the above-mentioned communication system 10 may include the electronic device 100 , the electronic device 200 and the network device 300 .
• The display method may include but is not limited to the following steps:
• S101: The electronic device 100 and the electronic device 200 establish a connection.
• In some embodiments, the electronic device 100 and the electronic device 200 can be directly connected through wired and/or wireless means, such as Bluetooth or a Wi-Fi connection; in other embodiments, the electronic device 100 and the electronic device 200 can be connected through the network device 300. Refer to the description of the connection between the electronic device 100 and the electronic device 200 in FIG. 1A for details.
• S102: The electronic device 100 displays a first interface including first service information.
• The first service information corresponds to a first service, and different service information corresponds to different services. Specific examples are as follows.
  • the first service information is address information corresponding to the navigation service.
  • the message 3122 in the user interface 310 shown in FIG. 3A , the message 342 in the user interface 340 shown in FIG. 3B , or the location control 352 in the user interface 350 shown in FIG. 3C is the first service information.
  • the first service information is video information corresponding to a video service (such as playing video).
  • the information included in the message 414 in the user interface 410 shown in FIG. 4A or in the user interface 510 shown in FIG. 5 (such as the name 521 ) is the first service information.
  • the first service information is recipe information corresponding to a cooking service (such as cooking according to a recipe).
  • the information included in the user interface 610 shown in FIG. 6 (such as the title 611) is the first service information.
  • the first service information is learning information corresponding to a learning service (such as exercise questions).
  • the information included in the user interface 710 shown in FIG. 7A (eg exercise 712, exercise 713) is the first service information.
• S103: The electronic device 100 receives a first user operation.
• The form of the first user operation may include, but is not limited to, touch operations acting on the display screen, voice, body movements (such as gestures), brain waves, and the like.
  • the first user operation is an operation of shaking the electronic device 100 .
  • the first user operation is an operation of sliding a finger joint as shown in (A) of FIG. 8 .
  • the first user operation is the two-finger sliding operation shown in (B) of FIG. 8 .
  • the first user operation is a gesture operation shown in (C) of FIG. 8 .
• The present application does not limit the specific type of the first user operation.
  • the electronic device 100 may detect the first user operation through the detection module shown in FIG. 1B .
• In other embodiments, the electronic device 100 can detect the first user operation through the sensor module 180 shown in FIG. 2A.
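• As an illustration, the shake trigger could be approximated with a simple accelerometer threshold. The following Python sketch is not from the patent; the class name, threshold, and window are assumptions, and on_trigger stands in for reporting the trigger event to the interface parsing module.

```python
import math

# Tuning values; assumptions, not specified by the patent.
SHAKE_THRESHOLD = 15.0   # acceleration magnitude above gravity, in m/s^2
SHAKE_WINDOW_MS = 500    # two peaks within this window count as one shake

class ShakeDetector:
    """Hypothetical detection module: turns raw accelerometer samples into
    the 'first user operation' event (cf. S103)."""

    def __init__(self, on_trigger):
        self.on_trigger = on_trigger   # callback into the interface parsing module
        self.last_peak_ms = None

    def on_sample(self, ts_ms, ax, ay, az):
        # Magnitude of acceleration with gravity subtracted (rough heuristic).
        magnitude = math.sqrt(ax * ax + ay * ay + az * az) - 9.81
        if magnitude < SHAKE_THRESHOLD:
            return
        if self.last_peak_ms is not None and ts_ms - self.last_peak_ms <= SHAKE_WINDOW_MS:
            self.on_trigger()          # report the trigger event (shake detected)
            self.last_peak_ms = None
        else:
            self.last_peak_ms = ts_ms
```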
• In some embodiments, the electronic device 100 can train a fusion model by itself; the fusion model is used to identify user intent, for example, to perform S107.
• In other embodiments, the fusion model is trained by the network device 300.
• For details of training the fusion model, refer to the description of training the fusion model, the interface analysis model and/or the intent analysis model in FIG. 1B, which will not be repeated here.
• In some embodiments, the electronic device 100 has received the fusion model sent by the network device 300 before S103; in this case, after S103, the display method may further include but is not limited to the following three steps:
• S104: The electronic device 100 sends a first request message to the network device 300.
  • the first request message is used to request to acquire configuration information of the fusion model.
• S105: The network device 300 sends a first configuration message to the electronic device 100.
  • the first configuration message includes configuration information of the fusion model.
• S106: The electronic device 100 updates the fusion model based on the first configuration message.
• In other embodiments, the electronic device 100 has not received the fusion model sent by the network device 300 before S103; in this case, the electronic device 100 may request the fusion model from the network device 300.
  • the specific process is similar to the above steps S104-S106 and will not be repeated here.
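• A minimal sketch of the S104-S106 exchange is shown below, assuming an HTTP transport, a placeholder endpoint, and a hypothetical load_config() method on the model object; the patent does not specify the message format.

```python
import json
import urllib.request

# Placeholder endpoint; the real address of the network device 300 is not specified.
CONFIG_URL = "https://network-device-300.invalid/fusion-model/config"

def update_fusion_model(fusion_model, current_version):
    """S104: send the first request message; S105: receive the first
    configuration message; S106: update the fusion model based on it."""
    request = urllib.request.Request(
        CONFIG_URL,
        data=json.dumps({"model": "fusion", "version": current_version}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:       # first configuration message
        config = json.load(response)
    if config.get("version") != current_version:
        fusion_model.load_config(config.get("parameters", {}))  # hypothetical model API
    return fusion_model
```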
• S107: The electronic device 100 identifies the first interface based on the fusion model, and determines the intent information corresponding to the first service information.
  • the electronic device 100 may use the interface content of the first interface as an input of the fusion model to obtain an output: intent information.
• Examples of the intent information are as follows:
• In some embodiments, the first interface is the user interface 310 shown in FIG. 3A or the user interface 340 shown in FIG. 3B, where the message 3122 in the user interface 310 or the message 342 in the user interface 340 is the first service information, which is the address information representing the geographic location named "Beijing Station".
• In this case, the intent information corresponding to the first service information is: perform navigation for the geographic location "Beijing Station".
  • the first interface is the user interface 350 shown in FIG. 3C , where the location control 352 in the user interface 350 is the first business information, and the first business information is address information representing a place named "Capital Museum".
  • the intent information corresponding to the first business information is: perform navigation for the location "Capital Museum”.
  • the first interface is the user interface 410 shown in FIG. 4A , where the message 414 in the user interface 410 is first service information, and the first service information may represent a video named "My Day”.
  • the intent information corresponding to the first service information is: play a video named "My Day”.
  • the first interface is the user interface 510 shown in FIG. 5 , where the information included in the user interface 510 (such as the name 521 ) is first business information, and the first business information may represent a movie named "Movie 1".
  • the intent information corresponding to the first service information is: play the movie named "movie 1".
  • the first interface is the user interface 610 shown in FIG. 6 , where the information included in the user interface 610 (such as title 611 ) is the first service information, and the first service information may represent a recipe named "crispy pork belly".
  • the intention information corresponding to the first service information is: cooking the dish corresponding to the recipe.
• In some embodiments, the first interface is the user interface 710 shown in FIG. 7A, where the information included in the user interface 710 (such as the exercise 712 and the exercise 713) is the first business information, and the first business information can represent one or more exercises (the test paper named "English test paper" includes at least one exercise, such as the exercise 712 and the exercise 713).
• In this case, the intent information corresponding to the first business information is: practice the one or more exercises.
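• Reading S107 as a single function from interface content to intent information gives roughly the following sketch; the fusion model's predict() call and the IntentInfo container are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntentInfo:
    service: str   # e.g. "navigation", "video_playback", "cooking", "exercise"
    payload: dict  # service-specific parameters extracted from the interface

def recognize_intent(fusion_model, interface_content) -> Optional[IntentInfo]:
    """S107: the interface content of the first interface is the model input,
    the intent information is the output. Returns None when no first service
    information is recognized (in which case no indication information is sent)."""
    result = fusion_model.predict(interface_content)  # hypothetical model call
    if result is None:
        return None
    return IntentInfo(service=result["service"], payload=result["slots"])

# Illustrative outputs matching the examples above:
#   IntentInfo("navigation", {"destination": "Beijing Station"})
#   IntentInfo("video_playback", {"title": "Movie 1"})
#   IntentInfo("cooking", {"recipe": "crispy pork belly"})
```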
• S108: The electronic device 100 sends indication information to the electronic device 200 based on the intent information.
• In some embodiments, the electronic device 100 may perform the intended operation based on the intent information and send the multimedia data corresponding to the intended operation to the electronic device 200; the indication information may instruct the electronic device 200 to output the multimedia data.
• For example, as shown in FIG. 1B, the intent analysis module of the electronic device 100 sends the intent information to the intent trigger module, the intent trigger module executes the intent operation based on the intent information, and the audio and video streams corresponding to the intent operation are sent to the output module of the electronic device 200 for output.
• In other embodiments, the indication information sent by the electronic device 100 to the electronic device 200 includes the intent information, and the indication information may instruct the electronic device 200 to realize the intent information.
• For example, as shown in FIG. 1C, the intent analysis module of the electronic device 100 sends the intent information to the intent trigger module of the electronic device 200.
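• The two embodiments of S108 could be dispatched as in the following sketch; the message fields, the send() method, and the perform_intended_operation() stub are assumptions.

```python
def perform_intended_operation(intent_info):
    """Hypothetical stand-in for the intent trigger module; it would invoke the
    matching service module (e.g. a map application's navigation module) and
    return the resulting multimedia data."""
    raise NotImplementedError

def send_indication(device_200, intent_info, execute_locally):
    if execute_locally:
        # First embodiment (cf. FIG. 1B): the electronic device 100 performs the
        # intended operation itself and streams the resulting multimedia data.
        media = perform_intended_operation(intent_info)
        device_200.send({"type": "output_media", "media": media})
    else:
        # Second embodiment (cf. FIG. 1C): the indication information carries the
        # intent information, and the electronic device 200 realizes it by itself.
        device_200.send({"type": "realize_intent", "intent": intent_info})
```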
• S109: The electronic device 200 outputs multimedia data.
• In some embodiments, when the electronic device 200 receives the multimedia data and the indication information sent by the electronic device 100, it may output the multimedia data according to the indication information, as in the embodiment shown in FIG. 1B.
• In other embodiments, the electronic device 200 receives the indication information sent by the electronic device 100, where the indication information includes the intent information; the electronic device 200 can perform the intent operation based on the intent information and output the multimedia data corresponding to the execution of the intent operation, as in the embodiment shown in FIG. 1C.
• The above intent operation corresponds to the first service information in the first interface. Therefore, the electronic device 200 outputting the multimedia data corresponding to the intended operation may also be referred to as outputting the multimedia data corresponding to the first service information.
• In some embodiments, the first interface is the user interface 310 shown in FIG. 3A or the user interface 340 shown in FIG. 3B, where the message 3122 in the user interface 310 or the message 342 in the user interface 340 is the first service information, which is the address information representing the geographic location named "Beijing Station".
• The intended operation corresponding to the first business information is: set the destination as the location information of the geographic location "Beijing Station" and perform navigation, and the electronic device 200 outputs the multimedia data corresponding to the intended operation, for example, the user interface 330 shown in the lower figure of FIG. 3A; for the specific scenario description, refer to the description of FIG. 3A or FIG. 3B.
  • the first interface is the user interface 350 shown in FIG. 3C , where the location control 352 in the user interface 350 is the first business information, and the first business information is address information representing a place named "Capital Museum".
• The intended operation corresponding to the first business information is: set the destination as the location information of the place "Capital Museum" and perform navigation. The multimedia data output by the electronic device 200 is similar to the user interface 330 shown in the lower figure of FIG. 3A, the difference being that the navigation destination is different; for the specific scenario description, refer to the description of FIG. 3C.
  • the first interface is the user interface 410 shown in FIG. 4A , where the message 414 in the user interface 410 is first service information, and the first service information may represent a video named "My Day”.
• The intended operation corresponding to the first business information is: play the video named "My Day", and the electronic device 200 outputs the multimedia data corresponding to the intended operation, for example, the user interface 430 shown in the lower figure of FIG. 4B; for the specific scenario description, see the description of FIG. 4B.
  • the first interface is the user interface 510 shown in FIG. 5 , where the information included in the user interface 510 (such as the name 521 ) is first business information, and the first business information may represent a movie named "Movie 1".
  • the intended operation corresponding to the first service information is: play the movie named "Movie 1", and the electronic device 200 outputs the multimedia data corresponding to the intended operation.
  • the first interface is the user interface 610 shown in FIG. 6 , where the information included in the user interface 610 (such as title 611 ) is the first business information, and the first business information may represent a recipe named "crispy pork belly".
  • the intended operation corresponding to the first business information is: work according to the recipe, and the electronic device 200 outputs the multimedia data corresponding to the intended operation.
• In some embodiments, the first interface is the user interface 710 shown in FIG. 7A, where the information included in the user interface 710 (such as the exercise 712 and the exercise 713) is the first business information, and the first business information can represent one or more exercises (the test paper named "English test paper" includes at least one exercise, such as the exercise 712 and the exercise 713).
  • the intended operation corresponding to the first business information is: display the questions in the one or more exercises (without displaying the answer), and the electronic device 200 outputs the multimedia data corresponding to the intended operation.
• For example, the electronic device 200 displays the user interface 730 shown in the lower figure of FIG. 7A; for the specific scenario description, refer to the description of FIG. 7A.
• In some embodiments, if the first interface does not include the first business information, the electronic device 100 cannot recognize the intent information corresponding to the first business information and therefore will not send the indication information to the electronic device 200, and the electronic device 200 will not perform the intended operation corresponding to the first business information.
• For example, the electronic device 100 and the electronic device 200 keep displaying the current interface unchanged, and no transfer of the service occurs.
• For example, if the user interface 410 (the first interface) shown in FIG. 4A only includes the message 411 and the message 413, and does not include the message 412 (address information) or the message 414 (video information), the electronic device 100 and the electronic device 200 can keep displaying the current interface unchanged.
• Examples of the display method shown in FIG. 9 can be found in FIGS. 3A-3C, 4A-4C, 5-6, and 7A-7B.
• In the above method, when the electronic device 100 receives the first user operation, it can perform intent recognition based on the currently displayed user interface and realize the recognized intent through the electronic device 200, without the need for the user to manually trigger the realization of the intent; this reduces user operations and makes the interaction more efficient and convenient.
• In other embodiments, the electronic device 100 may identify the first interface to obtain an interface recognition result.
• Optionally, the electronic device 100 may obtain the interface recognition result based on an interface analysis model; the manner in which the electronic device 100 obtains the interface analysis model is similar to the manner in which the fusion model shown in FIG. 9 is obtained.
• The electronic device 100 can then perform intent recognition based on the interface recognition result and obtain the intent information.
• Optionally, the electronic device 100 can obtain the intent information based on an intent parsing model; the manner in which the electronic device 100 acquires the intent parsing model is similar to the manner in which the fusion model shown in FIG. 9 is acquired.
• In other embodiments, the electronic device 100 may not execute S107-S108; after receiving the first user operation, the electronic device 100 may identify the first interface to obtain the interface identification result, and send the interface identification result and the indication information to the electronic device 200, where the indication information may instruct the electronic device 200 to realize the intent information corresponding to the interface identification result.
  • the electronic device 200 may perform intent recognition based on the interface recognition result and obtain intent information, then execute the intent operation based on the intent information, and output multimedia data corresponding to the intent operation.
• It can be understood that, in this case, the electronic device 100 includes the detection module and the interface parsing module shown in FIG. 1B, and the electronic device 200 includes the intent parsing module and the intent trigger module shown in FIG. 1B.
• Optionally, the electronic device 200 may obtain the intent information based on the intent parsing model; the manner in which the electronic device 200 obtains the intent parsing model is similar to the manner in which the electronic device 100 obtains the fusion model shown in FIG. 9.
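• The split pipeline can be sketched as two chained models; both model objects and their predict() methods are assumptions, mirroring the fused recognize_intent() sketch above.

```python
def recognize_intent_two_stage(interface_model, intent_model, interface_content):
    """Alternative to the fused S107: first obtain the interface recognition
    result (structure plus text), then run intent recognition on it."""
    recognition = interface_model.predict(interface_content)
    # e.g. {"structure": "location_card", "text": "Beijing Station"}
    return intent_model.predict(recognition)
    # e.g. IntentInfo("navigation", {"destination": "Beijing Station"})
```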
• In other embodiments, the electronic device 100 may not execute S107-S108; after receiving the first user operation, the electronic device 100 may send the displayed interface content and the indication information to the electronic device 200, where the indication information may instruct the electronic device 200 to realize the intent information corresponding to the interface content.
  • the electronic device 200 may execute S107 in FIG. 9 to obtain the intent information, then execute the intent operation based on the intent information, and output multimedia data corresponding to the intent operation.
• It can be understood that, in this case, the electronic device 100 includes the detection module shown in FIG. 1B, and the electronic device 200 includes the interface parsing module, the intent parsing module, and the intent trigger module shown in FIG. 1B.
  • the electronic device 200 may obtain the interface identification result based on the interface analysis model.
  • the electronic device 200 may obtain the intent information based on the intent analysis model.
• The manner in which the electronic device 200 obtains the interface analysis model and/or the intent analysis model is similar to the manner in which the electronic device 100 obtains the fusion model shown in FIG. 9.
  • the electronic device 200 may obtain the intent information according to the interface content based on the fusion model.
  • the manner in which the electronic device 200 obtains the fusion model is similar to the manner in which the electronic device 100 obtains the fusion model shown in FIG. 9 .
  • FIG. 10 is a schematic flowchart of another display method provided by an embodiment of the present application.
• The first device in this method may be the above-mentioned electronic device 100, and the second device in this method may be the above-mentioned electronic device 200.
  • the method may include, but is not limited to, the following steps:
• S201: The first device displays a first interface.
• The first interface includes first information, and the first information is related to a first service.
• For the first information, refer to the example of the first service information in S102 of FIG. 9.
• S202: The first device receives a first user operation.
• S202 is similar to S103 in FIG. 9; for details, refer to the description of S103 in FIG. 9.
• S203: In response to the first user operation, the first device identifies the first interface to determine intent information.
  • the intent information indicates to execute the first instruction, and the first instruction is used to realize the first service.
• In some embodiments, the first instruction is an instruction obtained by parsing the intent information; in other embodiments, the first instruction is an instruction included in the intent information.
• In some embodiments, the intent information includes the first information; for example, the first information is information indicating a first location, and the intent information indicates that navigation is performed for the first location.
• In other embodiments, the intent information includes information related to the first information; for example, the first information is information indicating a first video, the way to play the first video (such as the source of the first video) can be acquired according to the first information, and the intent information indicates to play the first video in the manner obtained above.
• For the description of the first device identifying the first interface to determine the intent information, refer to the description of S107 in FIG. 9.
• S204: The first device sends the intent information to the second device.
• S205: The second device executes the first instruction according to the intent information, and generates second information.
• The second device executing the first instruction may correspond to the above-mentioned execution of the intended operation; for the intended operation, refer to the intended operation shown in FIG. 9 above.
• In some embodiments, the second information is multimedia data generated by executing the first instruction, such as audio data, video data, image data, and the like.
• S206: The second device displays a second interface according to the second information.
  • the second device may output the second information, such as playing audio data included in the second information, displaying image data included in the second information, or playing video data included in the second information.
• For an example of the second device displaying the second interface, refer to the examples of the electronic device 200 outputting the multimedia data corresponding to the intended operation in the description of the intended operations illustrated in FIG. 9 above.
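• End to end, S201-S206 could look like the following sketch, reusing the hypothetical recognize_intent() from the earlier S107 sketch; the device methods are likewise assumptions.

```python
# First device side (S201-S204).
def on_first_user_operation(first_device, second_device):
    intent = recognize_intent(first_device.fusion_model,
                              first_device.current_interface())   # S203
    if intent is not None:
        second_device.send({"type": "intent", "intent": intent})  # S204

# Second device side (S205-S206).
def on_intent_received(second_device, message):
    second_info = second_device.execute_instruction(message["intent"])  # S205: second information
    second_device.display(second_info)                                  # S206: second interface
```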
• In some embodiments, the first information is information indicating a first location, for example, the message 3122 in the user interface 310 shown in FIG. 3A or the message 342 in the user interface 340 shown in FIG. 3B, where the indicated first location is the geographic location "Beijing Station"; another example is the location control 352 in the user interface 350 shown in FIG. 3C, where the first location indicated by the location control 352 is the place "Capital Museum".
  • the first service is a navigation service.
• The second information is the display information generated by executing the navigation operation for the first location, for example, the multimedia data generated when the destination is set as the location information of the geographic location "Beijing Station" and navigation is performed; in this case, the second interface displayed by the second device according to the second information is the user interface 330 shown in the lower figure of FIG. 3A. Another example is the multimedia data generated when the destination is set as the location information of the place "Capital Museum" and navigation is performed.
• For specific scenario descriptions, refer to the descriptions of FIG. 3A, FIG. 3B, or FIG. 3C.
• In some embodiments, the first information is information indicating a first video, for example, the message 414 in the user interface 410 shown in FIG. 4A, where the name of the first video indicated by the message 414 is "My Day"; another example is the information included in the user interface 510 shown in FIG. 5 (such as the name 521), which indicates that the name of the first video is "Movie 1".
  • the first service is a video playing service.
• The second information is the display information generated by playing the first video. For example, for the multimedia data generated by playing the video "My Day", the second interface displayed by the second device according to the second information is the user interface 430 shown in the lower figure of FIG. 4B; for the multimedia data generated by playing the video "Movie 1", the second interface displayed by the second device according to the second information is the user interface 520 shown in the lower figure of FIG. 5.
• For specific scenario descriptions, refer to the descriptions of FIG. 4B or FIG. 5.
• In some embodiments, the first information is information indicating a first recipe, for example, the information included in the user interface 610 shown in FIG. 6 (such as the title 611), where the indicated first recipe is named "crispy pork belly".
• Correspondingly, the first service is a cooking service.
• The second information is the display information generated by realizing the cooking service corresponding to the first recipe, for example, the multimedia data generated by working according to the recipe "crispy pork belly"; in this case, the second interface displayed by the second device according to the second information is the user interface 630 shown in the lower figure of FIG. 6. For specific scenario descriptions, refer to the description of FIG. 6.
  • the first information is information indicating the first question and the answer to the first question, for example, the exercise 712 in the user interface 710 shown in FIG. 7A , and the exercise 712 includes a question 712A and an answer 712B.
  • the first service is the test paper generation service.
• Here, a test paper is taken as an example of information that includes at least one question and does not include answers.
  • the second interface includes the first question, but does not include the answer to the first question.
• The second interface is, for example, the user interface 730 shown in the lower figure of FIG. 7A; the user interface 730 includes the above-mentioned question 712A (as the question information 733 in the user interface 730), but does not include the above-mentioned answer 712B.
• For specific scenario descriptions, refer to the description of FIG. 7A.
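• The test paper generation service boils down to keeping each question and dropping its answer; the following is a runnable sketch under an assumed exercise structure.

```python
def generate_test_paper(exercises):
    """Keep each question (e.g. the topic 712A) and drop its answer
    (e.g. the answer 712B), so the second interface shows questions only."""
    return [{"question": ex["question"]} for ex in exercises]

exercises = [
    {"question": "Topic 712A ...", "answer": "Answer 712B ..."},
    {"question": "Topic 713A ...", "answer": "Answer 713B ..."},
]
print(generate_test_paper(exercises))
# [{'question': 'Topic 712A ...'}, {'question': 'Topic 713A ...'}]
```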
• In some embodiments, the first interface further includes third information, the third information is related to a second service, and the description of the third information and the second service is similar to the above description of the first information and the first service.
• In this case, S203 may specifically be: the first device identifies the first information to determine fourth information, identifies the third information to determine fifth information, and, based on a first preset rule, determines from the fourth information and the fifth information that the above intent information is the fourth information.
• The fourth information indicates the execution of the above-mentioned first instruction, the fifth information indicates the execution of a second instruction, and the second instruction is used to realize the second service.
  • the description of the second instruction is similar to the description of the above-mentioned first instruction.
  • the first preset rule may include: the device type of the second device is a preset device type, which can be understood to mean that the first device can determine the intention information to be realized according to the device type of the connected second device.
• For example, the first interface is a chat interface, and the first information and the third information are respectively the message 412 and the message 414 in the user interface 410 shown in the upper diagram of FIG. 4A, where the first information is location information and the third information is video information.
• The first service corresponding to the first information is a navigation service, and the fourth information indicates performing navigation for the geographic location "Beijing Station"; the second service corresponding to the third information is a video playback service, and the fifth information indicates playing the video named "My Day".
• Assume that the first device is the electronic device 100 (a smartphone) and the second device is the electronic device 200. If the second device is a vehicle-mounted computer, the first device can determine that the intent information is the above-mentioned fourth information; for a scenario example, see FIG. 4A. If the second device is a smart TV, the first device can determine that the intent information is the above-mentioned fifth information; for a scenario example, see FIG. 4B.
• In some embodiments, the first preset rule may include: the services supported by the second device include the first service. For example, the first service is a navigation service; if the second device is a device that has a map application installed and can execute the navigation service based on the map application, the first device may determine that the above intent information is the fourth information.
  • the first preset rule may include: the priority of the first service is higher than the priority of the second service.
  • the first information and the third information are instant messaging messages, and the first preset rule may include: the receiving time of the first information is later than the receiving time of the third information.
• For example, the first interface is a chat interface, and the first information and the third information are respectively the message 412 and the message 414 in the user interface 410 shown in the upper figure of FIG. 4A. Since the message 414 is received later, the first device may determine that the intent information is the fifth information corresponding to the message 414, and the fifth information indicates playing the video named "My Day"; for a scenario example, see FIG. 4B.
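• The first preset rule can be applied as an ordered filter over the candidate intents; the candidate container, the device/service tables, and the ordering of the checks below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class IntentCandidate:
    service: str        # e.g. "navigation" or "video_playback"
    priority: int       # assumed service priority
    received_at: float  # receiving time of the corresponding instant message

PRESET_DEVICE_TYPE = {  # assumed mapping: service -> preset device type
    "navigation": "vehicle_mounted_computer",
    "video_playback": "smart_tv",
}

def choose_intent(candidates, device_type, supported_services):
    """Pick the intent information (fourth vs. fifth information) using the
    first preset rule: device type, supported services, service priority,
    and, for instant messaging messages, the later receiving time."""
    # Rule: the device type of the second device is the preset device type.
    for c in candidates:
        if PRESET_DEVICE_TYPE.get(c.service) == device_type:
            return c
    # Rule: the services supported by the second device include the service.
    supported = [c for c in candidates if c.service in supported_services]
    if supported:
        candidates = supported
    # Rules: higher service priority first, then the later receiving time.
    return max(candidates, key=lambda c: (c.priority, c.received_at))
```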
• In some embodiments, the method shown in FIG. 10 is applied to the communication system 10 shown in FIG. 1C, where the first device is the electronic device 100 and the second device is the electronic device 200; for details, refer to the description of FIG. 1C.
  • FIG. 11 is a schematic flowchart of another display method provided by an embodiment of the present application.
• The first device in this method may be the above-mentioned electronic device 100, and the second device in this method may be the above-mentioned electronic device 200.
  • the method may include, but is not limited to, the following steps:
• S301: The first device displays a first interface.
• S302: The first device receives a first user operation.
• S303: In response to the first user operation, the first device identifies the first interface to determine intent information.
• S301-S303 are consistent with S201-S203 in FIG. 10; for details, refer to the description of S201-S203 in FIG. 10.
• S304: The first device executes the first instruction according to the intent information, and generates second information.
• S304 is similar to S205 in FIG. 10, except that the executing device in S304 is the first device rather than the second device.
• S305: The first device sends the second information to the second device.
• S306: The second device displays a second interface according to the second information.
• S306 is consistent with S206 in FIG. 10; for details, refer to the description of S206 in FIG. 10.
• The example in FIG. 11 is similar to the example in FIG. 10, except that in FIG. 11 it is the first device, rather than the second device, that executes the first instruction and generates the second information; for details, refer to the example in FIG. 10.
• In some embodiments, the method shown in FIG. 11 is applied to the communication system 10 shown in FIG. 1B, where the first device is the electronic device 100 and the second device is the electronic device 200; for details, refer to the description of FIG. 1B.
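• The FIG. 11 variant differs only on the sending side; a short sketch with the same hypothetical model call and device methods as above:

```python
def on_first_user_operation_fig11(first_device, second_device):
    """S303: recognize the intent; S304: execute the first instruction on the
    first device itself; S305: send the generated second information, which the
    second device only has to display (S306)."""
    intent = recognize_intent(first_device.fusion_model,
                              first_device.current_interface())    # S303
    if intent is None:
        return                                                     # no first service information
    second_info = first_device.execute_instruction(intent)         # S304: second information
    second_device.send({"type": "display", "media": second_info})  # S305
```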
• In other embodiments, the device that identifies the first interface to determine the intent information may not be the first device but the second device. For example, the first device responds to the first user operation by sending multimedia data (such as image data) related to the first interface to the second device, and the second device performs intent identification based on the received data.
• In the embodiments of the present application, the processor may include but is not limited to at least one of the following: a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller unit (MCU), an artificial intelligence processor, or other computing devices that run software; each computing device may include one or more cores for executing software instructions to perform calculations or processing.
• The processor can be a separate semiconductor chip, or it can be integrated with other circuits into a semiconductor chip; for example, it can form a system on chip (SoC) with other circuits (such as codec circuits, hardware acceleration circuits, or various bus and interface circuits).
• The processor can further include a necessary hardware accelerator, such as a field programmable gate array (FPGA), a programmable logic device (PLD), or a logic circuit that implements dedicated logic operations.
• The hardware can be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a non-integrated discrete device, which can run the necessary software or execute the above method flows without depending on software.
• All or part of the processes in the above method embodiments can be completed by a computer program instructing related hardware. The computer program can be stored in a computer-readable storage medium; when the computer program is executed, it may include the processes of the foregoing method embodiments. The aforementioned storage medium includes various media capable of storing computer program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a display method and an electronic device, including: a first device displays a first interface, where the first interface includes first information and the first information is related to a first service; the first device receives a first user operation and, in response to the first user operation, identifies the first interface to determine intent information, where the intent information indicates execution of a first instruction and the first instruction is used to realize the first service; the first device sends the intent information to a second device, where the intent information is used by the second device to execute the first instruction and generate second information, and the second information is used by the second device to display a second interface; or, the first device sends the second information to the second device, where the second information is generated by the first device executing the first instruction and is used by the second device to display the second interface. The embodiments of the present application can simplify the interaction mode, reduce user operations, and improve efficiency in multi-device interconnection scenarios.

Description

A display method and electronic device
This application claims priority to the Chinese patent application No. 202111493706.2, entitled "A display method and electronic device", filed with the Chinese Patent Office on December 08, 2021, and to the Chinese patent application No. 202210093485.8, entitled "A display method and electronic device", filed with the Chinese Patent Office on January 26, 2022, the entire contents of which are incorporated herein by reference.
技术领域
本申请涉及计算机技术领域,尤其涉及一种显示方法及电子设备。
背景技术
多个设备之间互相连接和通信的场景(例如分布式场景)下,用户不仅可以独立地使用其中任意一个设备,而且可以同时使用多个设备(这多个设备的业务可以有联系,例如将智能手机上的视频投屏到智能电视上播放)。但是,这种场景下的电子设备缺乏简单高效的交互方式,用户操作繁琐复杂,例如,智能手机和车载电脑连接的场景下,若用户通过智能手机接收到包括位置信息的通讯消息,用户需要打开车载电脑上的地图应用,将目的地设置为该位置信息指示的地点,以实现针对该位置信息进行导航,操作繁琐,若用户处于行车过程还会影响驾驶的安全性,用户体验不佳。
发明内容
本申请实施例公开了一种显示方法及电子设备,能够在多设备互联的场景下简化交互方式,减少用户操作,提升效率。
第一方面,本申请实施例提供了一种显示方法,应用于第一设备,上述第一设备和第二设备连接,该方法包括:显示第一界面,上述第一界面包括第一信息,上述第一信息和第一业务相关;接收第一用户操作;响应于第一用户操作,识别上述第一界面以确定意图信息,上述意图信息指示执行第一指令,上述第一指令用于实现上述第一业务;向上述第二设备发送上述意图信息,上述意图信息用于上述第二设备执行上述第一指令并生成第二信息,上述第二信息用于上述第二设备显示第二界面。
在一些实施例中,上述第一指令是根据上述意图信息解析得到的指令,在另一些实施例中,上述第一指令是上述意图信息包括的指令。
在一些实施例中,上述第二信息用于上述第二设备显示上述第二界面和播放第一音频。在另一些实施例中,上述第二信息用于上述第二设备播放上述第一音频,上述第二设备不显示上述第二界面。
在上述方法中,第一设备接收到第一用户操作时,可以基于当前显示的第一界面进行用户意图的识别,并通过第二设备执行第一指令,第一指令用于实现识别出的意图信息对应的第一业务,无需用户手动操作第一设备或第二设备来触发实现第一业务,减少用户操作,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一界面还包括第三信息,上述第三信息和第二业务相关;上述识别上述第一界面以确定意图信息,包括:识别上述第一信息以确定上述第四信息, 识别上述第三信息以确定上述第五信息,上述第四信息指示执行上述第一指令,上述第五信息指示执行第二指令,上述第二指令用于实现上述第二业务;基于第一预设规则,从上述第四信息和上述第五信息中确定上述意图信息为上述第四信息,上述第一预设规则包括以下至少一项:上述第二设备的设备类型为预设设备类型、上述第二设备支持的业务包括上述第一业务、上述第一业务的优先级高于上述第二业务的优先级。
在一些实施例中,上述第一信息和上述第三信息为即时通讯消息,上述第一预设规则包括上述第一信息的接收时间晚于上述第三信息的接收时间。
在上述方法中,第一设备还可以使用第一预设规则来确定当前场景下更符合用户需求的意图信息,进一步提升交互的准确性,用户体验感更好。
在一种可能的实现方式中,上述第一信息为位置信息,上述第一业务为导航业务,上述第二业务和上述第一业务不同,上述第一预设规则包括上述第二设备的设备类型为上述预设设备类型,上述预设设备类型为车载设备。
在一种可能的实现方式中,上述第一信息为视频信息,上述第一业务为视频播放业务,上述第二业务和上述第一业务不同,上述第一预设规则包括上述第二设备的设备类型为上述预设设备类型,上述预设设备类型包括智能电视、智慧屏。
在一种可能的实现方式中,上述第一信息为指示第一位置的信息,上述第一业务为导航业务,上述第二信息是执行针对上述第一位置的导航操作生成的显示信息。
在一些实施例中,上述第一设备为智能手机,上述第二设备为车载设备。
在上述方法中,第一设备显示包括位置信息的第一界面时,若接收到第一用户操作,可以通过第二设备实现针对上述位置信息的导航业务,无需用户手动在第二设备上输入上述位置信息和手动触发上述导航操作,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一信息为指示第一视频的信息,上述第一业务为视频播放业务,上述第二信息是播放上述第一视频生成的显示信息。
在一些实施例中,上述第一设备为智能手机,上述第二设备为智能电视。
在上述方法中,第一设备显示包括视频信息的第一界面时,若接收到第一用户操作,可以通过第二设备实现播放上述视频信息的业务,无需用户手动在第二设备上查找上述视频信息和手动触发视频播放业务,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一信息为指示第一菜谱的信息,上述第一业务为烹饪业务,上述第二信息是实现上述第一菜谱对应的烹饪业务生成的显示信息。
在一些实施例中,上述第一设备为智能手机,上述第二设备为智能料理机。
在上述方法中,第一设备显示包括菜谱信息的第一界面时,若接收到第一用户操作,可以通过第二设备实现该菜谱信息对应的烹饪业务,无需用户手动在第二设备上查找上述菜谱信息和手动触发烹饪业务,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一信息为指示第一题目和上述第一题目的答案的信息,上述第一业务是试卷生成业务,上述第二界面包括上述第一题目,不包括上述第一题目的答案。
在一些实施例中,上述第一设备为智能手机,上述第二设备为平板电脑或学习机。
在上述方法中,第一设备显示包括题目和答案的第一界面时,若接收到第一用户操作,可以通过第二设备显示题目但不显示答案,让孩子可以使用第二设备练习题目,无需家长手动在第二设备上查找上述题目和手动触发试卷生成业务,交互方式便捷精准,能够很好地满足家长和孩子的需求。
在一种可能的实现方式中,上述第一用户操作为摇一摇操作、甩动操作、指关节敲击操作、指关节滑动操作、多指敲击操作或多指滑动操作等。
在上述方法中,第一用户操作简单方便,用户不再需要执行繁琐的操作才能触发实现第一业务,交互门槛低,用户使用更加方便。
第二方面,本申请提供了又一种显示方法,应用于第一设备,上述第一设备和第二设备连接,该方法包括:显示第一界面,上述第一界面包括第一信息,上述第一信息和第一业务相关;接收第一用户操作;响应于第一用户操作,识别上述第一界面以确定意图信息;根据上述意图信息执行第一指令,生成第二信息,上述第一指令用于实现上述第一业务;向上述第二设备发送上述第二信息,上述第二信息用于上述第二设备显示第二界面。
在一些实施例中,上述第一指令是根据上述意图信息解析得到的指令,在另一些实施例中,上述第一指令是上述意图信息包括的指令。
在一些实施例中,上述第二信息用于上述第二设备显示上述第二界面和播放第一音频。在另一些实施例中,上述第二信息用于上述第二设备播放上述第一音频,上述第二设备不显示上述第二界面。
在上述方法中,第一设备接收到第一用户操作时,可以基于当前显示的第一界面进行用户意图的识别,以及执行识别出的意图信息指示的第一指令,并且通过第二设备输出执行第一指令生成的多媒体数据,可以理解为是通过第二设备实现第一指令对应的第一业务,无需用户手动操作第一设备或第二设备来触发实现第一业务,减少用户操作,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一界面还包括第三信息,上述第三信息和第二业务相关;上述识别上述第一界面以确定意图信息,包括:识别上述第一信息以确定上述第四信息,识别上述第三信息以确定上述第五信息,上述第四信息指示执行上述第一指令,上述第五信息指示执行第二指令,上述第二指令用于实现上述第二业务;基于第一预设规则,从上述第四信息和上述第五信息中确定上述意图信息为上述第四信息,上述第一预设规则包括上述第二设备的设备类型为预设设备类型,和/或上述第一业务的优先级高于上述第二业务的优先级。
在一些实施例中,上述第一信息和上述第三信息为即时通讯消息,上述第一预设规则包括上述第一信息的接收时间晚于上述第三信息的接收时间。
在上述方法中,第一设备还可以使用第一预设规则来确定当前场景下更符合用户需求的意图信息,进一步提升交互的准确性,用户体验感更好。
在一种可能的实现方式中,上述第一信息为位置信息,上述第一业务为导航业务,上述第二业务和上述第一业务不同,上述第一预设规则包括上述第二设备的设备类型为上述预设设备类型,上述预设设备类型为车载设备。
在一种可能的实现方式中,上述第一信息为指示第一位置的信息,上述第一业务为导航业务,上述第二信息是执行针对上述第一位置的导航操作生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一视频的信息,上述第一业务为视频播放业务,上述第二信息是播放上述第一视频生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一菜谱的信息,上述第一业务为烹饪业务,上述第二信息是实现上述第一菜谱对应的烹饪业务生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一题目和上述第一题目的答案的信息,上述第一业务是试卷生成业务,上述第二界面包括上述第一题目,不包括上述第一题目的答 案。
在一种可能的实现方式中,上述第一用户操作为摇一摇操作、甩动操作、指关节敲击操作、指关节滑动操作、多指敲击操作或多指滑动操作等。
第三方面,本申请提供了又一种显示方法,应用于第二设备,上述第二设备和第一设备连接,该方法包括:接收上述第一设备发送的意图信息,上述意图信息是上述第一设备接收到第一用户操作的情况下识别显示的第一界面确定的,上述第一界面包括第一信息,上述第一信息和第一业务相关;根据上述意图信息执行第一指令,生成第二信息,上述第一指令用于实现上述第一业务;根据上述第二信息显示第二界面。
在一些实施例中,上述第一指令是根据上述意图信息解析得到的指令,在另一些实施例中,上述第一指令是上述意图信息包括的指令。
在一些实施例中,上述第二信息用于上述第二设备显示上述第二界面和播放第一音频。在另一些实施例中,上述第二信息用于上述第二设备播放上述第一音频,上述第二设备不显示上述第二界面。
在上述方法中,第一设备接收到第一用户操作时,可以基于当前显示的第一界面进行用户意图的识别,并向第二设备发送识别出的意图信息。第二设备可以执行意图信息指示的第一指令以实现第一业务,无需用户手动操作第一设备或第二设备来触发实现第一业务,减少用户操作,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第一信息为指示第一位置的信息,上述第一业务为导航业务,上述第二信息是执行针对上述第一位置的导航操作生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一视频的信息,上述第一业务为视频播放业务,上述第二信息是播放上述第一视频生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一菜谱的信息,上述第一业务为烹饪业务,上述第二信息是实现上述第一菜谱对应的烹饪业务生成的显示信息。
在一种可能的实现方式中,上述第一信息为指示第一题目和上述第一题目的答案的信息,上述第一业务是试卷生成业务,上述第二界面包括上述第一题目,不包括上述第一题目的答案。
在一种可能的实现方式中,上述第一用户操作为摇一摇操作、甩动操作、指关节敲击操作、指关节滑动操作、多指敲击操作或多指滑动操作等。
第四方面,本申请提供了又一种显示方法,应用于第二设备,上述第二设备和第一设备连接,该方法包括:接收上述第一设备发送的第一信息,上述第一信息是执行第一指令生成的信息,上述第一指令用于实现第一业务,上述第一指令是意图信息指示执行的指令,上述意图信息是上述第一设备接收到第一用户操作的情况下识别显示的第一界面确定的,上述第一界面包括第二信息,上述第二信息和上述第一业务相关;根据上述第一信息显示第二界面。
在一些实施例中,上述第一指令是根据上述意图信息解析得到的指令,在另一些实施例中,上述第一指令是上述意图信息包括的指令。
在一些实施例中,上述第二信息用于上述第二设备显示上述第二界面和播放第一音频。在另一些实施例中,上述第二信息用于上述第二设备播放上述第一音频,上述第二设备不显示上述第二界面。
在上述方法中,第一设备接收到第一用户操作时,可以基于当前显示的第一界面进行用 户意图的识别,以及执行识别出的意图信息指示的第一指令,并且通过第二设备输出执行第一指令生成的多媒体数据,可以理解为是通过第二设备实现第一指令对应的第一业务,无需用户手动操作第一设备或第二设备来触发实现第一业务,减少用户操作,让多设备互联的场景下的交互方式更加高效便捷。
在一种可能的实现方式中,上述第二信息为指示第一位置的信息,上述第一业务为导航业务,上述第一信息是执行针对上述第一位置的导航操作生成的显示信息。
在一种可能的实现方式中,上述第二信息为指示第一视频的信息,上述第一业务为视频播放业务,上述第一信息是播放上述第一视频生成的显示信息。
在一种可能的实现方式中,上述第二信息为指示第一菜谱的信息,上述第一业务为烹饪业务,上述第一信息是实现上述第一菜谱对应的烹饪业务生成的显示信息。
在一种可能的实现方式中,上述第二信息为指示第一题目和上述第一题目的答案的信息,上述第一业务是试卷生成业务,上述第二界面包括上述第一题目,不包括上述第一题目的答案。
在一种可能的实现方式中,上述第一用户操作为摇一摇操作、甩动操作、指关节敲击操作、指关节滑动操作、多指敲击操作或多指滑动操作等。
第五方面,本申请实施例提供了一种电子设备,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得通信装置执行上述任一方面任一项可能的实现方式中的显示方法。
第六方面,本申请实施例提供了一种计算机存储介质,该计算机存储介质存储有计算机程序,该计算机程序被处理器执行时,实现执行上述任一方面任一项可能的实现方式中的显示方法。
第七方面,本申请实施例提供了一种计算机程序产品,当该计算机程序产品在电子设备上运行时,使得该电子设备执行上述任一方面任一项可能的实现方式中的显示方法。
第八方面,本申请实施例提供一种电子设备,该电子设备包括执行本申请任一实施例所介绍的方法或装置。上述电子设备例如为芯片。
应当理解的是,本申请中对技术特征、技术方案、有益效果或类似语言的描述并不是暗示在任意的单个实施例中可以实现所有的特点和优点。相反,可以理解的是对于特征或有益效果的描述意味着在至少一个实施例中包括特定的技术特征、技术方案或有益效果。因此,本说明书中对于技术特征、技术方案或有益效果的描述并不一定是指相同的实施例。进而,还可以任何适当的方式组合本实施例中所描述的技术特征、技术方案和有益效果。本领域技术人员将会理解,无需特定实施例的一个或多个特定的技术特征、技术方案或有益效果即可实现实施例。在其他实施例中,还可在没有体现所有实施例的特定实施例中识别出额外的技术特征和有益效果。
附图说明
以下对本申请实施例用到的附图进行介绍。
图1A是本申请实施例提供的一种通信系统10的架构示意图;
图1B是本申请实施例提供的又一种通信系统10的架构示意图;
图1C是本申请实施例提供的又一种通信系统10的架构示意图;
图2A是本申请实施例提供的一种电子设备100的硬件结构示意图;
图2B是本申请实施例提供的一种电子设备200的硬件结构示意图;
图2C是本申请实施例提供的一种网络设备300的硬件结构示意图;
图2D是本申请实施例提供的一种电子设备100的软件架构示意图;
图3A-图3C是本申请实施例提供的一些用户界面实施例的示意图;
图4A-图4B是本申请实施例提供的又一些用户界面实施例的示意图;
图4C是本申请实施例提供的又一种用户界面实施例的示意图;
图5是本申请实施例提供的又一种用户界面实施例的示意图;
图6是本申请实施例提供的又一种用户界面实施例的示意图;
图7A是本申请实施例提供的又一种用户界面实施例的示意图;
图7B是本申请实施例提供的又一种用户界面实施例的示意图;
图8是本申请实施例提供的一种用户操作的示意图;
图9是本申请实施例提供的一种显示方法的流程示意图;
图10是本申请实施例提供的又一种显示方法的流程示意图;
图11是本申请实施例提供的又一种显示方法的流程示意图。
具体实施方式
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例可以应用于多个设备之间互相连接和通信的场景,例如分布式场景。在这种场景下,用户可以同时使用多个设备,此时这多个设备的业务可以有联系,例如将智能手机上的视频投屏到智能电视上播放。但是,这种场景下的电子设备缺乏简单高效的交互方式,用户操作繁琐复杂,具体示例如下所示:
示例一:智能手机和车载电脑连接的场景下,若用户通过智能手机接收到包括位置信息的通讯消息,用户需要打开车载电脑上的地图应用,将目的地设置为该位置信息指示的地点,以实现针对该位置信息进行导航,操作繁琐,若用户处于行车过程还会影响驾驶的安全性,用户体验不佳。
示例二:智能手机和智能电视连接的场景下,若用户在智能手机上看到某个影片的信息(例如简介、影评等),想要在智能电视上观看该影片,用户需要在智能电视上搜索该影片以进行播放,或者用户需要先打开智能手机上的视频应用和该影片的播放界面,然后操作投屏控件,并选择要投屏的设备(即智能电视),以实现将该影片投屏到智能电视上观看,操作繁 琐,交互效率低。
示例三:智能手机和智能料理机连接的场景下,若用户在智能手机上看到某个菜谱的信息,想要使用智能料理机制作相应的料理,用户需要在智能料理机上搜索该菜谱以进行烹饪,操作繁琐,交互效率低。
示例四:智能手机和平板电脑连接的场景下,孩子可以使用平板电脑或学习机进行学习,家长可以使用智能手机查找相关习题,若家长在智能手机上查找到想让孩子解答的习题,需要在平板电脑或学习机上重新查找习题,操作繁琐,交互效率低。
本申请实施例提供了一种显示方法,第一设备可以响应于用户操作,识别当前显示的第一界面,并确定出意图信息,第一设备可以通过第二设备实现该意图信息指示的业务。无需用户手动触发第二设备实现该意图信息指示的业务,提供了一种应用在多设备互联的场景下的高效便捷的交互方式,减少用户操作,提升用户体验感。
例如,智能手机(第一设备)可以响应于摇一摇操作(用户操作),识别包括位置卡片(以卡片形式展示地理位置的消息)的聊天界面(第一界面),并确定出意图信息,该意图信息指示针对该位置卡片表征的地点进行导航的导航业务,该意图信息可以是基于位置卡片得到的。智能手机可以基于该意图信息指示车载电脑执行该导航业务,可选地执行操作:在地图应用中将该位置卡片表征的地点设置为目的地并进行导航。
下面介绍本申请实施例涉及的一种通信系统10。
图1A示例性示出了本申请实施例提供的一种通信系统10的架构示意图。
如图1A所示,通信系统10可以包括电子设备100、电子设备200和网络设备300,其中:
在一些实施例中,电子设备100可以通过有线方式和/或无线方式连接至少一个电子设备200,有线方式例如包括高清多媒体接口(high definition multimedia interface,HDMI)、通用串行总线(universal serial bus,USB)、同轴电缆、光纤等,无线方式例如包括蓝牙、无线保真(wireless fidelity,Wi-Fi)、近距离无线通信技术(near field communication,NFC)、超宽带(ultra wide band,UWB)等。电子设备100和电子设备200可以通过连接线路(例如蓝牙、Wi-Fi)进行通信,这种情况下,电子设备100和电子设备200之间传输信息的速率较快,可传输的信息较多。
在另一些实施例中,电子设备100可以通过有线方式和/或无线方式连接网络设备300,网络设备300可以通过有线方式和/或无线方式连接至少一个电子设备200。电子设备100可以通过网络设备300和电子设备200进行通信,例如,电子设备100为智能手机,电子设备200为汽车,网络设备300为提供HUAWEI HiCar功能的云服务器,电子设备100和电子设备200可以通过HUAWEI HiCar功能连接和投屏。
在另一些实施例中,电子设备100虽然未连接电子设备200,但可以和电子设备200建立连接再通信,可以理解为是电子设备200为电子设备100未连接但可通信的电子设备,可选地,电子设备100可以存储有至少一个电子设备200的连接信息(例如蓝牙地址和密码、Wi-Fi名称和密码等),通过连接信息连接至少一个电子设备200(例如向蓝牙地址对应的电子设备200发送包括密码的信息,以请求建立连接),可选地,电子设备200的连接信息可以是电子设备100之前连接电子设备200时获取的,可选地,电子设备200的连接信息可以是电子设备100通过网络设备300获取的,例如电子设备100登录某个账号后,可以获取到之 前登录过该账号的电子设备200的连接信息,本申请对电子设备100获取电子设备200的连接信息的方式不作限定。
图1A所示的电子设备和网络设备仅为示例,具体设备形态不作限定。
本申请中,电子设备100可以是手机、平板电脑、手持计算机、个人数字助理(Personal Digital Assistant,PDA)等移动终端,智能电视、智能摄像头、智能料理机等智能家居设备,智能手环、智能手表、智能眼镜等可穿戴设备,或其他桌面型、膝上型、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、上网本、智慧屏、学习机等设备。电子设备200的描述类似,不再赘述。本申请实施例对电子设备100和电子设备200的具体类型不作特殊限制。
本申请中,网络设备300可以包括至少一个服务器,在一些实施例中,任意一个服务器可以为硬件服务器,在一些实施例中,任意一个服务器可以为云服务器。
图1B示例性示出了本申请实施例提供的又一种通信系统10的架构示意图。
如图1B所示,通信系统10中的电子设备100可以包括界面解析模块、意图解析模块和意图触发模块,通信系统10中的电子设备200可以包括输出模块,其中:
当电子设备100检测到用户操作时,例如通过下图2A所示的传感器模块180检测用户操作时,可以向界面解析模块上报该用户操作对应的事件(可称为触发事件)。
电子设备100的界面解析模块接收到触发事件时,可以识别电子设备100显示的用户界面,并得到界面识别结果,在一些实施例中,界面解析模块可以通过关键词提取、自然语言理解(natural language understanding,NLU)等手段,对当前界面的图层结构、文本进行识别解析。界面识别结果例如包括文本信息,表征用户界面中的结构的结构信息。界面识别结果例如是xml格式的数据、json格式的数据或其他已有格式的数据,不限于此,还可以是自定义格式的数据等。界面解析模块可以向意图解析模块发送界面识别结果。
在一些实施例中,界面解析模块可以识别显示的用户界面中的部分页面,并得到界面识别结果。例如,电子设备100显示的用户界面为分屏界面,假设该分屏界面包括第一应用的页面和第二应用的页面,假设用户最近一次操作的应用为第一应用,界面解析模块可以识别第一应用的页面并得到对应的界面识别结果。不限于此,界面解析模块可以识别用户选择的应用的页面等,本申请对确定用户界面中需识别的信息的方式不作限定。
电子设备100的意图解析模块可以基于界面识别结果进行意图识别,并得到意图信息,其中,意图信息可以是对电子设备100显示的用户界面进行界面识别和意图识别得到的特定的数据,意图信息例如是xml格式的数据、json格式的数据或其他已有格式的数据,不限于此,还可以是自定义格式的数据等。在一些实施例中,从用户角度而言,意图信息指示的是需达成的目标,可选地,意图信息指示实现的业务对应电子设备100显示的用户界面中的部分业务信息。在一些实施例中,界面识别结果包括第一结构信息和第一文本信息,意图解析模块可以识别第一结构信息并确定第一结构信息指示的界面结构,然后基于第一文本信息和确定的界面结构得到意图信息。例如,意图解析模块识别出位置卡片的界面结构和文本框的界面结构,基于位置卡片的界面结构确定位置卡片包括的文本信息“北京站”的类型为地址信息,基于文本框的界面结构确定文本框包括的文本信息“这里见”的类型为聊天信息,基于地址信息“北京站”和聊天信息“这里见”得到指示导航至地理位置“北京站”的意图信息。意图解析模块可以向意图触发模块发送意图信息。
在一些实施例中,意图解析模块可以进一步判断意图信息是否有效,在确认意图信息有 效的情况下,意图解析模块才会向意图触发模块发送意图信息。例如,意图信息指示导航至地理位置“北京站”时,判断意图信息中的地址信息“北京站”是否对应地图中真实有效的地理位置,若判断结果为是意图解析模块才会向意图触发模块发送该意图信息。又例如,意图信息指示播放名称为“影片1”的影片时,判断意图信息中的视频信息“影片1”是否对应真实可被播放的视频,若判断结果为是意图解析模块才会向意图触发模块发送该意图信息。
电子设备100的意图触发模块可以基于意图信息执行意图操作,在一些实施例中,意图触发模块可以对意图信息进行解析得到特定的指令,并调用该指令来执行意图操作。在一些实施例中,从用户角度而言,意图信息指示的是需达成的目标,意图操作可以对应用户需达成该目标所需执行的用户操作,也就是说,用户执行多个用户操作才能控制电子设备100执行意图操作。在一些实施例中,意图触发模块可以调用相应的业务模块来执行意图操作,例如,意图信息指示导航至地理位置“北京站”时,意图触发模块可以调用地图应用的导航模块来执行意图操作:将目的地设置为地理位置“北京站”并进行导航。意图触发模块执行意图操作后可以将对应的多媒体数据(例如导航业务对应的音频流和视频流)发送给电子设备200的输出模块。
电子设备200的输出模块接收到电子设备100的意图触发模块发送的多媒体数据后,可以输出该多媒体数据,例如播放导航业务对应的音频流,显示导航业务对应的视频流。
在一些实施例中,电子设备100的界面解析模块可以包括界面解析模型,通过界面解析模型来识别显示的用户界面并得到界面识别结果。可选地,界面解析模块可以将电子设备100显示的用户界面的内容作为界面解析模型的输入,得到输出的界面识别结果,例如,将包括文本形式的地址信息的界面内容作为输入,得到输出的文本结构和/或该地址信息,或者将包括卡片形式的地址信息(例如上述位置卡片)的界面内容作为输入,得到输出的卡片结构和/或该地址信息。
在一些实施例中,电子设备100的意图解析模块可以包括意图解析模型,通过意图解析模块进行意图识别。可选地,意图解析模块可以将界面识别结果作为意图解析模型的输入,得到输出的意图信息。
不限于上述示例的情况,电子设备100的界面解析模块和意图解析模块可以设置于同一个融合模块中,融合模块可以包括融合模型,通过融合模型来基于显示的用户界面确定出意图信息。可选地,融合模块可以将显示的界面内容作为融合模型的输入,得到输出的意图信息。例如,将包括地址信息的界面内容作为融合模型的输入,得到输出的意图信息,该意图信息指示针对该地址信息表征的地点进行导航。
在一些实施例中,电子设备100可以自行训练界面解析模型和/或意图解析模型,或者电子设备100可以自行训练融合模型。在另一些实施例中,通信系统10中的网络设备300可以训练界面解析模块和/或意图解析模型并发送给电子设备100,或者网络设备300可以训练融合模型并发送给电子设备100。本申请对网络设备300向电子设备100发送界面解析模块和/或意图解析模型,或者融合模型的方式不作限定,例如,电子设备100可以在接收到用户操作后,向网络设备300发送请求消息,以请求获取上述模型,又例如,网络设备300可以每隔预设时长向电子设备100发送上述模型,如每周发送一次,又例如,模型的版本更新时网络设备300可以向电子设备100发送版本更新后的模型。
在一些实施例中,电子设备100或者网络设备300可以将用户界面的内容作为输入,将该用户界面包括的结构和文本作为输入,来训练界面解析模型,输入和输出的示例和上述通过界面解析模型来识别显示的用户界面的示例类似,不再赘述。
在一些实施例中,电子设备100或者网络设备300可以将界面识别结果作为输入,将对应的意图操作和/或意图信息作为输出,来训练意图解析模型。
在一些实施例中,电子设备100或者网络设备300可以将用户界面的内容作为输入,将对应的意图操作和/或意图信息作为输出,来训练融合模型。例如,将包括地址信息的用户界面的内容作为输入,将意图操作(即将该地址信息表征的地点设置为目的地并进行导航)作为输出,来训练融合模型。将不包括地址信息的用户界面的内容作为输入,将对应的用户操作(如电子设备100显示该用户界面时用户执行的操作)作为输出,来训练融合模型,不限于此,也可以将不包括地址信息的用户界面的内容作为输入,将指示无导航意图的信息作为输出,来训练融合模型。
不限于此图1B示例的情况,在另一些实施例中,界面解析模块、意图解析模块和意图触发模块中至少一个模块可以不是电子设备100包括的模块,而是电子设备200包括的模块,例如,意图触发模块是电子设备200包括的模块,具体示例可参见图1C,如图1C所示,电子设备200的意图触发模块接收到电子设备100的意图解析模块发送的意图信息后,可以基于意图信息执行意图操作,并且将执行意图操作对应的多媒体数据发送至输出模块,由输出模块输出该多媒体数据,其他描述和图1B类似,不再赘述。
下面介绍本申请实施例涉及的电子设备100、电子设备200和网络设备300。
图2A示例性示出了电子设备100的硬件结构示意图。
下面以电子设备100为例对实施例进行具体说明。应该理解的是,图2A所示的电子设备100仅是一个范例,并且电子设备100可以具有比图2A中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图2A中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
如图2A所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),MiniLED,MicroLED,Micro-OLED,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度等进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。
视频编解码器用于对数字视频压缩或解压缩。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断地自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。
耳机接口170D用于连接有线耳机。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B还可以用于拍摄防抖、导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。
环境光传感器180L用于感知环境光亮度。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。
在一些实施例中,电子设备100可以通过传感器模块180检测用户操作,处理器110可以响应于该用户操作,基于显示屏194显示的用户界面进行意图识别,基于识别出的意图信息通过移动通信模块150和/或无线通信模块160向电子设备200发送指示信息,电子设备200接收到指示信息后可以输出意图信息对应的多媒体数据,例如显示导航意图对应的导航界面。
例如,电子设备100通过压力传感器180A和/或触摸传感器180K检测用户作用于电子设备100的触摸操作,如指关节敲击显示屏194,指关节、双指或三指在显示屏194上滑动等。又例如,电子设备100通过陀螺仪传感器180B和/或加速度传感器180E检测用户的摇一摇操作、甩手操作。又例如,电子设备100通过摄像头193检测用户的手势操作,本申请对检测用户操作的模块不作限定。
图2B示例性示出了电子设备200的硬件结构示意图。
下面以电子设备200为例对实施例进行具体说明。应该理解的是,图2B所示的电子设备200仅是一个范例,并且电子设备200可以具有比图2B中所示的更多或更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。
如图2B所示,电子设备200可以包括处理器201、存储器202、无线通信模块203、天线204和显示屏205。可选地,电子设备200还可以包括有线通信模块(未示出)。其中:
处理器201可用于读取和执行计算机可读指令。具体实现中,处理器201可主要包括控制器、运算器和寄存器。其中,控制器主要负责指令译码,并为指令对应的操作发出控制信号,寄存器主要负责保存指令执行过程中临时存放的寄存器操作数和中间操作结果等。具体实现中,处理器201的硬件架构可以是专用集成电路(ASIC)架构、MIPS架构、ARM架构或者NP架构等等。在一些实施例中,处理器201还可用于生成无线通信模块203向外发送的信号,如蓝牙广播信号、信标信号。
存储器202与处理器201耦合,用于存储各种软件程序和/或多组指令。具体实现中,存储器202可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器202可以存储操作系统,例如uCOS,VxWorks、RTLinux等嵌入式操作系统。存储器202还可以存储通信程序,该通信程序可用于与电子设备100,或其他设备进行通信。
无线通信模块203可以包括WLAN通信模块203A、蓝牙通信模块203B中的一项或多项。可选地,蓝牙通信模块203B可以与其他通信模块(例如,WLAN通信模块203A)集成为一体。
在一些实施例中,WLAN通信模块203A、蓝牙通信模块203B中的一项或多项可以监听到其他设备发射的信号,如测量信号、扫描信号等等,并可以发送响应信号,如测量响应、扫描响应等,使得其他设备可以发现电子设备200,并通过蓝牙、WLAN中的一种或多种或其他近距离无线通信技术与其他设备建立无线通信连接,来进行数据传输。
在另一些实施例中,WLAN通信模块203A可以发射信号,如广播探测信号、信标信号,使得路由器可以发现电子设备200,并通过WLAN与路由器建立无线通信连接,来连接上电子设备100、网络设备300。
有线通信模块(未示出),可用于通过网线与路由器等设备建立连接,并通过路由器连接上电子设备100、网络设备300。
天线204可用于发射和接收电磁波信号。不同通信模块的天线可以复用,也可以相互独立,以提高天线的利用率。例如:可以将蓝牙通信模块203B的天线复用为WLAN通信模块203A的天线。
显示屏205可用于显示图像、视频等。显示屏205包括显示面板。显示面板可以采用液晶显示屏,有机发光二极管,有源矩阵有机发光二极体或主动矩阵有机发光二极体,柔性发光二极管,量子点发光二极管等。在一些实施例中,电子设备200可以包括1个或N个显示屏205,N为大于1的正整数。
在一些实施例中,电子设备200还可以包括传感器,具体示例可参见上图2A所示的传感器模块180,不再赘述。
在一些实施例中,电子设备200可以通过无线通信模块203和/或有线通信模块(未示出)接收电子设备100发送的指示信息,处理器201可以基于该指示信息通过显示屏205显示意图信息对应的用户界面,例如显示导航意图对应的导航界面。
图2C示例性示出了网络设备300的硬件结构示意图。
如图2C所示,网络设备300可以包括一个或多个处理器301、通信接口302、存储器303,其中处理器301、通信接口302、存储器303可通过总线或者其它方式连接,本申请实施例以通过总线304连接为例。其中:
处理器301可以由一个或者多个通用处理器构成,例如CPU。处理器301可用于运行设备控制方法相关的程序代码。
通信接口302可以为有线接口(例如以太网接口)或无线接口(例如蜂窝网络接口或无线局域网接口),用于与其他节点进行通信。本申请实施例中,通信接口302具体可用于与电子设备100、电子设备200进行通信。
存储器303可以包括易失性存储器(volatile memory),例如RAM;存储器也可以包括非易失性存储器(non-volatile memory),例如ROM、快闪存储器(flash memory)、HDD或固态硬盘SSD。存储器303还可以包括上述种类的存储器的组合。存储器303可用于存储一组程序代码,以便于处理器301调用存储器303中存储的程序代码,实现本申请实施例中由服务器执行的方法。在本申请实施例中,存储器303还可以是存储阵列,等等。
在一些实施例中,网络设备300可以包括多个服务器,例如,网页服务器、后台服务器、下载服务器等,这多个服务器的硬件结构都可参照图2C所示的网络设备300的硬件结构。
需要说明的,图2C所示的网络设备300仅仅是本申请实施例的一种实现方式,实际应用中,网络设备300还可以包括更多或更少的部件,这里不作限制。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。例如,分层架构的软件系统可以是安卓(Android)系统,也可以是华为移动服务(huawei mobile services,HMS)系统,或其它软件系统。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图2D示例性示出一种电子设备100的软件架构示意图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图2D所示,应用程序包可以包括相机、地图、HiCar、音乐、聊天应用、娱乐应用、家居应用、学习应用等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2D所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器,意图流转服务等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。
意图流转服务可以基于应用程序层的应用程序进行意图识别,在一些实施例中,意图流转服务可以基于电子设备100显示的应用程序的用户界面进行意图识别。电子设备100可以通过电子设备200实现识别出的意图,在一种情况下,可以将电子设备100上用于实现意图的业务流转到电子设备200上,在另一种情况下,电子设备100可以将识别出的意图发送到电子设备200上,由电子设备200实现。
在一些实施例中,意图流转服务可以为应用程序层的系统应用提供服务,以实现对应用程序层的第三方应用进行意图识别。例如,系统应用为HiCar应用,第三方应用为地图应用、聊天应用、娱乐应用、家居应用、学习应用等。
不限于此,在另一些实施例中,意图流转服务可以为应用程序层的应用内置的服务,例如该应用对应的服务器(简称应用服务器)可以为该应用提供意图流转服务。电子设备100接收到用户操作时可以向上述应用服务器发送当前显示的用户界面的内容,应用服务器基于界面内容进行意图识别,并将识别出的意图信息发送给电子设备100,电子设备100通过电子设备200实现该意图信息。
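作为示意,下面的Python草图展示电子设备100向应用服务器请求意图识别的一种可能交互方式(URL与消息字段均为本示例假设,并非本申请限定的协议):

```python
import json
import urllib.request

def request_intent_from_app_server(ui_content: str) -> dict:
    """将当前显示的用户界面的内容上传至应用服务器,返回识别出的意图信息。"""
    request = urllib.request.Request(
        "https://app-server.example.com/intent",            # 假设的应用服务器地址
        data=json.dumps({"ui_content": ui_content}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))  # 意图信息
```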
在一些实施例中,意图流转服务可以对应图1B所示的意图解析模块,可选地还可以对应界面解析模块,可选地还可以对应意图触发模块,具体可参见图1B的描述,不再赘述。
在一些实施例中,应用程序层的应用程序可以对应图1B所示的意图触发模块。在一些实施例中,应用程序层的应用程序可以对应图1B所示的显示模块。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。在一些实施例中,传感器驱动可以对应图1B所示的检测模块。
下面结合导航场景,示例性说明电子设备100软件以及硬件的工作流程。
假设显示屏194显示聊天应用的用户界面,且该用户界面用于显示地点1的地址信息。当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为导航控件为例,聊天应用调用应用程序框架层的接口,启动地图应用,进而通过调用内核层启动显示驱动,通过显示屏194显示导航界面,该导航界面中的目的地为上述地点1。
电子设备200的软件架构和电子设备100的软件架构类似,具体示例可参见图2D。
下面结合应用场景介绍本申请实施例中涉及的显示方法。
场景1:电子设备100为智能手机,电子设备200为车载电脑。电子设备100显示包括地址信息的用户界面时,若接收到用户操作(如摇一摇操作),可以执行意图操作:将该地址信息表征的地点设置为目的地并进行导航,并将执行意图操作对应的音频流和视频流(简称音视频流)发送给电子设备200输出。这样,用户无需手动在电子设备200上的地图应用中输入该地址信息并操作导航控件,即无需用户手动触发执行意图操作,交互更加高效便捷。
图3A-图3C示例性示出了一种应用场景(如上述场景1)下的用户界面实施例。
如图3A的上图所示,电子设备100可以显示聊天应用的用户界面310,用户界面310可以包括会话名称311和聊天窗口312。假设当前会话为双人对话,则会话名称311可以包括聊天对象的名称“小王”,不限于此,若当前会话为多人会话时,会话名称311可以包括当前会话的名称,例如群名。聊天窗口312可以用于显示当前会话的聊天记录,例如聊天对象发送的消息3121和消息3122,消息3121包括文本“这里见”,消息3122包括地点名称3122A(包括文本“北京站”),以及“北京站”的位置信息3122B(包括文本“北京市东城区毛家湾胡同甲13号”),消息3122为表征地理位置“北京站”的位置卡片。
如图3A的上图所示,电子设备100可以和电子设备200连接,例如通过HUAWEI HiCar功能连接。电子设备200可以显示桌面320,桌面320可以包括一个或多个应用图标,例如,地图应用图标、通话应用图标、音乐应用图标、收音机应用图标、行车记录仪应用图标和设置应用图标等。桌面320还可以包括主菜单控件,主菜单控件可以用于返回到桌面320。
如图3A的上图所示,电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面310,在一些实施例中,电子设备100识别消息3122得到地理位置“北京站”的位置信息,并基于此确定意图信息:针对地理位置“北京站”进行导航,可选地,电子设备100还可以结合消息3121确定用户想要前往地理位置“北京站”,并基于此确定上述意图信息。可选地,可以理解为是该意图信息对应导航业务,也可以理解为是该意图信息对应消息3122(位置卡片)。电子设备100可以基于得到的意图信息执行该意图信息对应的意图操作:将目的地设置为地理位置“北京站”的位置信息并进行导航,然后将执行该意图操作对应的音视频流发送给电子设备200输出,具体可参见图3A的下图。可选地,可以理解为是该意图操作用于实现导航业务,也可以理解为是该意图操作对应消息3122(位置卡片)。
如图3A的下图所示,电子设备200可以显示地图应用的用户界面330,用户界面330用于显示导航业务的相关信息。用户界面330可以包括地图窗口331、路线窗口332和提示框333。其中:
地图窗口331用于在地图上展示被选择的导航线路的示意图。
路线窗口332包括导航信息332A、线路332B、线路332C和导航控件332D。导航信息332A包括文本"去北京市东城区毛家…",用于表征导航的目的地的位置信息,导航信息332A仅示出目的地的部分位置信息,电子设备200可以响应于作用于导航信息332A的触摸操作(例如点击操作),显示目的地的全部位置信息。线路332B和线路332C可以表征两条导航线路,相比线路332C,线路332B被突出显示(例如线路332B的文本加粗、高亮,但线路332C的文本未加粗、未高亮等),表征当前选择的导航线路为线路332B指示的导航线路,此时地图窗口331用于在地图上展示线路332B指示的导航线路的示意图。电子设备200可以响应于作用于线路332C的触摸操作(例如点击操作),取消突出显示线路332B,突出显示线路332C,此时选择的导航线路为线路332C指示的导航线路,地图窗口331会在地图上展示线路332C指示的导航线路的示意图。导航控件332D可以用于开启导航功能,电子设备200可以响应于作用于导航控件332D的触摸操作(例如点击操作),基于当前选择的线路(用户界面330中是线路332B指示的导航线路)进行导航。
提示框333用于提示当前进行的导航业务的信息。提示框333包括文本“正在为您导航到与小王的聊天中的北京市东城区毛家湾胡同甲13号”,可以指示:该导航业务的目的地的详细位置信息,该导航业务是在聊天应用中和聊天对象“小王”的聊天会话触发的,该目的地的详细位置信息是从该聊天会话获取到的。
图3A所示示例中,电子设备100显示的用户界面310包括的地址信息(即消息3122)为卡片形式的,不限于此,在另一些示例中,还可以是文本形式的地址信息,具体示例可参见下图3B,本申请对此不作限定。
如图3B所示,电子设备100可以显示聊天应用的用户界面340,用户界面340和图3A的上图所示的用户界面310类似,区别在于当前会话的聊天记录不同,用户界面340可以包括消息341和消息342,消息341包括文本“在哪里见”,消息342包括文本“到北京站见”。电子设备100可以和电子设备200连接,电子设备200可以显示图3A的上图所示的桌面320。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面340,在一些实施例中,电子设备100识别消息342得到用户想要前往地理位置“北京站”,并基于此确定意图信息:针对地理位置“北京站”进行导航。可选地,可以理解为是该意图信息对应消息342。电子设备100可以基于得到的意图信息执行该意图信息对应的意图操作:将目的地设置为地理位置“北京站”的位置信息并进行导航,然后将执行该意图操作对应的音视频流发送给电子设备200输出,具体可参见图3A的下图,可选地,可以理解为是该意图操作对应消息342。
以上示例中,电子设备100显示的地址信息为聊天消息形式的(即用户界面310中的消息3122,用户界面340中的消息342),不限于此,在另一些示例中,还可以显示在地点介绍中,具体示例可参见下图3C,本申请对此不作限定。
如图3C所示,电子设备100可以显示娱乐应用的用户界面350。用户界面350包括地点名称351和位置控件352。地点名称351包括文本“首都博物馆”,为用户界面350显示的地点的名称,位置控件352包括文本“西城区复兴门外大街16号”,为用户界面350显示的地点的位置信息,可以表征地点“首都博物馆”的地址信息。电子设备100可以和电子设备200连接,电子设备200可以显示图3A的上图所示的桌面320。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面350得到名称为“首都博物馆”的地点的位置信息,并基于此确定意图信息:针对地点“首都博物馆”进行导航。可选地,可以理解为是该意图信息对应位置控件352。电子设备100可以基于得到的意图信息向电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作:将目的地设置为地点“首都博物馆”的位置信息并进行导航,具体示例和图3A的下图类似,区别在于目的地不同,因此导航线路也不同,可选地,可以理解为是该意图操作对应位置控件352。
图3A-图3B所示实施例中,电子设备100自行执行意图信息对应的意图操作,然后将执行该意图操作对应的音视频流发送给电子设备200输出,可以理解为是将电子设备100的内容投屏到电子设备200上输出,实际触发的是电子设备100上的业务。图3C所示实施例中,电子设备100指示电子设备200执行意图信息对应的意图操作,实际触发的是电子设备200上的业务。不限于此,在具体实现中,图3A-图3B所示实施例触发的也可以是电子设备200上的业务,图3C所示实施例触发的也可以是电子设备100上的业务。以下实施例以触发电子设备200上的业务为例进行说明,但具体实现中不作限定。
在一种可能的实现方式中,电子设备100确定的意图信息对应的业务类型,和电子设备100连接的电子设备200的设备类型有关。
场景2:电子设备100为智能手机,电子设备100显示包括地址信息和视频信息的用户界面时,接收到用户操作(如摇一摇操作)。若电子设备100连接的电子设备200为车载电脑,则电子设备100响应于该用户操作,向电子设备200发送指示信息,电子设备200可以基于该指示信息执行导航业务对应的意图操作:将上述地址信息表征的地点设置为目的地并进行导航,具体示例可参见图4A。若电子设备100连接的电子设备200为智能电视,则电子设备100响应于该用户操作,向电子设备200发送指示信息,电子设备200可以基于该指示信息执行视频业务对应的意图操作:播放上述视频信息表征的视频,具体示例可参见图4B。这样更加符合实际应用场景下用户的需求,进一步提升交互的精准性。
图4A-图4B示例性示出又一种应用场景(如上述场景2)下的用户界面实施例。
如图4A的上图所示,电子设备100可以显示聊天应用的用户界面410,用户界面410和图3A的上图所示的用户界面310类似,区别在于当前会话的聊天记录不同,用户界面410可以包括消息411、消息412、消息413和消息414。其中,消息411和消息412分别为图3A的上图所示的用户界面310中的消息3121和消息3122,不再赘述。消息413包括文本“看这个”,消息414为以卡片形式展示视频的消息,消息414包括文本“我的一天”,为展示的视频的名称。电子设备100可以和电子设备200(车载电脑)连接,电子设备200(车载电脑)可以显示图3A的上图所示的桌面320。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面410,在一些实施例中,电子设备100识别用户界面410得到消息412对应导航业务和消息414对应视频业务,电子设备100可以根据连接设备的设备类型(车载电脑)确定对应的导航业务,例如预设有车载电脑和导航业务的对应关系,因此电子设备100识别消息412并确定出导航业务对应的意图信息:针对地理位置“北京站”进行导航。电子设备100可以基于得到的意图信息向电子设备200(车载电脑)发送指示信息,电子设备200(车载电脑)可以基于指示信息执行该意图信息对应的意图操作:将目的地设置为地理位置“北京站”的位置信息并进行导航,具体可参见图4A的下图,图4A的下图中电子设备200显示的用户界面和图3A的下图中电子设备200显示的用户界面一致。
如图4B的上图所示,电子设备100可以显示图4A的上图所示的用户界面410。电子设备100可以和电子设备200(智能电视)连接,电子设备200(智能电视)可以显示桌面420,桌面420可以包括一个或多个分类,例如电视剧分类、电影分类、动画分类、少儿分类和游戏分类等。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面410,在一些实施例中,电子设备100识别用户界面410得到消息412对应导航业务和消息414对应视频业务,电子设备100可以根据连接设备的设备类型(智能电视)确定对应的视频业务,例如预设有智能电视和视频业务的对应关系,因此电子设备100识别消息414并确定出视频业务对应的意图信息:播放名称为"我的一天"的视频。电子设备100可以基于得到的意图信息向电子设备200(智能电视)发送指示信息,电子设备200(智能电视)可以基于指示信息执行该意图信息对应的意图操作:播放名称为"我的一天"的视频,具体可参见图4B的下图。如图4B的下图所示,电子设备200可以显示用户界面430,用户界面430包括标题431,标题431包括文本"我的一天",为当前播放视频的名称。
不限于上述示例的情况,在另一些示例场景中,可以由用户自行选择待识别的业务信息,根据用户选择确定意图信息对应的业务类型,具体示例可参见下图4C。
如图4C的左图所示,电子设备100可以显示图4A的上图所示的用户界面410。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,显示图4C的右图所示的用户界面440。
如图4C的右图所示,用户界面440可以包括提示信息441、提示框442和提示框443,提示信息441包括文本“请选择需流转的业务”,用于提示用户选择待识别的业务信息。
提示框442包括业务名称442A和业务信息442B,业务名称442A包括文本"地图导航",业务信息442B为图4C的左图所示的用户界面410中的消息412。电子设备100可以响应于作用于提示框442的触摸操作(如点击操作),确定待识别的业务信息为用户界面410中的消息412,并识别消息412以得到导航业务对应的意图信息:针对地理位置"北京站"进行导航。电子设备100可以基于得到的意图信息向连接的电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作,电子设备200接收到指示信息前后的界面示例分别可参见图4A的上图所示的桌面320、图4A的下图所示的用户界面330。
提示框443包括业务名称443A和业务信息443B,业务名称443A包括文本"播放视频",业务信息443B为图4C的左图所示的用户界面410中的消息414。电子设备100可以响应于作用于提示框443的触摸操作(如点击操作),确定待识别的业务信息为用户界面410中的消息414,并识别消息414以得到视频业务对应的意图信息:播放名称为"我的一天"的视频。电子设备100可以基于得到的意图信息向连接的电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作,电子设备200接收到指示信息前后的界面示例可参见图4B的上图所示的桌面420、图4B的下图所示的用户界面430。
场景3:电子设备100为智能手机,电子设备200为智能电视。电子设备100显示包括视频信息的用户界面时,若接收到用户操作(如摇一摇操作),可以向电子设备200发送指示信息。电子设备200可以基于该指示信息执行意图操作:播放该视频信息表征的视频。这样无需用户手动触发执行意图操作,交互更加高效便捷。
图5示例性示出又一种应用场景(如上述场景3)下的用户界面实施例。
如图5的上图所示,电子设备100可以显示娱乐应用的用户界面510,用户界面510包括名称521,名称521包括文本“影片1”,为用户界面510显示的影片的名称,用户界面510用于显示“影片1”的详细信息,例如相关视频、剧照和影评等。电子设备100可以和电子设备200连接,电子设备200可以显示图4B的上图所示的桌面420,桌面420还包括查找控件421,查找控件421用于实现查找功能,用户可以基于查找功能输入想要查看的视频并进行播放。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面510得到名称为“影片1”的影片的信息,并基于此确定意图信息:播放名称为“影片1”的影片。电子设备100可以基于得到的意图信息向电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作:播放名称为“影片1”的影片,具体可参见图5的下图。
如图5的下图所示,电子设备200可以显示用户界面520,用户界面520包括标题521,标题521包括文本"影片1",为当前播放视频的名称。在一种情况下,电子设备100可以通过视频应用获取到"影片1"的视频流,并将视频流持续发送给电子设备200进行播放,可以理解为是将电子设备100上的视频投屏到电子设备200上播放,这样,用户无需打开电子设备100(智能手机)上的视频应用和该视频的播放界面,操作投屏控件以及选择要投屏的设备(智能电视)。在另一种情况下,电子设备200接收到指示信息后,查找该视频并进行播放,可以理解为是播放电子设备200上的视频,这样,用户无需在电子设备200(智能电视)上查找该视频(例如通过图5的上图所示的桌面420中的查找控件421实现)。因此简化了用户操作,交互效率大大提升。
图5所示示例中,电子设备100显示的视频信息显示在影片介绍中,不限于此,在另一些示例中,视频信息还可以为聊天消息形式的,例如图4B所示的用户界面410中的消息414,具体场景示例可参见图4B,本申请对此不作限定。
场景4:电子设备100为智能手机,电子设备200为智能料理机。电子设备100显示包括菜谱信息的用户界面时,若接收到用户操作(如摇一摇操作),可以向电子设备200发送指示信息。电子设备200可以基于该指示信息执行意图操作:按照该菜谱信息进行工作。这样,用户无需在智能料理机上搜索该菜谱以进行烹饪,即无需用户手动触发执行意图操作,交互更加高效便捷。
图6示例性示出又一种应用场景(如上述场景4)下的用户界面实施例。
如图6的上图所示,电子设备100可以显示家居应用的用户界面610,用户界面610包括标题611,标题611包括文本“脆皮五花肉”,为用户界面610显示的菜谱的名称。用户界面610用于显示名称为“脆皮五花肉”的菜谱的详细信息,例如食材信息612和制作步骤613。电子设备100可以和电子设备200连接,电子设备200可以显示首页620,首页620可以包括一个或多个分类,例如每日菜谱分类、中式分类和西式分类等。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面610得到名称为“脆皮五花肉”的菜谱的信息,并基于此确定意图信息:烹饪该菜谱对应的菜肴。电子设备100可以基于得到的意图信息向电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作:按照该菜谱进行工作,具体可参见图6的下图。
如图6的下图所示,电子设备200可以显示用户界面630,用户界面630包括标题631和步骤信息632。标题631包括文本“脆皮五花肉”,为当前正在使用的菜谱的名称。步骤信息632为当前正在使用的菜谱的制作步骤,对应图6的上图所示的用户界面610中的制作步骤613。用户界面630可以表征电子设备200当前正在按照名称为“脆皮五花肉”的菜谱进行工作。
不限于上述示例的情况,在另一些示例中,电子设备100识别当前显示的用户界面610时可以仅识别菜谱上的菜名“脆皮五花肉”,并基于此确定意图信息:烹饪菜名为“脆皮五花肉”的菜肴。电子设备200接收到指示信息后,可以执行该意图信息对应的意图操作:搜索该菜名得到对应的菜谱,按照搜索得到的菜谱进行工作。
场景5:电子设备100为智能手机,用于家长使用,电子设备200为平板电脑,用于孩子进行学习。电子设备100显示包括学习信息的用户界面时,若接收到用户操作(如摇一摇操作),可以向电子设备200发送指示信息。电子设备200可以基于该指示信息执行意图操作:显示全部或部分的学习信息。这样,家长无需在平板电脑上搜索该学习信息用于孩子学习使用,即无需用户手动触发执行意图操作,交互更加高效便捷。
图7A示例性示出又一种应用场景(如上述场景5)下的用户界面实施例。
如图7A的上图所示,电子设备100可以显示学习应用的用户界面710,用户界面710包括标题711,标题711包括文本"英语试卷",表征用户界面710用于显示名称为"英语试卷"的试卷的详细信息。用户界面710还包括多个习题的详细信息,例如习题712和习题713,习题712包括题目712A和答案712B,习题713包括题目713A和答案713B。用户界面710还包括考试控件714,考试控件714用于提供针对当前试卷的模拟考试的功能。电子设备100可以和电子设备200连接,电子设备200可以显示桌面720,桌面720可以包括一个或多个应用图标,例如时钟应用图标、日历应用图标、图库应用图标和设置应用图标等。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,识别当前显示的用户界面710得到名称为"英语试卷"的试卷的信息,并基于此确定意图信息:针对该试卷进行模拟考试。电子设备100可以基于得到的意图信息向电子设备200发送指示信息,电子设备200可以基于指示信息执行该意图信息对应的意图操作:开启该试卷的模拟考试的功能,具体可参见图7A的下图。
如图7A的下图所示,电子设备200可以显示用户界面730,用户界面730包括标题731、提交控件732、题目信息733和切换控件734。标题731包括文本“英语试卷”,为当前正在进行的模拟考试的试卷的名称。提交控件732用于结束当前进行的模拟考试,并展示模拟考试的结果。题目信息733展示当前正在查看的题目的信息,切换控件734用于切换当前正在查看的题目的信息。用户界面730可以表征当前已开启名称为“英语试卷”的试卷的模拟考试的功能。
在一些实施例中,电子设备200可以响应于作用于提交控件732的触摸操作(如点击操作),显示模拟考试的结果,并将模拟考试的结果发送至电子设备100,让家长可以高效便捷地掌握孩子的学习情况。
上图7A所示示例中,电子设备100显示的用户界面710包括题目和答案,但电子设备200接收到指示信息后显示的用户界面730仅包括题目,不包括答案,家长不仅无需到电子设备200上查找对应的习题,而且无需手动开启模拟考试的功能,进一步减少用户操作,提高交互的高效性。
不限于上图7A所示的示例,在另一些示例中,电子设备100可以响应于摇一摇电子设备100的操作,识别当前显示的用户界面710中的习题712和/或习题713,并基于此确定意图信息:练习习题712和/或习题713。此时电子设备200接收到电子设备100基于该意图信息发送的指示信息后,可以执行对应的意图操作:显示习题712中的题目712A和/或习题713中的题目713A,以用于孩子进行练习,具体示例和图7A的下图所示的用户界面730类似。
不限于上述示例的情况,在另一些示例场景中,可以由用户自行选择待识别的业务信息,根据用户选择确定意图信息对应的业务内容,具体示例可参见下图7B。
如图7B的左图所示,电子设备100可以显示图7A的上图所示的用户界面710。电子设备100可以接收用户操作(例如摇一摇电子设备100),响应于该用户操作,显示图7B的右图所示的用户界面740。
如图7B的右图所示,用户界面740可以包括提示信息741、提示框742和提示框743,提示信息741包括文本“请选择需流转的内容”,用于提示用户选择待识别的业务信息。
提示框742为图7B的左图所示的用户界面710中的习题712的题目712A。电子设备100可以响应于作用于提示框742的触摸操作(如点击操作),确定待识别的业务信息为用户界面710中的习题712,并识别习题712以得到意图信息:练习习题712。此时电子设备200接收到电子设备100基于该意图信息发送的指示信息后,可以执行对应的意图操作:显示习题712中的题目712A,具体示例和图7A的下图所示的用户界面730类似。
提示框743为图7B的左图所示的用户界面710中的习题713的题目713A。电子设备100可以响应于作用于提示框743的触摸操作(如点击操作),确定待识别的业务信息为用户界面710中的习题713,并识别习题713以得到意图信息:练习习题713。此时电子设备200接收到电子设备100基于该意图信息发送的指示信息后,可以执行对应的意图操作:显示习题713中的题目713A,具体示例和图7A的下图所示的用户界面730类似。
不限于此,在上述场景5中电子设备200也可以是学习机等设备。
以上示例的触发意图流转的用户操作(简称触发操作)为摇一摇操作,在另一些示例中,触发操作也可以为指关节滑动的操作,具体示例可参见图8的(A),在另一些示例中,触发操作也可以为双指滑动的操作,具体示例可参见图8的(B),在另一些示例中,触发操作也可以为手势操作,具体示例可参见图8的(C),不限于此,还可以是指关节敲击、甩手等,本申请对触发操作的具体类型不作限定。
下面介绍本申请实施例中提供的一种显示方法。
图9示例性的示出了本申请实施例中提供的一种显示方法的流程示意图。
该显示方法可应用于上述通信系统10,上述通信系统10可以包括电子设备100、电子设备200和网络设备300。
如图9所示,该显示方法可以包括但不限于如下步骤:
S101:电子设备100和电子设备200建立连接。
在一些实施例中,电子设备100和电子设备200可以通过有线和/或无线方式直接连接,例如通过蓝牙、Wi-Fi连接,在另一些实施例中,电子设备100和电子设备200可以通过网络设备300连接,具体可参见图1A中电子设备100和电子设备200连接的说明。
S102:电子设备100显示包括第一业务信息的第一界面。
在一些实施例中,第一业务信息对应第一业务,不同的业务信息对应的业务不同,具体示例如下所示。
例如,第一业务信息为对应导航业务的地址信息。图3A所示的用户界面310中的消息3122、图3B所示的用户界面340中的消息342或图3C所示的用户界面350中的位置控件352为第一业务信息。
例如,第一业务信息为对应视频业务(如播放视频)的视频信息。图4A所示的用户界面410中的消息414或图5所示的用户界面510包括的信息(如名称521)为第一业务信息。
例如,第一业务信息为对应烹饪业务(如按照菜谱进行烹饪)的菜谱信息。图6所示的用户界面610包括的信息(如标题611)为第一业务信息。
例如,第一业务信息为对应学习业务(如练习题目)的学习信息。图7A所示的用户界面710包括的信息(如习题712、习题713)为第一业务信息。
S103:电子设备100接收第一用户操作。
其中,第一用户操作的形式可以但不限于包括作用于显示屏的触摸操作、语音、运动姿态(如手势)、脑电波等。例如,第一用户操作为摇一摇电子设备100的操作。又例如,第一用户操作为图8的(A)所示的指关节滑动的操作。又例如,第一用户操作为图8的(B)所示的双指滑动的操作。又例如,第一用户操作为图8的(C)所示的手势操作。本申请对第一用户操作的具体类型不作限定。
在一些实施例中,电子设备100可以通过图1B所示的检测模块检测第一用户操作。
在一些实施例中,电子设备100可以通过图2A所示的传感器模块180检测第一用户操作,具体示例可参见图2A中电子设备100可以通过传感器模块180检测用户操作的说明。
在一些实施例中,电子设备100可以自行训练融合模型,融合模型用于识别用户意图,例如用于执行S107。
在另一些实施例中,由网络设备300训练融合模型。其中,训练融合模型的说明可参见图1B中训练融合模型、训练界面解析模型和/或意图解析模型的说明,不再赘述。
在一种情况下,电子设备100在S103之前已接收到网络设备300发送的融合模型,则S103之后,该显示方法还可以包括但不限于如下三个步骤:
S104:电子设备100向网络设备300发送第一请求消息。
在一些实施例中,第一请求消息用于请求获取融合模型的配置信息。
S105:网络设备300向电子设备100发送第一配置消息。
在一些实施例中,第一配置消息包括融合模型的配置信息。
S106:电子设备100基于第一配置消息更新融合模型。
在另一种情况下,电子设备100在S103之前未接收到网络设备300发送的融合模型,电子设备100可以向网络设备请求获取融合模型,具体过程和上述步骤S104-S106类似,不再赘述。
S107:电子设备100基于融合模型识别第一界面,并确定第一业务信息对应的意图信息。
在一些实施例中,电子设备100可以将第一界面的界面内容作为融合模型的输入,以得到输出:意图信息。接下来示例性示出一些意图信息:
例如,第一界面为图3A所示的用户界面310或图3B所示的用户界面340,其中用户界面310中的消息3122或用户界面340中的消息342为第一业务信息,第一业务信息是表征名称为“北京站”的地理位置的地址信息。第一业务信息对应的意图信息为:针对地理位置“北京站”进行导航。
例如,第一界面为图3C所示的用户界面350,其中用户界面350中的位置控件352为第一业务信息,第一业务信息是表征名称为“首都博物馆”的地点的地址信息。第一业务信息对应的意图信息为:针对地点“首都博物馆”进行导航。
例如,第一界面为图4A所示的用户界面410,其中用户界面410中的消息414为第一业务信息,第一业务信息可以表征名称为“我的一天”的视频。第一业务信息对应的意图信息为:播放名称为“我的一天”的视频。
例如,第一界面为图5所示的用户界面510,其中用户界面510包括的信息(如名称521)为第一业务信息,第一业务信息可以表征名称为“影片1”的影片。第一业务信息对应的意图信息为:播放名称为“影片1”的影片。
例如,第一界面为图6所示的用户界面610,其中用户界面610包括的信息(如标题611)为第一业务信息,第一业务信息可以表征名称为"脆皮五花肉"的菜谱。第一业务信息对应的意图信息为:烹饪该菜谱对应的菜肴。
例如,第一界面为图7A所示的用户界面710,其中用户界面710包括的信息(如习题712、习题713)为第一业务信息,第一业务信息可以表征一个或多个习题(名称为“英语试卷”的试卷包括的至少一个习题,如习题712、习题713)。第一业务信息对应的意图信息为:练习这一个或多个习题。
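结合S107的说明与上述示例,下面给出一段示意性的Python草图,展示"界面内容作为融合模型输入、意图信息作为输出"的对应关系(FusionModel为本示例假设的占位实现,实际可为端侧或云侧训练得到的模型):

```python
class FusionModel:
    """占位的融合模型:按关键词粗略映射界面内容到意图信息,仅用于示意。"""

    def predict(self, ui_content: str) -> dict:
        if "北京站" in ui_content:
            return {"intent": "navigation", "slots": {"destination": "北京站"}}
        if "影片1" in ui_content:
            return {"intent": "play_video", "slots": {"video_name": "影片1"}}
        if "脆皮五花肉" in ui_content:
            return {"intent": "cook", "slots": {"recipe": "脆皮五花肉"}}
        return {"intent": "none"}    # 第一界面不包括第一业务信息时

model = FusionModel()
print(model.predict("消息:到北京站见"))    # -> 导航对应的意图信息
print(model.predict("消息:在吗"))          # -> 无可流转的业务
```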
S108:电子设备100基于意图信息向电子设备200发送指示信息。
在一些实施例中,电子设备100可以基于意图信息执行意图操作,并将执行意图操作对应的多媒体数据发送给电子设备200,该指示信息可以指示电子设备200输出该多媒体数据。例如,图1B中,电子设备100的意图解析模块向意图触发模块发送意图信息,意图触发模块基于意图信息执行意图操作,并将执行意图操作对应的音视频流发送给电子设备200的输出模块输出。
在另一些实施例中,电子设备100向电子设备200发送的指示信息包括意图信息,该指示信息可以指示电子设备200实现该意图信息,例如,图1C中,电子设备100的意图解析模块向电子设备200的意图触发模块发送意图信息。
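作为示意,下面的Python草图给出S108中上述两种指示信息的可能消息结构(字段名与流标识均为本示例假设):

```python
# 方式一:电子设备100自行执行意图操作,指示信息指示电子设备200输出多媒体数据
indication_output_media = {
    "type": "output_media",
    "media": {"audio": "stream://audio/0", "video": "stream://video/0"},  # 假设的流标识
}

# 方式二:指示信息包括意图信息,指示电子设备200实现该意图信息
indication_execute_intent = {
    "type": "execute_intent",
    "intent_info": {"intent": "navigation", "slots": {"destination": "北京站"}},
}

for message in (indication_output_media, indication_execute_intent):
    print(message["type"], "->", message)
```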
S109:电子设备200输出多媒体数据。
在一些实施例中,电子设备200接收到电子设备100发送的多媒体数据和指示信息时,可以按照指示信息输出多媒体数据,例如图1B所示实施例。
在另一些实施例中,电子设备200接收到电子设备100发送的指示信息,该指示信息包括意图信息,可以基于该意图信息执行意图操作,并输出执行意图操作对应的多媒体数据,例如图1C所示实施例。在一些实施例中,上述意图操作对应第一界面中的第一业务信息。在一些实施例中,电子设备200输出执行意图操作对应的多媒体数据,也可称为输出第一业务信息对应的多媒体数据。
接下来示例性示出一些意图操作:
例如,第一界面为图3A所示的用户界面310或图3B所示的用户界面340,其中用户界面310中的消息3122或用户界面340中的消息342为第一业务信息,第一业务信息是表征名称为“北京站”的地理位置的地址信息。第一业务信息对应的意图操作为:将目的地设置为地理位置“北京站”的位置信息并进行导航,电子设备200输出该意图操作对应的多媒体数据可参见图3A的下图所示的用户界面330,具体场景说明可参见图3A或图3B的说明。
例如,第一界面为图3C所示的用户界面350,其中用户界面350中的位置控件352为第一业务信息,第一业务信息是表征名称为“首都博物馆”的地点的地址信息。第一业务信息对应的意图操作为:将目的地设置为地点“首都博物馆”的位置信息并进行导航,电子设备200输出该意图操作对应的多媒体数据和图3A的下图所示的用户界面330类似,区别在于导航的目的地不同,具体场景说明可参见图3C的说明。
例如,第一界面为图4A所示的用户界面410,其中用户界面410中的消息414为第一业务信息,第一业务信息可以表征名称为“我的一天”的视频。第一业务信息对应的意图操作为:播放名称为“我的一天”的视频,电子设备200输出该意图操作对应的多媒体数据可参见图4B的下图所示的用户界面430,具体场景说明可参见图4B的说明。
例如,第一界面为图5所示的用户界面510,其中用户界面510包括的信息(如名称521)为第一业务信息,第一业务信息可以表征名称为“影片1”的影片。第一业务信息对应的意图操作为:播放名称为“影片1”的影片,电子设备200输出该意图操作对应的多媒体数据可参见图5的下图所示的用户界面520,具体场景说明可参见图5的说明。
例如,第一界面为图6所示的用户界面610,其中用户界面610包括的信息(如标题611)为第一业务信息,第一业务信息可以表征名称为"脆皮五花肉"的菜谱。第一业务信息对应的意图操作为:按照该菜谱进行工作,电子设备200输出该意图操作对应的多媒体数据可参见图6的下图所示的用户界面630,具体场景说明可参见图6的说明。
例如,第一界面为图7A所示的用户界面710,其中用户界面710包括的信息(如习题712、习题713)为第一业务信息,第一业务信息可以表征一个或多个习题(名称为"英语试卷"的试卷包括的至少一个习题,如习题712、习题713)。第一业务信息对应的意图操作为:显示这一个或多个习题中的题目(不显示答案),电子设备200输出该意图操作对应的多媒体数据可参见图7A的下图所示的用户界面730,具体场景说明可参见图7A的说明。
不限于上述示例的情况,在另一些实施例中,若第一界面不包括第一业务信息,则电子设备100无法识别出第一业务信息对应的意图信息,因此不会向电子设备200发送指示信息,电子设备200也不会执行第一业务信息对应的意图操作,例如电子设备100和电子设备200保持显示当前界面不变,不限于此,电子设备100还可以显示提示信息,例如当前无可流转的业务等。例如,图4A所示的用户界面410(第一界面)仅包括消息411和消息413,不包括消息412(地址信息)和消息414(视频信息),则电子设备100和电子设备200可以保持显示当前界面不变。
图9所示的显示方法的示例可参见图3A-图3C、图4A-图4C、图5-图6和图7A-图7B。
在图9所示的方法中,电子设备100接收到第一用户操作时,可以基于当前显示的用户界面进行意图识别,并通过电子设备200实现识别出的意图,无需用户手动触发实现意图,减少用户操作,交互更加高效便捷。
不限于图9示例的情况,在另一些实施例中,S107中,电子设备100可以识别第一界面得到界面识别结果,可选地,电子设备100可以基于界面解析模型得到界面识别结果,具体可参见图1B中的界面解析模块的说明,可选地,电子设备100获取界面解析模型的方式和图9所示的获取融合模型的方式类似。电子设备100可以基于界面识别结果进行意图识别并得到意图信息,可选地,电子设备100可以基于意图解析模型得到意图信息,具体可参见图1B中的意图解析模块的说明,可选地,电子设备100获取意图解析模型的方式和图9所示的获取融合模型的方式类似。
不限于图9示例的情况,在另一些实施例中,电子设备100可以不执行S107-S108,电子设备100接收第一用户操作后,可以识别第一界面得到界面识别结果,并向电子设备200发送界面识别结果和指示信息,该指示信息可以指示电子设备200实现该界面识别结果对应的意图信息。电子设备200可以基于界面识别结果进行意图识别并得到意图信息,然后基于该意图信息执行意图操作,并输出执行意图操作对应的多媒体数据。例如,电子设备100包括图1B所示的检测模块和界面解析模块,电子设备200包括图1B所示的意图解析模块和意图触发模块。可选地,电子设备200可以基于意图解析模型得到意图信息,可选地,电子设备200获取意图解析模型的方式和图9所示的电子设备100获取融合模型的方式类似。
不限于图9示例的情况,在另一些实施例中,电子设备100可以不执行S107-S108,电子设备100接收第一用户操作后,可以向电子设备200发送电子设备100显示的界面内容和指示信息,该指示信息可以指示电子设备200实现该界面内容对应的意图信息。电子设备200可以执行图9的S107得到意图信息,然后基于该意图信息执行意图操作,并输出执行意图操作对应的多媒体数据。例如,电子设备100包括图1B所示的检测模块,电子设备200包括图1B所示的界面解析模块、意图解析模块和意图触发模块。可选地,电子设备200可以基于界面解析模型得到界面识别结果,可选地,电子设备200可以基于意图解析模型得到意图信息,可选地,电子设备200获取界面解析模型和/或意图解析模型的方式和图9所示的电子设备100获取融合模型的方式类似。可选地,电子设备200可以基于融合模型根据界面内容得到意图信息,可选地,电子设备200获取融合模型的方式和图9所示的电子设备100获取融合模型的方式类似。
请参见图10,图10是本申请实施例提供的又一种显示方法的流程示意图。该方法中的第一设备可以是上述电子设备100,该方法中的第二设备可以是上述电子设备200。该方法可以包括但不限于如下步骤:
S201:第一设备显示第一界面。
在一些实施例中,第一界面包括第一信息,第一信息和第一业务相关,第一信息的示例可参见图9的S102中的第一业务信息的示例。
S202:第一设备接收第一用户操作。
在一些实施例中,S202和图9的S103类似,具体可参见图9的S103的说明。
S203:第一设备响应于第一用户操作,识别第一界面以确定意图信息。
在一些实施例中,意图信息指示执行第一指令,第一指令用于实现第一业务。
在一些实施例中,第一指令是根据意图信息解析得到的指令,在另一些实施例中,第一指令是意图信息包括的指令。
在一些实施例中,意图信息包括第一信息,例如第一信息为指示第一位置的信息,意图信息指示针对第一位置进行导航。在一些实施例中,意图信息包括第一信息相关的信息,例如第一信息为指示第一视频的信息,可以根据第一信息获取到播放第一视频的方式(如第一视频的播放源),意图信息指示通过上述获取到的播放第一视频的方式播放第一视频。
在一些实施例中,第一设备识别第一信息以确定意图信息的说明可参见图9的S107的说明。
S204:第一设备向第二设备发送意图信息。
S205:第二设备根据意图信息执行第一指令,生成第二信息。
在一些实施例中,第二设备执行第一指令可以对应上述执行意图操作,意图操作的示例可参见上图9示出的意图操作。
在一些实施例中,第二信息是执行第一指令生成的多媒体数据,例如音频数据、视频数据、图像数据等。
S206:第二设备根据第二信息显示第二界面。
在一些实施例中,第二设备可以输出第二信息,例如播放第二信息包括的音频数据,又例如显示第二信息包括的图像数据,又例如播放第二信息包括的视频数据。
在一些实施例中,第二设备显示第二界面的示例可参见上图9示例的意图操作的说明中,电子设备200输出意图操作对应的多媒体数据的示例。
在一些实施例中,第一信息为指示第一位置的信息,例如为图3A所示的用户界面310中的消息3122,消息3122指示的第一位置为地理位置“北京站”,或者为图3B所示的用户界面340中的消息342,又例如图3C所示的用户界面350中的位置控件352,位置控件352指示的第一位置为地点“首都博物馆”。第一业务为导航业务。第二信息是执行针对第一位置的导航操作生成的显示信息,例如将目的地设置为地理位置“北京站”的位置信息并进行导航时生成的多媒体数据,第二设备根据该第二信息显示的第二界面为图3A的下图所示的用户界面330,又例如将目的地设置为地点“首都博物馆”的位置信息并进行导航时生成的多媒体数据。具体场景说明可参见图3A、图3B或图3C的说明。
在另一些实施例中,第一信息为指示第一视频的信息,例如为图4A所示的用户界面410中的消息414,消息414指示的第一视频的名称为"我的一天",又例如图5所示的用户界面510包括的信息(如名称521),指示的第一视频的名称为"影片1"。第一业务为视频播放业务。第二信息是播放第一视频生成的显示信息,例如播放视频"我的一天"生成的多媒体数据,第二设备根据该第二信息显示的第二界面为图4B的下图所示的用户界面430,又例如播放视频"影片1"生成的多媒体数据,第二设备根据该第二信息显示的第二界面为图5的下图所示的用户界面520。具体场景说明可参见图4B或图5的说明。
在另一些实施例中,第一信息为指示第一菜谱的信息,例如为图6所示的用户界面610包括的信息(如标题611),指示的第一菜谱的名称为“脆皮五花肉”。第一业务为烹饪业务。第二信息是实现第一菜谱对应的烹饪业务生成的显示信息,例如按照菜谱“脆皮五花肉”进行工作而生成的多媒体数据,第二设备根据该第二信息显示的第二界面为图6的下图所示的用户界面630。具体场景说明可参见图6的说明。
在另一些实施例中,第一信息为指示第一题目和第一题目的答案的信息,例如图7A所示的用户界面710中的习题712,习题712包括题目712A和答案712B。第一业务为试卷生成业务,本申请以试卷是包括至少一个题目且不包括答案的信息为例进行说明。第二界面包括第一题目,不包括第一题目的答案,第二界面例如为图7A的下图所示的用户界面730,用户界面730包括上述题目712A(用户界面730中的题目信息733),但不包括上述答案712B。具体场景说明可参见图7A的说明。
在一些实施例中,第一界面还包括第三信息,第三信息和第二业务相关,第三信息和第二业务的说明与上述第一信息和第一业务的说明类似。S203可以具体为:第一设备识别第一信息以确定第四信息,识别第三信息以确定第五信息,基于第一预设规则从第四信息和第五信息中确定上述意图信息为第四信息。其中,第四信息指示执行上述第一指令,第五信息指示执行第二指令,第二指令用于实现第二业务,第二指令的说明和上述第一指令的说明类似。
可选地,第一预设规则可以包括:第二设备的设备类型为预设设备类型,可以理解为是第一设备可以根据连接的第二设备的设备类型确定所要实现的意图信息。例如,上述场景2中,第一界面为聊天界面,第一信息、第三信息分别为图4A的上图所示的用户界面410中的消息412、消息414,第一信息为位置信息,第三信息为视频信息。第一信息对应的第一业务为导航业务,第四信息指示针对地理位置“北京站”进行导航,第三信息对应的第二业务为视频播放业务,第五信息指示播放名称为“我的一天”的视频。第一设备为电子设备100(智能手机),第二设备为电子设备200,若第二设备为车载电脑,第一设备可以确定意图信息为上述第四信息,场景示例可参见图4A,若第二设备为智能电视,可以确定意图信息为上述第五信息,场景示例可参见图4B。
可选地,第一预设规则可以包括:第二设备支持的业务包括第一业务,例如,第一业务为导航业务,若第二设备为安装有地图应用、能基于地图应用执行导航业务的设备,第一设备可以确定上述意图信息为上述第四信息。
可选地,第一预设规则可以包括:第一业务的优先级高于第二业务的优先级。
可选地,第一信息和第三信息为即时通讯消息,第一预设规则可以包括:第一信息的接收时间晚于第三信息的接收时间。例如,上述场景2中,第一界面为聊天界面,第一信息、第三信息分别为图4A的上图所示的用户界面410中的消息412、消息414,由于消息414的接收时间更晚,因此第一设备可以确定意图信息为消息414对应的第五信息,第五信息指示播放名称为“我的一天”的视频,场景示例可参见图4B。
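作为示意,下面的Python草图按上述第一预设规则从多个候选意图中确定意图信息(设备类型与业务的对应关系、字段名等均为本示例假设):

```python
# 候选意图:第四信息(导航业务)与第五信息(视频播放业务),recv_time为对应消息的接收时间
candidates = [
    {"name": "第四信息", "service": "导航业务", "intent": "navigation", "recv_time": 1},
    {"name": "第五信息", "service": "视频播放业务", "intent": "play_video", "recv_time": 2},
]

# 预设设备类型与业务的对应关系
preferred_service = {"车载电脑": "导航业务", "智能电视": "视频播放业务"}

def choose_intent(candidates, device_type, supported_services):
    """依次应用第一预设规则,从候选意图中确定意图信息。"""
    # 规则:第二设备的设备类型为预设设备类型时,选择其对应业务的意图
    target = preferred_service.get(device_type)
    for candidate in candidates:
        if candidate["service"] == target and candidate["service"] in supported_services:
            return candidate
    # 规则:否则在第二设备支持的业务中,选择接收时间更晚的消息对应的意图
    supported = [c for c in candidates if c["service"] in supported_services]
    return max(supported, key=lambda c: c["recv_time"]) if supported else None

print(choose_intent(candidates, "车载电脑", {"导航业务", "视频播放业务"}))   # -> 第四信息
print(choose_intent(candidates, "智能电视", {"视频播放业务"}))               # -> 第五信息
```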
图10所示的方法例如应用图1C所示的通信系统10,第一设备为电子设备100,第二设备为电子设备200,具体可参见图1C的说明。
请参见图11,图11是本申请实施例提供的又一种显示方法的流程示意图。该方法中的第一设备可以是上述电子设备100,该方法中的第二设备可以是上述电子设备200。该方法可以包括但不限于如下步骤:
S301:第一设备显示第一界面。
S302:第一设备接收第一用户操作。
S303:第一设备响应于第一用户操作,识别第一界面以确定意图信息。
S301-S303和图10的S201-S203一致,具体可参见图10的S201-S203的说明。
S304:第一设备根据意图信息执行第一指令,生成第二信息。
S304和图10的S205类似,区别在于,S304中的执行设备为第一设备,而非第二设备。
S305:第一设备向第二设备发送第二信息。
S306:第二设备根据第二信息显示第二界面。
S306和图10的S206一致,具体可参见图10的S206的说明。
图11的示例和图10的示例类似,区别在于,图11中执行第一指令和生成第二信息的不是第二设备,而是第一设备,具体可参见图10的示例。
图11所示的方法例如应用图1B所示的通信系统10,第一设备为电子设备100,第二设备为电子设备200,具体可参见图1B的说明。
不限于上图10和图11所示的情况,在另一些实施例中,识别第一界面以确定意图信息的设备可以不是第一设备,而是第二设备,例如第一设备响应于第一用户操作向第二设备发送第一界面相关的多媒体数据(如图像数据),第二设备基于接收到的数据进行意图识别,具体过程和上述第一设备识别第一界面以确定意图信息的过程类似,不再赘述。
当以上任一模块或单元以软件实现的时候,所述软件以计算机程序指令的方式存在,并被存储在存储器中,处理器可以用于执行所述程序指令以实现以上方法流程。所述处理器可以包括但不限于以下至少一种:中央处理单元(central processing unit,CPU)、微处理器、数字信号处理器(DSP)、微控制器(microcontroller unit,MCU)、或人工智能处理器等各类运行软件的计算设备,每种计算设备可包括一个或多个用于执行软件指令以进行运算或处理的核。该处理器可以是个单独的半导体芯片,也可以跟其他电路一起集成为一个半导体芯片,例如,可以跟其他电路(如编解码电路、硬件加速电路或各种总线和接口电路)构成一个SoC(片上系统),或者也可以作为一个ASIC的内置处理器集成在所述ASIC当中,该集成了处理器的ASIC可以单独封装或者也可以跟其他电路封装在一起。该处理器除了包括用于执行软件指令以进行运算或处理的核外,还可进一步包括必要的硬件加速器,如现场可编程门阵列(field programmable gate array,FPGA)、PLD(可编程逻辑器件)、或者实现专用逻辑运算的逻辑电路。
当以上模块或单元以硬件实现的时候,该硬件可以是CPU、微处理器、DSP、MCU、人工智能处理器、ASIC、SoC、FPGA、PLD、专用数字电路、硬件加速器或非集成的分立器件中的任一个或任一组合,其可以运行必要的软件或不依赖于软件以执行以上方法流程。
本领域普通技术人员可以理解,实现上述实施例方法中的全部或部分流程,可以通过计算机程序指令相关的硬件来完成,该计算机程序可存储于计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:只读存储器(read-only memory,ROM)或随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可存储计算机程序代码的介质。

Claims (13)

  1. 一种显示方法,其特征在于,应用于第一设备,所述第一设备和第二设备连接,所述方法包括:
    显示第一界面,所述第一界面包括第一信息,所述第一信息和第一业务相关;
    接收第一用户操作;
    响应于第一用户操作,识别所述第一界面以确定意图信息,所述意图信息指示执行第一指令,所述第一指令用于实现所述第一业务;
    向所述第二设备发送所述意图信息,所述意图信息用于所述第二设备执行所述第一指令并生成第二信息,所述第二信息用于所述第二设备显示第二界面。
  2. 如权利要求1所述的方法,其特征在于,所述第一界面还包括第三信息,所述第三信息和第二业务相关;
    所述识别所述第一界面以确定意图信息,包括:
    识别所述第一信息以确定第四信息,识别所述第三信息以确定第五信息,所述第四信息指示执行所述第一指令,所述第五信息指示执行第二指令,所述第二指令用于实现所述第二业务;
    基于第一预设规则,从所述第四信息和所述第五信息中确定所述意图信息为所述第四信息,所述第一预设规则包括以下至少一项:所述第二设备的设备类型为预设设备类型、所述第二设备支持的业务包括所述第一业务、所述第一业务的优先级高于所述第二业务的优先级。
  3. 如权利要求2所述的方法,其特征在于,所述第一信息为位置信息,所述第一业务为导航业务,所述第二业务和所述第一业务不同,所述第一预设规则包括所述第二设备的设备类型为所述预设设备类型,所述预设设备类型为车载设备。
  4. 如权利要求1-3任一项所述的方法,其特征在于,所述第一信息为指示第一位置的信息,所述第一业务为导航业务,所述第二信息是执行针对所述第一位置的导航操作生成的显示信息。
  5. 如权利要求1或2所述的方法,其特征在于,所述第一信息为指示第一视频的信息,所述第一业务为视频播放业务,所述第二信息是播放所述第一视频生成的显示信息。
  6. 如权利要求1或2所述的方法,其特征在于,所述第一信息为指示第一菜谱的信息,所述第一业务为烹饪业务,所述第二信息是实现所述第一菜谱对应的烹饪业务生成的显示信息。
  7. 如权利要求1或2所述的方法,其特征在于,所述第一信息为指示第一题目和所述第一题目的答案的信息,所述第一业务是试卷生成业务,所述第二界面包括所述第一题目,不包括所述第一题目的答案。
  8. 一种显示方法,其特征在于,应用于第一设备,所述第一设备和第二设备连接,所述方法包括:
    显示第一界面,所述第一界面包括第一信息,所述第一信息和第一业务相关;
    接收第一用户操作;
    响应于第一用户操作,识别所述第一界面以确定意图信息;
    根据所述意图信息执行第一指令,生成第二信息,所述第一指令用于实现所述第一业务;
    向所述第二设备发送所述第二信息,所述第二信息用于所述第二设备显示第二界面。
  9. 如权利要求8所述的方法,其特征在于,所述第一界面还包括第三信息,所述第三信息和第二业务相关;
    所述识别所述第一界面以确定意图信息,包括:
    识别所述第一信息以确定第四信息,识别所述第三信息以确定第五信息,所述第四信息指示执行所述第一指令,所述第五信息指示执行第二指令,所述第二指令用于实现所述第二业务;
    基于第一预设规则,从所述第四信息和所述第五信息中确定所述意图信息为所述第四信息,所述第一预设规则包括所述第二设备的设备类型为预设设备类型,和/或所述第一业务的优先级高于所述第二业务的优先级。
  10. 一种显示方法,其特征在于,应用于第二设备,所述第二设备和第一设备连接,所述方法包括:
    接收所述第一设备发送的意图信息,所述意图信息是所述第一设备接收到第一用户操作的情况下识别显示的第一界面确定的,所述第一界面包括第一信息,所述第一信息和第一业务相关;
    根据所述意图信息执行第一指令,生成第二信息,所述第一指令用于实现所述第一业务;
    根据所述第二信息显示第二界面。
  11. 一种显示方法,其特征在于,应用于第二设备,所述第二设备和第一设备连接,所述方法包括:
    接收所述第一设备发送的第一信息,所述第一信息是执行第一指令生成的信息,所述第一指令用于实现第一业务,所述第一指令是意图信息指示执行的指令,所述意图信息是所述第一设备接收到第一用户操作的情况下识别显示的第一界面确定的,所述第一界面包括第二信息,所述第二信息和所述第一业务相关;
    根据所述第一信息显示第二界面。
  12. 一种电子设备,其特征在于,包括收发器、处理器和存储器,所述存储器用于存储计算机程序,所述处理器调用所述计算机程序,用于执行如权利要求1-11任一项所述的方法。
  13. 一种计算机存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序被处理器执行时,实现权利要求1-11任一项所述的方法。
PCT/CN2022/136529 2021-12-08 2022-12-05 一种显示方法及电子设备 WO2023103948A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202111493706 2021-12-08
CN202111493706.2 2021-12-08
CN202210093485.8 2022-01-26
CN202210093485.8A CN116301557A (zh) 2021-12-08 2022-01-26 一种显示方法及电子设备
