CN117572991A - Input interface display method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN117572991A
Application number: CN202311602743.1A
Authority: CN (China)
Prior art keywords: input, input area, area, page, function
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 陈峥 (Chen Zheng)
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd

Abstract

The application discloses an input interface display method and device, an electronic device, and a readable storage medium, and belongs to the technical field of communication. The method includes: displaying a first page, where the first page includes N input areas, and N is a positive integer; receiving a first input of a user, where the first input is used to wake up an input interface; in response to the first input, determining a target input area from the N input areas based on an operation sequence of the first page and function description information of each input area, where the operation sequence is used to characterize operation behaviors of the user on the first page; and displaying an input interface matched with the target input area.

Description

Input interface display method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to an input interface display method, an input interface display device, electronic equipment and a readable storage medium.
Background
With the continuous development of display screens of electronic devices, screens are becoming larger and can display more and more content, improving the viewing experience for users. However, while displaying more content, a large screen may also complicate the user's operation experience.
For example, when a user wants to input content in a certain input area, the user needs to click on that input area on the screen to wake up the input interface. However, if multiple input areas are displayed on the screen, the user must first select the input area in which the content is to be input before waking up the input interface corresponding to that input area.
In this way, when multiple input areas are displayed on the screen of the electronic device, the user needs to select an input area from among them each time before waking up the input interface, making the whole input process overly complicated and the input efficiency low.
Disclosure of Invention
An object of the embodiments of the present application is to provide an input interface display method, an input interface display device, an electronic device, and a readable storage medium, which can simplify an input process and improve input efficiency.
In a first aspect, an embodiment of the present application provides an input interface display method, where the input interface display method includes: displaying a first page, wherein the first page comprises N input areas, and N is a positive integer; receiving a first input of a user, the first input being used to wake up an input interface; determining a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area in response to the first input; the operation sequence is used for representing the operation behavior of the user on the first page; and displaying an input interface matched with the target input area.
In a second aspect, embodiments of the present application provide an input interface display device, including: the device comprises a display module, a receiving module and a processing module; the display module is used for displaying a first page, wherein the first page comprises N input areas, and N is a positive integer; the receiving module is used for receiving a first input of a user, wherein the first input is used for waking up an input interface; the processing module is used for responding to the first input received by the receiving module, and determining target input areas from N input areas based on the operation sequence of the first page displayed by the display module and the function description information of each input area; the operation sequence is used for representing the operation behavior of the user on the first page; and the display module is also used for displaying an input interface matched with the target input area determined by the processing module.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the electronic device displays a first page, where the first page includes N input areas, and N is a positive integer; receives a first input of a user, the first input being used to wake up an input interface; determines, in response to the first input, a target input area from the N input areas based on an operation sequence of the first page and the function description information of each input area, where the operation sequence is used to characterize the operation behaviors of the user on the first page; and displays an input interface matched with the target input area. In this scheme, when N input areas are displayed on a page, the electronic device can directly predict the input area in which the user wants to input, based on the user's operation behaviors on the page and the function description information of the N input areas, without the user having to select one manually. The electronic device can thus wake up the input interface corresponding to that input area according to the user's input, which simplifies the input process and improves input efficiency.
Drawings
FIG. 1 is a first schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 2 is a second schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a page display of a browser application according to an embodiment of the present application;
FIG. 4 is a third schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 5 is a fourth schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 6 is a basic function dependency graph provided by an embodiment of the present application;
FIG. 7 is a fifth schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 8 is a first function dependency graph provided by an embodiment of the present application;
FIG. 9 is a second function dependency graph provided by an embodiment of the present application;
FIG. 10 is a sixth schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 11 is a seventh schematic flowchart of an input interface display method according to an embodiment of the present application;
FIG. 12 is a first schematic structural diagram of an input interface display device according to an embodiment of the present application;
FIG. 13 is a second schematic structural diagram of an input interface display device according to an embodiment of the present application;
FIG. 14 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
FIG. 15 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. The objects identified by "first," "second," etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally means that the associated objects are in an "or" relationship.
The method, the device, the electronic equipment and the readable storage medium for displaying the input interface provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
In the related art, taking a web page as an example, when a user wants to input an account in an input area of the web page, the user needs to select the input area in which the account is to be input, click on it to wake up the input interface matched with that input area, and then perform the input operation on the input interface. However, if multiple input areas are displayed in the web page at the same time, the user must first select, from among the multiple input areas, the input area in which the account is to be input before the input interface matched with that input area can be woken up.
In this way, when multiple input areas are displayed on the screen of the electronic device, the user needs to select an input area from among them each time before waking up the input interface, making the whole input process overly complicated and the input efficiency low.
In the embodiment of the application, the first page displayed by the electronic device includes N input areas, where N is a positive integer; the electronic device receives a first input of a user for waking up an input interface; in response to the first input, the electronic device determines a target input area from the N input areas based on an operation sequence of the first page and the function description information of each input area, where the operation sequence is used to characterize the operation behaviors of the user on the first page; and displays an input interface matched with the target input area. In this scheme, when N input areas are displayed on a page, the electronic device can directly predict the input area in which the user wants to input, based on the user's operation behaviors on the page and the function description information of the N input areas, without the user having to select one manually. The electronic device can thus wake up the input interface corresponding to that input area according to the user's input, which simplifies the input process and improves input efficiency.
The execution body of the input interface display method provided in this embodiment may be an input interface display device, and the input interface display device may be an electronic device, or may be a control module or a processing module in the electronic device. The technical solutions provided in the embodiments of the present application are described below by taking an electronic device as an example.
An embodiment of the application provides an input interface display method, and fig. 1 shows a flowchart of the input interface display method provided by the embodiment of the application, and the method can be applied to electronic equipment. As shown in fig. 1, the method for displaying an input interface provided in the embodiment of the present application may include the following steps 201 to 204.
Step 201, the electronic device displays a first page.
In some embodiments of the present application, the first page includes N input areas, where N is a positive integer.
In some embodiments of the present application, the first page may be a page displayed in any application in the electronic device. For example, a web site page in a browser application, or an account login page of an application.
In some embodiments of the present application, the input area is an area in which the user can input content using the input interface.
Illustratively, the input area is typically a rectangular control with various attached elements, commonly found on pages such as forms, chat software, registration/login pages, and search areas.
Step 202, the electronic device receives a first input from a user.
In some embodiments of the present application, the first input is used to wake up the input interface.
In some embodiments of the present application, the first input may include a touch input of the user on the first page, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited in the embodiments of the present application.
Illustratively, the touch input includes a user sliding input, or a click input, or the like, on the first page. The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the present application may be a single click input, a double click input, or any number of click inputs, and the click input may also be a long press input or a short press input.
In one possible example, the first input is a preset wake shortcut gesture; the electronic device may monitor for this gesture and wake up the input interface after receiving it from the user.
Step 203, the electronic device determines a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area in response to the first input.
In some embodiments of the present application, the operation sequence of the first page is used to characterize an operation behavior of the user on the first page.
In some embodiments of the present application, the operation sequence of the first page is obtained according to a time sequence of operation behaviors of the user on the first page.
In some embodiments of the present application, the operation sequence includes M operation data, where each operation data corresponds to one operation behavior.
In some embodiments of the present application, the above-described operation data includes at least one of: the operation behaviors of the user on the page, the operation time corresponding to the operation behaviors of the user on the page and the operation coordinates corresponding to the operation behaviors of the user on the page.
For example, the operation behavior of the user on the page may be pressing, lifting or sliding.
In some embodiments of the present application, the electronic device may sort the M operation data according to the operation time of the operation behavior corresponding to each of the M operation data, so as to obtain the operation sequence.
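The time-ordering described above can be sketched as follows; this is a minimal illustration in which the record fields (`behavior`, `timestamp_ms`, `coords`) and function name are hypothetical stand-ins for the operation data items listed earlier:

```python
from dataclasses import dataclass

@dataclass
class OperationData:
    # One record per operation behavior of the user on the page.
    behavior: str        # e.g. "press", "lift", "slide"
    timestamp_ms: int    # operation time of the behavior
    coords: tuple        # (x, y) operation coordinates of the behavior

def build_operation_sequence(records):
    """Sort the M operation data records by operation time to form the sequence."""
    return sorted(records, key=lambda r: r.timestamp_ms)

records = [
    OperationData("lift", 1200, (40, 80)),
    OperationData("press", 1000, (40, 80)),
    OperationData("slide", 1100, (42, 90)),
]
sequence = build_operation_sequence(records)
```

After sorting, the sequence reflects the chronological order in which the user acted on the page, which is what the later prediction step consumes.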
In some embodiments of the present application, the function description information of each input area is used to characterize the input content type in the input area.
Example 1: if the function description information of the input area is a user name, the input content type in the input area may be a social account number, a phone number, an identification card number, or the like.
Example 2: if the function description information of the input area is a password, the input content type in the input area is indicated to be a user password.
Example 3: if the function description information of the input area is a website, the input content type in the input area is indicated as the website.
Example 4: if the function description information of the input area is chat, the input content type in the input area is comment or bullet screen.
In some embodiments of the present application, the electronic device determines the user's likely next operation behavior by analyzing the above operation sequence, finds, from the N input areas and in combination with the function description information of each input area, the input area corresponding to that next operation behavior, and determines it as the target input area.
Step 204, the electronic device displays an input interface that matches the target input area.
In some embodiments of the present application, the input interface may include a soft keyboard.
In some embodiments of the present application, the soft keyboard refers to a virtual keyboard drawn and displayed on the user's electronic device by software, through which the user may input content. For devices without a physical keyboard, a soft keyboard is typically provided by the input method software.
In some embodiments of the present application, after receiving a first input from a user, the electronic device wakes up an input interface matching the target input area in response to the first input.
It can be understood that, after the electronic device wakes up the input interface in response to the first input, a connection relationship is established between the input interface and the target input area, and the content input by the user on the input interface can be displayed in the target input area; after the user finishes the input operation, the input interface is disconnected from the input area and hidden.
In one possible embodiment, the electronic device detects the number of input areas in the first page after receiving the first input of the user, so that the electronic device can determine how to wake up the input interface according to the number of input areas.
The electronic device detects the number of input areas in the first page after receiving the first input of the user. If the number of input areas is equal to 1, it directly wakes up the input interface corresponding to that input area. If the number of input areas is greater than 1, it determines the target input area according to steps 201 to 203 and then wakes up the input interface corresponding to the target input area, so that the input interface of the target input area can be woken up for input without the user manually selecting an input area.
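A hedged sketch of this branching wake-up logic (the function names are hypothetical; `predict_target` stands in for the prediction of steps 201 to 203):

```python
def handle_wake_input(input_areas, predict_target):
    """Decide how to wake the input interface from the number of input areas.

    `predict_target` stands in for the prediction based on the operation
    sequence and the function description information of each input area.
    """
    if not input_areas:
        return None                      # no input area: nothing to wake
    if len(input_areas) == 1:
        return input_areas[0]            # one area: wake its interface directly
    return predict_target(input_areas)   # several areas: predict the target

# Usage: with three areas, the prediction step chooses (here: the second one).
target = handle_wake_input(["url", "username", "password"],
                           predict_target=lambda areas: areas[1])
```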
In the input interface display method provided by the embodiment of the application, the electronic device displays a first page, where the first page includes N input areas, and N is a positive integer; receives a first input of a user, the first input being used to wake up an input interface; determines, in response to the first input, a target input area from the N input areas based on an operation sequence of the first page and the function description information of each input area, where the operation sequence is used to characterize the operation behaviors of the user on the first page; and displays an input interface matched with the target input area. In this scheme, when N input areas are displayed on a page, the electronic device can directly predict the input area in which the user wants to input, based on the user's operation behaviors on the page and the function description information of the N input areas, without the user having to select one manually. The electronic device can thus wake up the input interface corresponding to that input area according to the user's input, which simplifies the input process and improves input efficiency.
Optionally, in this embodiment of the present application, as shown in fig. 2 in conjunction with fig. 1, before the electronic device determines, in response to the first input, the target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area, the input interface display method provided in this embodiment of the present application further includes steps 301 and 302:
step 301, the electronic device obtains page element information in a first area in a first page.
In some embodiments of the present application, the first area is an area within a predetermined range of the N input areas.
In some embodiments of the present application, the predetermined range may be a range centered on a center coordinate point of the input area and having a radius of the first distance.
In some embodiments of the present application, the first distance may be set by default by the electronic device, or may be set autonomously by the user. For example, it may be set to 0.5 cm, 0.6 cm, etc., i.e., the vicinity around the input area.
In some embodiments of the present application, the page element information may be text information, such as "user name" or "login", icon information, such as "+", or barrier-free description information.
Illustratively, the barrier-free description information is control information for assisting disabled users in acquiring application information. It should be noted that the barrier-free description mechanism is determined by the device operating system; the corresponding function on Android is TalkBack, and the corresponding function on iOS is VoiceOver.
In a possible example, the first area may further include N input areas.
The page element information may also be prompt information of the input area.
Step 302, the electronic device determines function description information of the N input areas based on the page element information.
In some embodiments of the present application, the electronic device may analyze and obtain the function description information of the input area according to the acquired page element information of the input area.
In a possible example, the electronic device uses a machine learning algorithm to summarize the function description information of the input area according to the page element information of the input area.
In another possible example, the electronic device may also directly select the function description information of the input area from the preset correspondence between the page element information and the function description information.
For example, the machine learning algorithm may use a commonly used multi-classification model: the page element information of the input area is input into the multi-classification model, which outputs function description information of a predefined type. Alternatively, a large language model may be used: the page element information of the input area is input into the large language model, which summarizes and outputs the function description information of the input area.
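The preset-correspondence variant mentioned above can be sketched as a simple lookup; the table entries and returned labels here are illustrative assumptions, not fixed by the patent:

```python
# Hypothetical preset correspondence between page element information and
# function description information of an input area.
PRESET_FUNCTION_MAP = {
    "user name": "username",
    "account": "username",
    "password": "password",
    "verification code": "verification_code",
    "http": "website",
}

def describe_input_area(page_element_text):
    """Select the function description information of an input area directly
    from the preset correspondence between page element information and
    function description information."""
    text = page_element_text.lower()
    for keyword, description in PRESET_FUNCTION_MAP.items():
        if keyword in text:
            return description
    return "unknown"
```

A real implementation would fall back to the classifier or large language model when no table entry matches, rather than returning a fixed label.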
For example, as shown in fig. 3, taking the first page as an example of the web page 21 of the browser, the web page includes four input areas, namely, an input area 22, an input area 23, an input area 24 and an input area 25. The electronic equipment obtains the function description information of the input area 22 as a website according to the content in the input area 22; according to the page element information of the text form of the user name on the left side of the input area 23, the function description information of the input area 23 is obtained as the user name; according to the page element information of the text form of the password on the left side of the input area 24, obtaining the function description information of the input area 24 as the password; and obtaining the function description information of the input area 25 as the verification code according to the page element information of the text form of the verification code on the left side of the input area 25.
In the following, a process of the electronic device acquiring the page element information and determining the function description information of the input area according to the page element information is described in detail in one possible embodiment, specifically, as shown in fig. 4, including steps A1 to A6:
and step A1, the electronic equipment judges whether the input area is surrounding, namely whether the preset range provides the barrier-free description language, if the barrier-free description language exists, the barrier-free description language is used as page element information in a text form, step A6 is executed, and otherwise, step A3 is executed.
And A2, judging whether a text type control exists in the distance of 0.5 cm on the left side of the input area, namely, the preset range, if so, using the text type control as page element information in a text form, executing the step A6, and otherwise, executing the step A3.
And A3, the electronic equipment judges whether a text type control exists in the distance of 0.5 cm on the right side of the input area, namely the preset range, if so, the control is used as page element information in a text form, the step A6 is executed, and otherwise, the step A4 is executed.
And A4, the electronic equipment judges whether a prompt type icon exists in the input area, namely the preset range, if so, the prompt type icon is used as page element information in an icon form, the step A6 is executed, and otherwise, the step A5 is executed.
And step A5, the electronic equipment acquires prompt message information displayed in the input area and uses the prompt message information as page element information in a text form, and the step A6 is executed.
And A6, taking the obtained page element information in the text form or the icon form of the input area as input, and using a machine learning algorithm to summarize the function description of the input area so as to obtain the function description of the input area.
In this way, the electronic device can determine the function description information of different input areas according to the element information of different input areas, so that the electronic device can determine the target input area according to the function description information of the input areas.
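The cascade of steps A1 to A6 can be sketched as a fall-through of checks; the dictionary keys describing nearby controls are hypothetical stand-ins for the real UI queries:

```python
def collect_page_element_info(area):
    """Cascade of checks A1-A5 for one input area.

    `area` is a hypothetical dict describing controls found within the
    preset range; each check returns (form, content) for step A6.
    """
    if area.get("accessibility_text"):      # A1: barrier-free description info
        return ("text", area["accessibility_text"])
    if area.get("left_text"):               # A2: text control within 0.5 cm, left
        return ("text", area["left_text"])
    if area.get("right_text"):              # A3: text control within 0.5 cm, right
        return ("text", area["right_text"])
    if area.get("hint_icon"):               # A4: prompt-type icon in the area
        return ("icon", area["hint_icon"])
    # A5: fall back to the prompt message displayed in the input area itself
    return ("text", area.get("placeholder", ""))
```

The result of the cascade would then be fed to the machine learning step A6 to summarize the function description information.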
Optionally, in an embodiment of the present application, as shown in fig. 5, step 203, in which the electronic device determines, in response to the first input, the target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area, specifically includes steps 203a to 203c:
step 203a, the electronic device generates a basic function dependency graph of the N input areas based on the function description information of each input area.
In some embodiments of the present application, the basic function dependency graph includes at least one dependency edge.
In some embodiments of the present application, the basic function dependency graph includes the N input areas.
In some embodiments of the present application, the dependent edge is a directed edge.
In some embodiments of the present application, each of the above-mentioned dependency edges is used to characterize a function dependency relationship between two input areas.
In some embodiments of the present application, the functional dependency relationship between the two input areas is used to represent a causal relationship between the two input areas, and if the content in one input area is changed, the content in the other input area is also changed.
For example, if the input area a is changed, the input area B needs to be changed accordingly, so that it can be considered that the input area B depends on the input area a, in other words, there is a dependency relationship between the input area a and the input area B.
In some embodiments of the present application, the electronic device uses the N input areas as vertices and the function dependency relationships between the input areas as dependency edges to generate the basic function dependency graph.
It should be noted that, a dependency edge is established between two input areas having a function dependency relationship, and an input area having no function dependency relationship does not need to establish a dependency edge.
In some embodiments of the present application, after the electronic device obtains the function description information of all the input areas in the first page, it obtains the function dependency relationship between each input area and the other input areas through a dependency-relationship analysis method according to that function description information, and generates the basic function dependency graph according to the function dependency relationships between the input areas.
In some embodiments of the present application, the dependency analysis method may be a method of presetting a dependency mapping table, or may be a large language model method.
For example, the preset dependency mapping table may record dependency relationships between input areas that are preset on the electronic device.
The electronic device generates the dependency mapping table from these preset dependency relationships, so that when analyzing the dependency relationship between input areas, it can look the relationship up in the table directly.
Illustratively, the large language model method described above may employ ChatGPT. ChatGPT is given a task description instruction such as: "For the two data items 'user name' and 'password', please confirm whether there is a dependency relationship between them" or "For the two data items 'user name' and 'password', please confirm whether the former depends on the latter". The model only needs to answer yes or no; the electronic device then generates the dependency relationship according to ChatGPT's reply, for example, that the password depends on the user name.
For example, in connection with fig. 3, taking the first page as an example of a web site page of the browser, since there is a dependency relationship between the input area 23 and the input area 24, there is a dependency edge L1 between the input area 23 and the input area 24, and the dependency edge L1 indicates that the input area 24 depends on the input area 23; there is also a dependency relationship between the input area 23 and the input area 25, so that there is a dependency edge L2 between the input area 23 and the input area 25, which indicates that the input area 25 depends on the input area 23, and that the input area 22 does not have any dependency relationship, and there is no dependency edge. The electronic device generates a basic function dependency graph with each input region as a vertex, and the dependent edges L1 and L2, as shown in fig. 6.
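For illustration only, the construction of the basic function dependency graph from a preset dependency mapping table can be sketched as follows. This is a minimal Python sketch; the area ids, function descriptions, and the contents of the mapping table are assumptions modelled on the fig. 3 example, not the disclosed implementation:

```python
# Edges are stored as (dependent, depended_on) pairs, matching the
# convention that edge L1 means "input area 24 depends on input area 23".

# Assumed preset dependency mapping table: description -> prerequisites.
DEPENDENCY_TABLE = {
    "password": ["user name"],
    "verification code": ["user name"],
}

def build_basic_graph(areas):
    """areas: dict of area id -> function description information."""
    desc_to_id = {desc: aid for aid, desc in areas.items()}
    edges = []
    for aid, desc in areas.items():
        for prereq in DEPENDENCY_TABLE.get(desc, []):
            if prereq in desc_to_id:  # only link areas present on the page
                edges.append((aid, desc_to_id[prereq]))
    return edges

# The four input areas of the fig. 3 browser page (ids are illustrative).
areas = {22: "web address", 23: "user name", 24: "password", 25: "verification code"}
print(build_basic_graph(areas))  # [(24, 23), (25, 23)] -> edges L1 and L2
```

The input area 22 acquires no edge, matching the basic function dependency graph of fig. 6.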
Step 203b, the electronic device determines at least one input area group based on the basic function dependency graph.
In some embodiments of the present application, each input region group corresponds to a connected component in the basic function dependency graph.
In some embodiments of the present application, each input region group includes the input regions corresponding to all vertices in one connected component. In other words, the electronic device takes the input areas corresponding to all vertices in a connected component as one input area group.
In some embodiments of the present application, the connection relationship mentioned above means that a dependency edge exists between two input regions.
For example, referring to fig. 6, since the input area 23, the input area 24, and the input area 25 all belong to the same connected component, the electronic device takes the input area 23, the input area 24, and the input area 25 as one input area group corresponding to that connected component; since the input area 22 has no connection relation with any other input area, the input area 22 forms an input area group on its own.
In some embodiments of the present application, the electronic device uses input areas corresponding to all vertices in one connected component in the basic function dependency graph as one input area group, and if the basic function dependency graph includes multiple connected components, determines multiple input area groups.
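The grouping described above can be sketched with a standard connected-component traversal (stdlib Python only; edge direction is ignored when computing connectivity, and the vertex ids are the illustrative ones from fig. 6):

```python
from collections import defaultdict

def input_area_groups(vertices, edges):
    """One input area group per connected component of the dependency graph."""
    adj = defaultdict(set)
    for a, b in edges:          # treat dependency edges as undirected
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:            # depth-first traversal of one component
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        groups.append(comp)
    return groups

# fig. 6: edges L1 and L2 connect areas 23, 24, 25; area 22 is isolated.
print(input_area_groups([22, 23, 24, 25], [(24, 23), (25, 23)]))
```

With the fig. 6 input, this yields two groups: {22} and {23, 24, 25}.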
In some embodiments of the present application, each of the input area groups corresponds to one function description information.
In some embodiments of the present application, the electronic device obtains the function description information of one input area group according to the function description information of each input area in the input area group through summary analysis.
Illustratively, the method for summarizing the function description information of an input area group may be a preset mapping from combinations of function descriptions to a group description; the electronic device may also adopt a multi-category weighted voting method based on the function description information of each input area; or the summarization may be performed using other NLP techniques.
For example, referring to fig. 6, taking the first page as an example of a web page of the browser, the input area 23, the input area 24 and the input area 25 are an input area group, and according to the function information description of each input area, for example, a user name, a password and a verification code, the function description of the input area group obtained by summarizing and analyzing may be login.
In some embodiments of the present application, the electronic device may generate the input area group list according to the at least one input area group and the functional description of each input area group.
In some embodiments of the present application, the input area group list is composed of at least one input area group, and each input area group is composed of an input area element portion, that is, an input area of the N input areas, and a functional description of the input area group.
It should be noted that each input area in the first page belongs to the input area element portion of exactly one input area group.
Illustratively, the specific contents of the above-described input region group list may be expressed by the following formulas:

LG = {AG_1, AG_2, …, AG_i}

AG_i = <G_i, D_Gi>

wherein LG is the input area group list, AG_i is an input area group, G_i is the set of input areas in the group, and D_Gi is the function description information of the group.
Step 203c, the electronic device determines a first input area group from the at least one input area group based on the operation sequence.
In some embodiments of the present application, the electronic device pre-determines, according to the operation sequence, an operation behavior that may be performed by the user next, in other words, pre-determines which function may be performed by the user next, and then, in combination with the function description of each input area group, selects, from at least one input area group, an input area group corresponding to a function that may be used by the user, and uses the input area group as the first input area group.
Optionally, in an embodiment of the present application, after determining the first input area group, the electronic device uses, as the target input area, a shallowest dependent input area whose input state is an "unfinished" state in the first input area group.
In some embodiments of the present application, the target input area is one input area in the first input area group.
In some embodiments of the present application, the shallowest dependent input area is the input area corresponding to a vertex with an in-degree of 0 in the second function dependency graph. If a plurality of shallowest dependent input areas exist, the input area at the uppermost part of the page is taken as the target input area.
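The selection of the target input area can be sketched as follows, assuming (this direction convention is an assumption) that edges are oriented from prerequisite to dependent, so that the shallowest dependency has in-degree 0; ties are broken by the topmost position on the page:

```python
def pick_target(group, edges, top_y, unfinished):
    """group: area ids in the first input area group.
    edges: (prerequisite, dependent) pairs of the second dependency graph.
    top_y: area id -> vertical position (smaller = higher on the page).
    unfinished: set of area ids whose input state is "unfinished"."""
    indeg = {v: 0 for v in group}
    for prereq, dependent in edges:
        if dependent in indeg:
            indeg[dependent] += 1
    # shallowest dependent areas: in-degree 0 and still unfinished
    roots = [v for v in group if indeg[v] == 0 and v in unfinished]
    if not roots:
        return None
    return min(roots, key=lambda v: top_y[v])  # topmost area wins ties

# fig. 6 login group: "user name" (23) is the prerequisite of 24 and 25.
print(pick_target([23, 24, 25], [(23, 24), (23, 25)],
                  {23: 400, 24: 480, 25: 560}, {23, 24, 25}))  # -> 23
```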
Therefore, the electronic equipment groups the input areas according to the function description and the position distribution of the input areas in the page, and performs function clustering on the input targets in the multi-input-area scene, so that the accuracy of hit of the input intention of the user is improved.
Optionally, in an embodiment of the present application, as shown in fig. 7 in conjunction with fig. 5, the step 203b "the electronic device determines, based on the basic function dependency graph, at least one input area group" specifically includes steps 203b1 to 203b3:
in step 203b1, the electronic device removes the first dependency edge in the basic function dependency graph to obtain the first function dependency graph when the first dependency edge exists in the basic function dependency graph.
In some embodiments of the present application, the distance between the two input areas connected by the first dependency edge is greater than a first preset distance. For example, the first preset distance is typically 1/6 of the screen width.
In some embodiments of the present application, the first preset distance may be set by default for the electronic device, or may be set autonomously for a user.
For example, referring to fig. 3, taking the first page as an example of a web page of the browser, since the distance between the input area 25 and the input area 23 exceeds the first preset distance, the electronic device needs to remove the dependency edge L2 between the input area 25 and the input area 23, so as to generate the first function dependency graph, as shown in fig. 8.
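Step 203b1 can be sketched as below; the centre coordinates of the input areas and the 1/6-screen-width limit are illustrative assumptions matching the fig. 3 example:

```python
import math

def prune_long_edges(edges, centers, screen_width):
    """Drop dependency edges whose endpoints are farther apart than the
    first preset distance (assumed here to be screen_width / 6)."""
    limit = screen_width / 6
    kept = []
    for a, b in edges:
        if math.dist(centers[a], centers[b]) <= limit:
            kept.append((a, b))
    return kept

# Area 25 sits far below areas 23/24, so edge L2 (25 -> 23) is removed.
centers = {23: (360, 400), 24: (360, 480), 25: (360, 900)}
print(prune_long_edges([(24, 23), (25, 23)], centers, 1080))  # [(24, 23)]
```

The surviving edge set corresponds to the first function dependency graph of fig. 8.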
In step 203b2, if the distance between the first input area and the second input area is smaller than the second preset distance, the electronic device increases the second dependency edge between the first input area and the third input area, so as to obtain a second function dependency graph.
In some embodiments of the present application, the second preset distance may be set by default for the electronic device, or may be set autonomously for a user.
In some embodiments of the present application, the first input area is an input area where no dependency edge exists in the N input areas.
In some embodiments of the present application, the second input area and the third input area belong to the same connected component.
In some embodiments of the present application, the second input area is an input area closest to the first input area in the connected component in the first function dependency graph.
In some embodiments of the present application, the third input area is an input area having a functional dependency relationship with the first input area.
In some embodiments of the present application, the electronic device first searches for an input area without any dependency edge and takes it as the first input area; it then detects whether a connected component exists around the first input area. If so, the input area in that connected component closest to the first input area is taken as the second input area, and the electronic device determines whether the distance between the first input area and the second input area is smaller than the second preset distance. If it is, a dependency edge, namely the second dependency edge, is added between the first input area and a third input area in the connected component, the third input area being one that has a functional dependency relationship with the first input area. It is understood that the third input area may be the second input area, or may be another input area in the connected component to which the second input area belongs.
Illustratively, referring to fig. 3 and fig. 8, taking the first page as a web site page of a browser as an example, the input area 22 and the input area 25 are both input areas without dependency edges. The connected component in the first function dependency graph includes the input area 23 and the input area 24. In the web page, the input area 22 is closest to the input area 23, and the input area 25 is closest to the input area 24, i.e., the second input area. Since the distance between the input area 22 and the input area 23 in the web page is greater than the second preset distance, the electronic device does not need to add a dependency edge for the input area 22. The distance between the input area 25 and the input area 23 in the first page is smaller than the second preset distance, so the electronic device adds the dependency edge L3 between the input area 25, i.e., the first input area, and the input area 23, i.e., the third input area, to finally generate the second function dependency graph, as shown in fig. 9.
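Step 203b2 can be sketched as below. One simplification is assumed: the orphan is attached to the nearest component member, whereas the embodiment attaches it to the member having a functional dependency relationship with the orphan (the third input area); coordinates and the second preset distance are also illustrative:

```python
import math

def attach_orphans(vertices, edges, centers, second_preset_distance):
    """For each input area with no dependency edge, attach it to a nearby
    connected component if its nearest member is close enough."""
    linked = {v for e in edges for v in e}   # areas already in a component
    new_edges = list(edges)
    for orphan in vertices:
        if orphan in linked:
            continue
        candidates = [(math.dist(centers[orphan], centers[v]), v)
                      for v in linked]
        if not candidates:
            continue
        d, nearest = min(candidates)          # the second input area
        if d < second_preset_distance:
            new_edges.append((orphan, nearest))   # the second dependency edge
    return new_edges

# fig. 8: only edge L1 survives; area 25 is close to the component, 22 is not.
centers = {22: (360, 100), 23: (360, 400), 24: (360, 480), 25: (360, 560)}
print(attach_orphans([22, 23, 24, 25], [(24, 23)], centers, 150))
```

With this input, the orphan 25 is re-attached while 22 stays isolated, mirroring the second function dependency graph of fig. 9.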
Step 203b3, the electronic device determines at least one input area group based on the second functional dependency graph.
In some embodiments of the present application, the electronic device uses input areas corresponding to all vertices in one connected component in the second functional dependency graph as one input area group, and if the second functional dependency graph includes multiple connected components, determines multiple input area groups.
For example, referring to fig. 9, the second function dependency graph has two connected components. Connected component 1: the input area 23, the input area 24, and the input area 25; connected component 2: the input area 22. The electronic device divides the four input areas of the first page into two groups, G_1 = {input area 23, input area 24, input area 25} and G_2 = {input area 22}, and performs function summary analysis based on the function description information of each input area to obtain the function description part of each input area group: D_G1 = "login" and D_G2 = "web address". The input area group list is finally obtained as LG = {AG_1, AG_2} = {<G_1, "login">, <G_2, "web address">}.
Therefore, the electronic device removes or adds the dependent edges according to the distance between the input areas, namely the position of the input areas in the page, so that the dependency relationship between the input areas is more accurate, and the accuracy of the electronic device in determining the target input areas is further ensured.
Optionally, in an embodiment of the present application, as shown in fig. 10 in conjunction with fig. 5, the step 203c "the electronic device determines a first input area group from the at least one input area group based on the operation sequence" specifically includes step 203c1 and step 203c2:
Step 203c1, the electronic device determines the confidence coefficient of the first input area group based on the operation sequence and the function description information of the first input area group, so as to obtain M confidence coefficients, where M is a positive integer.
In some embodiments of the present application, a confidence level corresponds to a set of input regions.
In some embodiments of the present application, the confidence is used to characterize a probability that a user desires to implement a function corresponding to the function description information of the input area group.
In some embodiments of the present application, the electronic device may evaluate the confidence of each input region group with a machine learning algorithm.
Illustratively, the electronic device uses the operational sequence and the functional descriptive information of the first set of input regions as features, evaluates the confidence level of the first set of input regions using a machine learning algorithm, which may typically be performed using an LSTM algorithm.
It should be noted that, the electronic device may repeat the above steps for each input region group to obtain a confidence level.
Step 203c2, the electronic device determines a first input region group based on the M confidence levels.
In some embodiments of the present application, the electronic device may determine the set of input regions having a confidence level greater than or equal to a predetermined threshold as the first set of input regions. Or the electronic device re-determines the first input region group if the confidence degrees corresponding to all the input region groups are smaller than the preset threshold value.
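The threshold-based selection can be sketched as below; the group names, scores, and the 0.5 threshold mentioned later in the text are illustrative, and the scores would in practice come from the machine-learning evaluation described above:

```python
def pick_group(confidences, threshold=0.5):
    """confidences: dict of group name -> confidence in [0, 1].
    Returns the highest-confidence group if it clears the threshold,
    otherwise None (caller falls back to the nearest-to-last-operation
    rule of step 203C2)."""
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return best
    return None

print(pick_group({"login": 0.82, "web address": 0.11}))  # -> login
```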
Therefore, the electronic equipment further considers the operation behavior of the user and the filling condition of the content of the input area by combining the operation sequence and the function description information of the input area group, so that the accuracy of the electronic equipment is higher when the electronic equipment automatically selects the target input area.
In one possible embodiment, the electronic device performs the confidence evaluation on all the input area groups.
In another possible embodiment, the electronic device performs the confidence evaluation only on the input region group in which the input region having the input state being the "unfinished" state is located.
In some embodiments of the present application, the input status being "incomplete" indicates that no text exists in the input area, or that the existing text is incomplete.
In some embodiments of the present application, the electronic device may use an input completion checking method to check the content in the input area to determine the input state of the input area. The embodiment of the application does not limit the input completion degree checking method.
The input completion degree checking method may be checked by using a preset regular expression, or may be checked by using an NLP technique.
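A preset-regular-expression completion check can be sketched as below; the per-description patterns are assumptions for illustration, not patterns disclosed in the embodiment:

```python
import re

# Assumed completion rules: function description -> "complete" pattern.
COMPLETION_PATTERNS = {
    "user name": r"^\w{3,}$",
    "password": r"^.{6,}$",
    "web address": r"^https?://\S+$",
}

def is_unfinished(description, text):
    """True if the input area's state should be treated as "unfinished"."""
    if not text:
        return True                     # empty area is always unfinished
    pattern = COMPLETION_PATTERNS.get(description)
    if pattern is None:
        return False                    # no rule: assume complete
    return re.fullmatch(pattern, text) is None

print(is_unfinished("password", "abc"))                     # True: too short
print(is_unfinished("web address", "https://example.com"))  # False: complete
```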
Therefore, the electronic equipment can only evaluate the confidence coefficient of the input area group in the unfinished state, and because a part of input areas are screened out, the accuracy of the electronic equipment for determining the target input area is improved, and the input efficiency is improved.
Optionally, in an embodiment of the present application, as shown in fig. 11 in conjunction with fig. 10, the step 203c2 "the electronic device determines a first input region group based on the M confidence levels" specifically includes step 203C1 or step 203C2:
In step 203C1, in a case where at least one of the M confidence levels is greater than or equal to the predetermined threshold, the electronic device takes the input region group corresponding to a confidence level that is greater than or equal to the predetermined threshold as the first input region group.
In some embodiments of the present application, the predetermined threshold may be set by default for the electronic device, or may be set by user-definition, and typically, the threshold may be set to 0.5.
In some embodiments of the present application, the "at least one confidence coefficient of the M confidence coefficients is greater than or equal to the predetermined threshold" indicates that the function corresponding to the function description information of the input area group corresponding to the at least one confidence coefficient is a function that the user desires to use, and therefore, the input area group corresponding to the at least one confidence coefficient is determined as the first input area group.
In step 203C2, in a case where the M confidence levels are all smaller than the predetermined threshold, the electronic device takes the input area group where the sixth input area is located as the first input area group.
In some embodiments of the present application, the sixth input area is the input area closest to the last operation action.
In some embodiments of the present application, the electronic device selects, as the first input region group, an input region group where an input region having a "incomplete" input state is located, which is closest to a coordinate position of an operation corresponding to a last group of operation data in the operation sequence.
In one possible embodiment, the electronic device may not yet have acquired an operation sequence for the first page, in other words, the user has not operated on the first page. In this case, the electronic device may take the input region group that contains an input region in the "unfinished" state and has the largest area as the first input region group.
In some embodiments of the present application, the method of calculating the area of the input region group by the electronic device may be to sum the area of each input region in the input region group.
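This no-operation-sequence fallback can be sketched as below; the group names and pixel areas are illustrative assumptions:

```python
def largest_unfinished_group(groups):
    """groups: list of (name, members), where members is a list of
    (area_in_px, unfinished_flag) tuples, one per input area.
    Returns the name of the eligible group with the largest summed area."""
    eligible = [(sum(a for a, _ in members), name)
                for name, members in groups
                if any(flag for _, flag in members)]  # has an unfinished area
    if not eligible:
        return None
    return max(eligible)[1]   # tuple comparison: largest total area wins

groups = [("login", [(9000, True), (9000, False)]),
          ("web address", [(12000, False)])]
print(largest_unfinished_group(groups))  # -> login
```

Here "web address" is larger but fully filled in, so the "login" group, which still contains an unfinished area, is selected.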
Therefore, the electronic equipment further considers the operation behavior of the user and the filling condition of the content of the input area by combining the operation sequence and the function description information of the input area group, so that the accuracy of the electronic equipment is higher when the electronic equipment automatically selects the target input area.
Illustratively, taking a video APP page as an example, there is an input area "bullet screen" and an input area "comment" in the page. If the user slides the comment list after entering the page, the confidence level of the input area group where the input area "comment" is located is higher than that of the input area group where the input area "bullet screen" is located; if this confidence level is higher than the threshold, the input area group where the input area "comment" is located is selected as the input area group matched with the user behavior. Since this input area group includes only the input area "comment", which is in the "unfinished" state, the shallowest dependent input area is the input area "comment". The electronic device can directly wake up the input interface corresponding to the input area "comment", so that the user can directly input content on the input interface. In this way, the function and position distribution of the input areas in the page can be integrated, and the input area with which the user wants to interact can be automatically determined in combination with the operation behavior of the user, so that the user can quickly wake up the input method keyboard, and the convenience of user input is improved.
It should be noted that, in the input interface display method provided in the embodiment of the present application, the execution body may be an input interface display device, or an electronic device, or may be a functional module or entity in the electronic device. In the embodiment of the present application, an input interface display device provided in the embodiment of the present application is described by taking an example of an input interface display method performed by an input interface display device.
Fig. 12 shows a schematic diagram of one possible configuration of an input interface display device involved in an embodiment of the present application. As shown in fig. 12, the input interface display device 700 may include: a display module 701, a receiving module 702 and a processing module 703.
The display module 701 is configured to display a first page, where the first page includes N input areas, and N is a positive integer; the receiving module 702 is configured to receive a first input from a user, where the first input is used to wake up an input interface; the processing module 703 is configured to determine, in response to the first input received by the receiving module 702, a target input area from the N input areas based on the operation sequence of the first page displayed by the display module 701 and the function description information of each input area; the operation sequence is used for representing the operation behavior of the user on the first page; the display module 701 is further configured to display an input interface that matches the target input area determined by the processing module 703.
Optionally, in the embodiment of the present application, the processing module 703 is specifically configured to: generating basic function dependency graphs of N input areas based on the function description information of each input area, wherein the basic function dependency graphs comprise at least one dependency edge, and each dependency edge is used for representing the function dependency relationship between two input areas; determining at least one input region group based on the basic function dependency graph, wherein each input region group corresponds to one connected component in the basic function dependency graph, and one input region group comprises input regions corresponding to all vertexes in one connected component; determining a first input region group from the at least one input region group based on the operation sequence; the target input area is one input area in the first input area group.
Optionally, in the embodiment of the present application, the processing module 703 is specifically configured to: under the condition that a first dependency edge exists in the basic function dependency graph, removing the first dependency edge in the basic function dependency graph to obtain the first function dependency graph, wherein the distance between two input areas connected by the first dependency edge is larger than a first preset distance; if the distance between the first input area and the second input area is smaller than the second preset distance, adding a second dependency edge between the first input area and the third input area in the first function dependency graph to obtain a second function dependency graph; determining at least one input region group based on the second functional dependency graph; the second input area and the third input area belong to the same connected component, the first input area is an input area without a dependent edge in the N input areas, the second input area is an input area closest to the first input area in each connected component in the first function dependency graph, and the third input area has a function dependency relationship with the first input area.
Optionally, in an embodiment of the present application, in conjunction with fig. 12, as shown in fig. 13, the apparatus 700 further includes: an acquisition module 704 and a determination module 705. The acquisition module 704 is configured to acquire page element information in a first area in the first page before the processing module 703 determines the target input area from the N input areas based on the operation sequence of the first page displayed by the display module 701 and the function description information of each input area, where the first area is an area within a predetermined range of a fourth input area; the determination module 705 is configured to determine the function description information of the N input areas based on the page element information acquired by the acquisition module 704; wherein the fourth input area is one of the N input areas.
Optionally, in the embodiment of the present application, the processing module 703 is specifically configured to: based on the operation sequence and the function description information of the first input area group, determining the confidence coefficient of the first input area group to obtain M confidence coefficients, wherein M is a positive integer, and the confidence coefficient is used for representing the probability that a user expects to realize the function corresponding to the function description information of the input area group; a first set of input regions is determined based on the M confidence levels.
Optionally, in the embodiment of the present application, the processing module 703 is specifically configured to: when at least one confidence coefficient of the M confidence coefficients is larger than or equal to a preset threshold value, an input area group corresponding to the confidence coefficient of which the confidence coefficient is larger than or equal to the preset threshold value is used as a first input area group; or under the condition that the M confidence degrees are smaller than a preset threshold value, taking the input area group of the sixth input area as the first input area group, wherein the sixth input area is the input area closest to the last operation behavior.
In the input interface display device provided by the embodiment of the application, the input interface display device displays a first page, wherein the first page comprises N input areas, and N is a positive integer; receiving a first input of a user, the first input being used to wake up an input interface; determining a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area in response to the first input; the operation sequence is used for representing the operation behavior of the user on the first page; and displaying an input interface matched with the target input area. In the scheme, under the condition that the N input areas are displayed on the page, the user can directly predict the input area which the user wants to input by means of the operation behaviors of the user in the page and the function description information of the N input areas without manual selection, so that the electronic equipment can wake up the input interface corresponding to the input area according to the input of the user, the input process is simplified, and the input efficiency is improved.
The input interface display device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be other devices than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, mobile internet appliance (Mobile Internet Device, MID), augmented reality (augmented reality, AR)/Virtual Reality (VR) device, robot, wearable device, ultra-mobile personal computer, UMPC, netbook or personal digital assistant (personal digital assistant, PDA), etc., but may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The input interface display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The input interface display device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 12, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 14, the embodiment of the present application further provides an electronic device 800, including a processor 801 and a memory 802, where a program or an instruction capable of running on the processor 801 is stored in the memory 802, and the program or the instruction implements each step of the embodiment of the input interface display method when executed by the processor 801, and the steps can achieve the same technical effects, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 15 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 15 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
The display unit 106 is configured to display a first page, where the first page includes N input areas, and N is a positive integer; the user input unit 107 is configured to receive a first input from a user, where the first input is used to wake up an input interface; the processor 110 is configured to determine a target input area from the N input areas based on the operation sequence of the first page displayed by the display unit 106 and the function description information of each input area in response to the first input received by the user input unit 107; the operation sequence is used for representing the operation behavior of the user on the first page; the display unit 106 is further configured to display an input interface that matches the target input area determined by the processor 110.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to: generating basic function dependency graphs of N input areas based on the function description information of each input area, wherein the basic function dependency graphs comprise at least one dependency edge, and each dependency edge is used for representing the function dependency relationship between two input areas; determining at least one input region group based on the basic function dependency graph, wherein each input region group corresponds to one connected component in the basic function dependency graph, one input region group comprises input regions corresponding to all vertexes in one connected component, and determining a first input region group from the at least one input region group; the target input area is one input area in the first input area group.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to: in a case where a first dependency edge exists in the basic function dependency graph, remove the first dependency edge from the basic function dependency graph to obtain a first function dependency graph, wherein the distance between the two input areas connected by the first dependency edge is greater than a first preset distance; if the distance between a first input area and a second input area is smaller than a second preset distance, add a second dependency edge between the first input area and a third input area in the first function dependency graph to obtain a second function dependency graph; and determine at least one input area group based on the second function dependency graph; wherein the second input area and the third input area belong to the same connected component, the first input area is an input area without a dependency edge among the N input areas, the second input area is the input area closest to the first input area among the connected components in the first function dependency graph, and the third input area has a function dependency relationship with the first input area.
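The refinement described above — removing over-long dependency edges and then reattaching an isolated input area that lies close enough to an existing component — might look roughly like this. The screen positions, the Euclidean metric, and the simplification of attaching the isolated area directly to its nearest connected area (rather than to a separate third input area within the same component, as the text specifies) are all assumptions of this sketch.

```python
import math

def refine_edges(positions, edges, first_dist, second_dist):
    """positions: list of (x, y) per input area; edges: (a, b) pairs.
    Returns the refined edge list of the second function dependency graph."""
    dist = lambda a, b: math.dist(positions[a], positions[b])
    # Step 1: drop "first dependency edges" whose on-screen span
    # exceeds the first preset distance.
    kept = [(a, b) for a, b in edges if dist(a, b) <= first_dist]
    connected = {v for e in kept for v in e}
    # Step 2: an area left without any edge is reattached when its
    # nearest connected area is within the second preset distance.
    added = []
    for v in range(len(positions)):
        if v in connected or not connected:
            continue
        nearest = min(connected, key=lambda u: dist(v, u))
        if dist(v, nearest) < second_dist:
            added.append((v, nearest))
    return kept + added
```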
Optionally, in this embodiment of the present application, the processor 110 is further configured to: before the target input area is determined from the N input areas based on the operation sequence of the first page displayed by the display unit 106 and the function description information of each input area, acquire page element information in a first area of the first page, where the first area is an area within a predetermined range of the N input areas; the processor 110 is further configured to determine the function description information of the N input areas based on the page element information.
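Deriving function description information from nearby page elements, as described above, could be sketched as follows. The element representation (a position plus a label text) and the nearest-label rule are illustrative assumptions; the application does not prescribe a particular element format.

```python
import math

def describe_areas(area_positions, elements, max_range):
    """area_positions: (x, y) per input area; elements: list of
    ((x, y), text) page elements. Returns one description per area,
    taken from the nearest element within max_range, else None."""
    descriptions = []
    for pos in area_positions:
        in_range = [(math.dist(pos, e_pos), text)
                    for e_pos, text in elements
                    if math.dist(pos, e_pos) <= max_range]
        # min over (distance, text) tuples picks the closest label.
        descriptions.append(min(in_range)[1] if in_range else None)
    return descriptions
```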
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to: determine, based on the operation sequence and the function description information of each input area group, a confidence coefficient of that input area group, to obtain M confidence coefficients, wherein M is a positive integer, and a confidence coefficient is used to represent the probability that the user expects to implement the function corresponding to the function description information of the input area group; and determine the first input area group based on the M confidence coefficients.
Optionally, in an embodiment of the present application, the processor 110 is specifically configured to: in a case where at least one of the M confidence coefficients is greater than or equal to a preset threshold value, take an input area group corresponding to a confidence coefficient that is greater than or equal to the preset threshold value as the first input area group; or, in a case where all of the M confidence coefficients are smaller than the preset threshold value, take the input area group of a sixth input area as the first input area group, wherein the sixth input area is the input area closest to the last operation behavior.
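The selection logic of this and the preceding paragraph — scoring each input area group, keeping groups at or above a preset threshold, and otherwise falling back to the group nearest the last operation behavior — can be sketched as follows. The keyword-overlap confidence measure and all names are illustrative assumptions; the application does not specify how the confidence coefficient is computed.

```python
def group_confidence(operation_keywords, group_description):
    """Fraction of the group's description words that the user's recent
    operations touched — a crude stand-in for the confidence coefficient."""
    words = group_description.split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in operation_keywords)
    return hits / len(words)

def select_first_group(groups, operation_keywords, last_area, threshold=0.5):
    """groups: list of (area_ids, description). Returns the area ids of
    the chosen first input area group."""
    scored = [(group_confidence(operation_keywords, desc), ids, desc)
              for ids, desc in groups]
    above = [g for g in scored if g[0] >= threshold]
    if above:
        return max(above)[1]  # highest-confidence group at/above threshold
    # All below threshold: fall back to the group whose areas are
    # closest to the last operated area.
    return min(scored, key=lambda g: min(abs(a - last_area) for a in g[1]))[1]
```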
In the electronic device provided by the embodiment of the application, the electronic device displays a first page, wherein the first page includes N input areas, and N is a positive integer; receives a first input of a user, the first input being used to wake up an input interface; determines, in response to the first input, a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area, the operation sequence being used to represent the operation behavior of the user on the first page; and displays an input interface matched with the target input area. In this scheme, when the N input areas are displayed on the page, the electronic device can directly predict the input area in which the user wants to input, by means of the user's operation behavior on the page and the function description information of the N input areas, without requiring the user to select it manually, so that the electronic device can wake up the input interface corresponding to that input area according to the user's input, which simplifies the input process and improves input efficiency.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 109 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), or Direct Rambus RAM (DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the application further provides a readable storage medium, on which a program or an instruction is stored, where the program or the instruction realizes each process of the embodiment of the input interface display method when executed by a processor, and the same technical effects can be achieved, so that repetition is avoided and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running a program or an instruction, implementing each process of the input interface display method embodiment, and achieving the same technical effect, so as to avoid repetition, and no redundant description is provided herein.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, a system-on-a-chip, or the like.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the input interface display method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware alone, although in many cases the former is the preferred implementation. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (14)

1. An input interface display method, characterized in that the method comprises:
displaying a first page, wherein the first page comprises N input areas, and N is a positive integer;
receiving a first input of a user, wherein the first input is used for waking up an input interface;
determining a target input area from the N input areas based on an operation sequence of the first page and function description information of each input area in response to the first input; the operation sequence is used for representing the operation behavior of the user on the first page;
and displaying an input interface matched with the target input area.
2. The method of claim 1, wherein the determining a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area comprises:
generating a basic function dependency graph of the N input areas based on the function description information of each input area, wherein the basic function dependency graph comprises at least one dependency edge, and each dependency edge is used for representing a function dependency relationship between two input areas;
determining at least one input area group based on the basic function dependency graph, wherein each input area group corresponds to one connected component in the basic function dependency graph, and one input area group comprises the input areas corresponding to all vertices in the connected component;
determining a first input area group from the at least one input area group based on the operation sequence;
wherein the target input area is one input area in the first input area group.
3. The method of claim 2, wherein the determining at least one set of input regions based on the base functional dependency graph comprises:
removing a first dependency edge in the basic function dependency graph to obtain a first function dependency graph under the condition that the first dependency edge exists in the basic function dependency graph, wherein the distance between two input areas connected by the first dependency edge is larger than a first preset distance;
if a distance between a first input area and a second input area is smaller than a second preset distance, adding a second dependency edge between the first input area and a third input area in the first function dependency graph to obtain a second function dependency graph;
determining the at least one input region group based on the second functional dependency graph;
wherein the second input area and the third input area belong to the same connected component, the first input area is an input area without a dependency edge among the N input areas, the second input area is the input area closest to the first input area in each connected component in the first function dependency graph, and the third input area has a function dependency relationship with the first input area.
4. The method of claim 2, wherein before the determining a target input area from the N input areas based on the operation sequence of the first page and the function description information of each input area, the method further comprises:
acquiring page element information in a first area in the first page, wherein the first area is an area within a preset range from the N input areas;
and determining the function description information of the N input areas based on the page element information.
5. The method according to claim 2 or 4, wherein the determining a first input area group from the at least one input area group based on the operation sequence comprises:
determining a confidence coefficient of each input area group based on the operation sequence and the function description information of that input area group, to obtain M confidence coefficients, wherein M is a positive integer, and a confidence coefficient is used to represent a probability that the user expects to implement the function corresponding to the function description information of the input area group;
determining the first input area group based on the M confidence coefficients.
6. The method of claim 5, wherein the determining the first input area group based on the M confidence coefficients comprises:
when at least one confidence coefficient of the M confidence coefficients is greater than or equal to a preset threshold value, taking an input area group corresponding to a confidence coefficient that is greater than or equal to the preset threshold value as the first input area group; or,
in a case where all of the M confidence coefficients are smaller than the preset threshold value, taking an input area group of a sixth input area as the first input area group, wherein the sixth input area is the input area closest to the last operation behavior.
7. An input interface display device, characterized in that the input interface display device comprises: the device comprises a display module, a receiving module and a processing module;
the display module is used for displaying a first page, wherein the first page comprises N input areas, and N is a positive integer;
the receiving module is used for receiving a first input of a user, and the first input is used for waking up an input interface;
the processing module is used for responding to the first input received by the receiving module, and determining a target input area from the N input areas based on the operation sequence of the first page displayed by the display module and the function description information of each input area; the operation sequence is used for representing the operation behavior of the user on the first page;
And the display module is also used for displaying an input interface matched with the target input area determined by the processing module.
8. The apparatus according to claim 7, wherein the processing module is specifically configured to:
generating a basic function dependency graph of the N input areas based on the function description information of each input area, wherein the basic function dependency graph comprises at least one dependency edge, and each dependency edge is used for representing a function dependency relationship between two input areas;
determining at least one input area group based on the basic function dependency graph, wherein each input area group corresponds to one connected component in the basic function dependency graph, and one input area group comprises the input areas corresponding to all vertices in the connected component;
determining a first input area group from the at least one input area group based on the operation sequence;
wherein the target input area is one input area in the first input area group.
9. The apparatus according to claim 8, wherein the processing module is specifically configured to:
removing a first dependency edge in the basic function dependency graph to obtain a first function dependency graph under the condition that the first dependency edge exists in the basic function dependency graph, wherein the distance between two input areas connected by the first dependency edge is larger than a first preset distance;
if a distance between a first input area and a second input area is smaller than a second preset distance, adding a second dependency edge between the first input area and a third input area in the first function dependency graph to obtain a second function dependency graph;
determining the at least one input region group based on the second functional dependency graph;
wherein the second input area and the third input area belong to the same connected component, the first input area is an input area without a dependency edge among the N input areas, the second input area is the input area closest to the first input area in each connected component in the first function dependency graph, and the third input area has a function dependency relationship with the first input area.
10. The apparatus of claim 8, wherein the apparatus further comprises: an acquisition module and a determination module;
the acquiring module is configured to acquire page element information in a first area in the first page, where the first area is an area within a predetermined range from the N input areas, before determining a target input area from the N input areas, based on the operation sequence of the first page and the function description information of each input area displayed by the display module;
The determining module is configured to determine function description information of the N input areas based on the page element information acquired by the acquiring module.
11. The apparatus according to claim 8 or 10, characterized in that the processing module is specifically configured to:
determine a confidence coefficient of each input area group based on the operation sequence and the function description information of that input area group, to obtain M confidence coefficients, wherein M is a positive integer, and a confidence coefficient is used to represent a probability that the user expects to implement the function corresponding to the function description information of the input area group; and
determine the first input area group based on the M confidence coefficients.
12. The apparatus according to claim 11, wherein the processing module is specifically configured to:
when at least one confidence coefficient of the M confidence coefficients is greater than or equal to a preset threshold value, take an input area group corresponding to a confidence coefficient that is greater than or equal to the preset threshold value as the first input area group; or,
in a case where all of the M confidence coefficients are smaller than the preset threshold value, take an input area group of a sixth input area as the first input area group, wherein the sixth input area is the input area closest to the last operation behavior.
13. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the input interface display method of any one of claims 1 to 6.
14. A readable storage medium, wherein a program or instructions are stored on the readable storage medium, which when executed by a processor, implement the steps of the input interface display method of any one of claims 1 to 6.
CN202311602743.1A 2023-11-27 2023-11-27 Input interface display method and device, electronic equipment and readable storage medium Pending CN117572991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311602743.1A CN117572991A (en) 2023-11-27 2023-11-27 Input interface display method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117572991A true CN117572991A (en) 2024-02-20

Family

ID=89895251



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination