CN117270717A - Human-computer interaction method, device, equipment and storage medium based on a user interface

Publication number
CN117270717A
Authority
CN
China
Prior art keywords
area
display area
image
class
text
Legal status
Pending
Application number
CN202311149241.8A
Other languages
Chinese (zh)
Inventor
张慧娴
王越
郭浩
刘妍
苏婧
丁振新
Current Assignee
Beijing Kuxun Technology Co Ltd
Original Assignee
Beijing Kuxun Technology Co Ltd
Application filed by Beijing Kuxun Technology Co Ltd
Priority to CN202311149241.8A
Publication of CN117270717A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G06F 3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Abstract

The application discloses a human-computer interaction method, device, equipment and storage medium based on a user interface, belonging to the field of human-computer interaction. The method comprises the following steps: displaying an access portal, the access portal being an interaction portal that triggers display of a user interface; in response to a triggering operation on the access portal, displaying the user interface, the user interface comprising an image-type region with a first display area and a text-type region with a second display area; and, in response to an expansion condition being satisfied, increasing the image-type region from the first display area to a third display area and reducing the text-type region to a fourth display area. In the application, the text-type region has a larger display area when the user interface is entered, so that the user can more easily obtain information from the text-type region; after the expansion condition is satisfied, the display area of the image-type region is increased, allowing the image-type region to play its role of attracting the user.

Description

Human-computer interaction method, device, equipment and storage medium based on a user interface
Technical Field
The present invention relates to the field of human-computer interaction, and in particular, to a human-computer interaction method, device, equipment and storage medium based on a user interface.
Background
With the development of Internet technology, the form of information transmission has changed greatly, shifting from the earlier dominance of print media to the current dominance of network transmission. In particular, with the rise of multimedia technology, information publishers tend to use multimedia means to spread information in user interfaces. As publishers have become aware that multimedia information is more attractive to information recipients, the layout given to multimedia information has grown larger and larger. Recipients are thus more easily attracted by large-format multimedia information and ignore the text information, which has a larger information-carrying capacity, so the information they receive is incomplete.
In the related art, text information is added to multimedia information to increase its information-carrying capacity, but this approach easily overloads the recipient's processing of information, and the key information cannot be obtained quickly.
Therefore, how to design the user interface so that the user notices the text information, without preventing the information publisher from using multimedia information to attract the information recipient, is a problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a human-computer interaction method, device and equipment based on a user interface, and a storage medium. The technical solution is as follows:
In one aspect, an embodiment of the present application provides a human-computer interaction method based on a user interface, the method comprising:
displaying an access portal, the access portal being an interaction portal that triggers display of a user interface;
in response to a triggering operation on the access portal, displaying the user interface, the user interface comprising an image-type region and a text-type region, the image-type region having a first display area and the text-type region having a second display area;
in response to an expansion condition being satisfied, increasing the image-type region from the first display area to a third display area, and reducing the text-type region to a fourth display area;
wherein the first display area is smaller than the third display area, and the second display area is larger than the fourth display area.
In another aspect, an embodiment of the present application provides a human-computer interaction device based on a user interface, the device comprising:
a first display module, configured to display an access portal, the access portal being an interaction portal that triggers display of a user interface;
a second display module, configured to display the user interface in response to a triggering operation on the access portal, the user interface comprising an image-type region and a text-type region, the image-type region having a first display area and the text-type region having a second display area;
an expansion module, configured to, in response to an expansion condition being satisfied, increase the image-type region from the first display area to a third display area and reduce the text-type region to a fourth display area.
In another aspect, an embodiment of the present application provides a computer device, the computer device comprising a processor and a memory, the memory storing a computer program that is loaded and executed by the processor to implement the above human-computer interaction method based on a user interface.
In another aspect, embodiments of the present application provide a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the above-described user interface-based human-computer interaction method.
In another aspect, embodiments of the present application provide a computer program product comprising a computer program stored in a computer readable storage medium; the computer program is read from the computer readable storage medium and executed by a processor of a computer device, causing the computer device to perform the above-described user interface based human-machine interaction method.
The technical solutions provided by the embodiments of the present application can bring the following beneficial effects:
In the technical solution, both the image-type region and the text-type region carry effective information. When the user interface is entered, the effective information of the text-type region is highlighted and that of the image-type region is de-emphasized; after the expansion condition is satisfied, the effective information of the text-type region is de-emphasized and that of the image-type region is highlighted, so that the user can receive the effective information of both the image-type region and the text-type region.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a computer system provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a computer system provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of layouts of the image-type region and the text-type region in a user interface of the present application;
FIG. 5 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 6 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 8 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 10 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 11 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 12 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 13 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 14 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 15 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 16 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 17 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 18 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 19 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 20 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 21 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 22 is a schematic diagram of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 23 is a schematic diagram of layouts of the image-type region and the text-type region in a user interface of the present application;
FIG. 24 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 25 is a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application;
FIG. 26 is a block diagram of a human-computer interaction device based on a user interface provided by an exemplary embodiment of the present application;
FIG. 27 is a block diagram of a computer device provided by an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be noted that, before and during the collection of user-related data, the present application may display a prompt interface or popup window, or output a voice prompt message, to inform the user that their data is currently being collected. The application begins the steps of obtaining user-related data only after receiving the user's confirmation of the prompt interface or popup window; otherwise (i.e., when no confirmation of the prompt interface or popup window is received), those steps end and the user-related data is not obtained. In other words, all user data collected in the present application is collected with the user's consent and authorization, and the collection, use and processing of user data comply with the relevant laws, regulations and standards of the relevant countries and regions.
First, terms related to the present application are described:
In response to: used to indicate the condition or state on which a performed operation depends. When the condition or state on which it depends is satisfied, the one or more operations performed may be performed in real time or with a set delay; unless otherwise specified, there is no limitation on the order in which multiple operations are performed.
FIG. 1 illustrates a schematic diagram of a computer system provided in an exemplary embodiment of the present application. The computer system may include a terminal device 110 and a server 140.
The terminal device 110 may be a laptop computer, a desktop computer, a mobile phone, a tablet computer, an e-book reader, a video game console, or the like.
The terminal device 110 includes a memory and a processor. The memory may include one or more computer-readable storage media, and the computer-readable storage medium includes at least one of random access memory (RAM), read-only memory (ROM), and flash memory. An operating system and application programs are installed in the memory.
The operating system is the underlying software that provides applications with secure access to the computer hardware; it may be Android or iOS. The operating system supports the downloading and installation of applications.
Optionally, the terminal device 110 further includes a touch screen, which may be a capacitive screen or a resistive screen. The touch screen is used to enable interaction between the terminal device and the user 130. In the embodiments of the present application, the terminal device obtains, through the touch screen, the interactive operations that the user 130 triggers on interfaces in the application program.
An application program is installed and runs on the terminal device 110, and a user interface 120 is designed in the application program.
The user interface 120 is used to display image-text information, which includes image-type information and text-type information. The user interface 120 includes an image-type region 123 and a text-type region 124. The image-type region 123 is used to display image-type information, which includes picture information, video information, and the like; the text-type region 124 is used to display text-type information, which includes at least one of commodity information, article titles, article content, and application information. The display areas of the image-type region 123 and the text-type region 124 can change.
The terminal device 110 is the terminal device used by the user 130, and the user 130 may operate the user interface 120 to obtain the image-text information. An operation performed by the user 130 on the user interface 120 is referred to as an interactive operation. Interactive operations include sliding operations and clicking operations, where a sliding operation includes at least one of sliding up, down, left, and right; interactive operations may be further classified according to the interaction region. Illustratively, the interaction regions include the image-type region 123 and the text-type region 124, and the interactive operations include at least one of a sliding operation within the image-type region, a clicking operation within the image-type region, a sliding operation within the text-type region, a clicking operation within the text-type region, and a sliding operation across the image-type region and the text-type region. Optionally, the terminal device 110 changes the user interface 120 according to the interactive operation, illustratively switching from the state corresponding to interface 121 to the state corresponding to interface 122, or from the state corresponding to interface 122 to the state corresponding to interface 121.
In some embodiments, the computer system further includes a server 140, as shown in FIG. 2. The server 140 may be any one of a number of servers, a virtual cloud storage, or a cloud computing center. Optionally, the server 140 is configured to provide the terminal device 110 with an installation package of the application program; or, the server 140 is configured to provide the terminal device 110 with the user interface 120; or, the server 140 is configured to provide the terminal device 110 with the image-text information; or, the server 140 is configured to provide the terminal device 110 with the image-type information; or, the server 140 is configured to provide the terminal device 110 with the text-type information; or, the server 140 is configured to receive an interactive operation and instruct the terminal device 110 to switch from the state corresponding to interface 121 to the state corresponding to interface 122; or, the server 140 is configured to receive an interactive operation and instruct the terminal device 110 to switch from the state corresponding to interface 122 to the state corresponding to interface 121.
In some embodiments, the terminal device 110 and the server 140 are connected via a wired or wireless network.
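The classification of interactive operations described above can be modeled as a small type hierarchy. A minimal Kotlin sketch follows; the language and all identifiers are illustrative assumptions, as the application does not prescribe a concrete API.

```kotlin
// Illustrative model of the interactive operations described above;
// all names are hypothetical.
enum class Region { IMAGE, TEXT }                 // image-type region 123 / text-type region 124
enum class SlideDirection { UP, DOWN, LEFT, RIGHT }

sealed class InteractiveOperation {
    abstract val region: Region                   // region where the operation starts

    data class Click(override val region: Region) : InteractiveOperation()

    data class Slide(
        override val region: Region,
        val direction: SlideDirection,
        val crossesRegions: Boolean = false       // e.g. a slide spanning both regions
    ) : InteractiveOperation()
}
```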
FIG. 3 shows a flowchart of a human-computer interaction method based on a user interface provided by an exemplary embodiment of the present application. The method is performed by a terminal device, which may be the terminal device shown in FIG. 1 and FIG. 2. The method comprises the following steps:
Step 210: display an access portal, the access portal being an interaction portal that triggers display of a user interface.
In some embodiments, the user interface is a mixed image-text interface.
When the application program is an e-commerce application, the mixed image-text interface is an item detail interface; or, when the application program is an information-reading application, the mixed image-text interface is an information interface, such as a news interface; or, when the application program is a travel application, the mixed image-text interface is a scenic-spot detail interface; or, when the application program is a navigation application, the mixed image-text interface is a place detail interface; or, when the application program is a community application, the mixed image-text interface is a note detail interface.
Illustratively, the access portal is an icon of an application program that includes the user interface; or, the access portal is a paged list, such as a paged image list, a paged text list, a paged icon list, or a paged mixed image-text list; or, the access portal is a paged grid, such as a paged image grid, a paged text grid, a paged icon grid, or a paged mixed image-text grid; or, the access portal is a scrolling list, such as a scrolling image list, a scrolling text list, a scrolling icon list, or a scrolling mixed image-text list; or, the access portal is a scrolling grid, such as a scrolling image grid, a scrolling text grid, a scrolling icon grid, or a scrolling mixed image-text grid; or, the access portal is itself a mixed image-text interface; or, the access portal is a hidden portal.
Step 220: in response to a triggering operation on the access portal, display the user interface, the user interface comprising an image-type region and a text-type region, the image-type region having a first display area and the text-type region having a second display area.
in some embodiments, the display area of the user interface is the sum of the first display area and the second display area.
In some embodiments, the triggering operation on the access portal includes at least one of a sliding operation, a clicking operation, a voice command, a gesture command, a gravity-sensing command, an eye-movement control command, and a handle (controller) control command, where the sliding operation includes at least one of sliding up, down, left, and right.
In some embodiments, the image-type region is used to display image information and/or video information.
In some embodiments, the text-type region is used to display at least one of commodity information, article titles, article content, and application information.
Illustratively, the image-type region and the text-type region are laid out in the user interface in one of the manners shown in FIG. 4: a left-right layout 910; or, a top-bottom layout 920; or, an embedded layout 930. In FIG. 4, the positions of the image-type region and the text-type region may be exchanged.
In some embodiments, the user interface includes other regions, such as a thumbnail list region or a comment region.
Illustratively, in response to a slide-up operation on a scrolling mixed image-text list, the user interface is displayed; the user interface includes an image-type region and a text-type region in the top-bottom layout 920, the image-type region having a first display area and the text-type region having a second display area.
Illustratively, in response to a clicking operation on the icon of an application program that includes the user interface, the user interface is displayed; the user interface includes an image-type region and a text-type region in the left-right layout 910, the image-type region having a first display area and the text-type region having a second display area.
Illustratively, in response to a voice command for a hidden portal, such as the voice command "open xxx application", the user interface is displayed; the user interface includes an image-type region and a text-type region in the embedded layout 930, the image-type region having a first display area and the text-type region having a second display area.
It should be noted that the above combinations of triggering operations and layout manners of the image-type region and the text-type region in the user interface are merely simple examples, and the embodiments of the present application are not limited thereto.
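As one possible rendering of the three layouts of FIG. 4, the following Jetpack Compose sketch arranges the two regions left-right, top-bottom, or embedded; the framework choice, composable names and weights are illustrative assumptions, not part of the application.

```kotlin
import androidx.compose.foundation.layout.*
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier

// Hypothetical placeholders for the two regions.
@Composable fun ImageRegion(modifier: Modifier = Modifier) { /* image/video content */ }
@Composable fun TextRegion(modifier: Modifier = Modifier) { /* text content */ }

@Composable
fun LeftRightLayout() = Row {            // left-right layout 910
    ImageRegion(Modifier.weight(1f))
    TextRegion(Modifier.weight(2f))      // weights are illustrative only
}

@Composable
fun TopBottomLayout() = Column {         // top-bottom layout 920
    ImageRegion(Modifier.weight(1f))
    TextRegion(Modifier.weight(2f))
}

@Composable
fun EmbeddedLayout() = Box {             // embedded layout 930: text over image
    ImageRegion(Modifier.matchParentSize())
    TextRegion(Modifier.align(Alignment.BottomStart))
}
```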
Step 230: in response to an expansion condition being satisfied, increase the image-type region from the first display area to a third display area, and reduce the text-type region to a fourth display area.
The first display area is smaller than the third display area, and the second display area is larger than the fourth display area.
In some embodiments, the proportion of the fourth display area in the user interface is greater than a first threshold.
Illustratively, the first threshold is one third, and the fourth display area occupies one third of the user interface; or, the first threshold is one quarter, and the fourth display area occupies one quarter of the user interface.
In some embodiments, the display area of the user interface is the sum of the third display area and the fourth display area.
In some embodiments, as shown in FIG. 23, the image-type region is increased from the first display area to the third display area in one of the following ways: the width of the image-type region is kept unchanged and its length is increased from a first length to a second length (before the increase: schematic (1); after: schematic (2)); or, the length is kept unchanged and the width is increased from a first width to a second width (before: schematic (3); after: schematic (4)); or, the length is increased from a third length to a fourth length and the width is increased from a third width to a fourth width (before: schematic (5); after: schematic (6)); or, the length is increased from a fifth length to a sixth length while the width is reduced from a fifth width to a sixth width (before: schematic (5); after: schematic (7)); or, the width is increased from a seventh width to an eighth width while the length is reduced from a seventh length to an eighth length (before: schematic (5); after: schematic (8)). Wherein the first length is less than the second length, the third length is less than the fourth length, the fifth length is less than the sixth length, and the seventh length is greater than the eighth length; the first width is less than the second width, the third width is less than the fourth width, the fifth width is greater than the sixth width, and the seventh width is less than the eighth width.
In some embodiments, increasing the image-type region from the first display area to the third display area is implemented by displaying, in the user interface, a mask layer that includes only the image-type region; or, by displaying, in the user interface, a popup window that includes only the image-type region.
Illustratively, when the image-type region and the text-type region are in the top-bottom layout 920, in response to the expansion condition being satisfied, the width of the image-type region is kept unchanged and its length is increased from the first length to the second length.
Illustratively, when the image-type region and the text-type region are in the left-right layout 910, in response to the expansion condition being satisfied, the length of the image-type region is kept unchanged and its width is increased from the first width to the second width.
In summary, in the method provided by this embodiment, when the user interface is entered, the image-type region has a smaller first display area and the text-type region has a larger second display area, so that the user notices the text information in the more important text-type region upon entering the user interface; when the expansion condition is satisfied, the display area of the image-type region is increased and the display area of the text-type region is reduced, so that, after receiving the basic text information, the user refocuses on the image information in the image-type region, which displays the information content more comprehensively.
The present application shows two different expansion conditions. Accordingly, there are two implementations of increasing the image-type region from the first display area to the third display area and reducing the text-type region to the fourth display area:
Implementation 1: in response to an automatic expansion condition being satisfied, the image-type region is increased from the first display area to the third display area, and the text-type region is reduced to the fourth display area.
Implementation 2: in response to receiving an interactive operation, the image-type region is increased from the first display area to the third display area, and the text-type region is reduced to the fourth display area.
The two implementations are described in turn below; the order of description does not imply that one is preferred over the other.
Implementation 1:
In an alternative embodiment based on FIG. 3, step 230 may alternatively be implemented as step 231, as shown in FIG. 5.
Step 231: in response to the automatic expansion condition being satisfied, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
Illustratively, as shown in FIG. 7, in response to the automatic expansion condition being satisfied, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area; the user interface switches from the state corresponding to interface 310 to the state corresponding to interface 320, where the width of the image-type region remains unchanged and its length increases, and the width of the text-type region remains unchanged and its length decreases.
In some embodiments, satisfying the automatic expansion condition means that the application program triggers an automatic expansion event.
In summary, the method provided by this embodiment designs an automatic expansion condition; when the automatic expansion condition is satisfied, the display areas of the image-type region and the text-type region are adjusted, making the whole interaction flow more complete.
In some embodiments, step 2311 replaces step 231, as shown in FIG. 6.
Step 2311: in response to not receiving an interactive operation within a first duration, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
In some embodiments, the first duration is counted from entry into the user interface.
In some embodiments, the first duration is a fixed duration; or, the first duration is a duration set by the user.
Illustratively, as shown in FIG. 7, in response to not receiving an interactive operation within 3 seconds (a fixed duration) counted from entry into the user interface, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area; the user interface switches from the state corresponding to interface 310 to the state corresponding to interface 320, where the width of the image-type region remains unchanged and its length increases, and the width of the text-type region remains unchanged and its length decreases.
Illustratively, the same switching occurs when the 3-second duration is a duration set by the user rather than a fixed duration.
In some embodiments, not receiving an interactive operation within the first duration means that the terminal device does not trigger an interaction event within the first duration; or, that the application program does not trigger an interaction event within the first duration.
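A minimal sketch of the timing behavior of step 2311 is given below, assuming a Kotlin coroutine implementation; the class name, callback and the 3-second default are illustrative assumptions rather than the application's prescribed design.

```kotlin
import kotlinx.coroutines.*

// Expands the image-type region automatically when no interactive
// operation arrives within the first duration; names are hypothetical.
class AutoExpandController(
    private val scope: CoroutineScope,
    private val firstDurationMs: Long = 3_000L,      // fixed or user-configured
    private val expandImageRegion: () -> Unit        // grows image region, shrinks text region
) {
    private var timer: Job? = null

    // Called when the user interface is displayed; timing starts on entry.
    fun onUserInterfaceEntered() {
        timer = scope.launch {
            delay(firstDurationMs)
            expandImageRegion()                      // no interaction arrived in time
        }
    }

    // Called on any interactive operation; cancels the pending auto-expand.
    fun onInteractiveOperation() {
        timer?.cancel()
    }
}
```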
In summary, in the method provided by this embodiment, the first duration is preset or set by the user to ensure that the user can browse the text information in the text-type region; when no interactive operation is received within the first duration, the image-type region is expanded automatically, ensuring that the user focuses on the image-type region after browsing the text information. The whole interaction flow is thus smoother, and both the text-type region and the image-type region can perform their functions.
Implementation 2:
In an alternative embodiment based on FIG. 3, step 230 may alternatively be implemented as step 232, as shown in FIG. 8.
Step 232: in response to receiving an interactive operation, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
In some embodiments, interactive operations include sliding operations and clicking operations, where a sliding operation includes at least one of sliding up, down, left, and right; interactive operations may be further classified according to the interaction region. For example, the interaction regions include the image-type region and the text-type region, and the interactive operations include at least one of a sliding operation within the image-type region, a clicking operation within the image-type region, a sliding operation within the text-type region, a clicking operation within the text-type region, and a sliding operation across the image-type region and the text-type region.
Illustratively, as shown in FIG. 9, in response to receiving an interactive operation, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area; the image-type region and the text-type region may also change differently depending on the interactive operation received.
In some embodiments, the image-type region is an image view and the text-type region is a text view.
In some embodiments, the received interactive operation triggers a touch gesture event of the application program. The touch gesture event includes at least one of a press event, a move event, and a lift event, where the press event records the coordinates of the press position, the move event records the coordinates at the moment it is triggered, and the lift event records the coordinates of the lift position.
In some embodiments, the terminal device shares the received interactive operation with the operating system; or, the terminal device processes the received interactive operation and shares the processed interactive operation with the operating system.
Illustratively, the terminal device receives a clicking operation through the touch screen and obtains the coordinates (x1, y1) corresponding to the clicking operation; the interaction region corresponding to these coordinates is an image view, so the terminal device converts the clicking operation into a touch gesture event triggering the image view, with both the press point and the lift point of the gesture at (x1, y1). Or, the terminal device receives a sliding operation on the image-type region through the touch screen and obtains its press point (x2, y2) and lift point (x3, y3); the interaction region corresponding to both coordinates is the image view, so the terminal device converts the sliding operation into a touch gesture event triggering the image view, with press point (x2, y2) and lift point (x3, y3). Or, the terminal device receives a sliding operation through the touch screen and obtains its press point (x4, y4) and move point (x5, y5), the move point being the current touch point obtained while the user is sliding; the terminal device determines from these coordinates that the sliding direction points from the image-type region to the text-type region, and the operating system converts the sliding operation into a touch gesture event with a slide from (x4, y4) to (x5, y5).
In some embodiments, the operating system shares the received interactive operation with the application program; or, the operating system processes the received interactive operation and shares the processed interactive operation with the application program.
In some embodiments, the terminal device receives a sliding operation through the touch screen and obtains a first move point (x1, y1) and a second move point (x2, y2), a move point being the current touch point obtained while the user is sliding; the terminal device determines from these coordinates that the sliding direction points from the image-type region to the text-type region, and the operating system converts the sliding operation into a touch gesture event with a slide from (x1, y1) to (x2, y2).
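The conversion from raw press/move/lift coordinates to a click or slide gesture, as described above, can be sketched as follows; the slop threshold, the layout assumption and all identifiers are illustrative assumptions.

```kotlin
import kotlin.math.abs

// Classifies a touch gesture from its press and lift coordinates;
// names and the threshold are hypothetical.
data class Point(val x: Float, val y: Float)

sealed class Gesture {
    data class Click(val at: Point) : Gesture()
    data class Slide(val from: Point, val to: Point) : Gesture()
}

private const val TAP_SLOP = 10f   // max movement (px) still treated as a click

fun classify(press: Point, lift: Point): Gesture =
    if (abs(lift.x - press.x) <= TAP_SLOP && abs(lift.y - press.y) <= TAP_SLOP)
        Gesture.Click(press)                 // press and lift both at (x1, y1)
    else
        Gesture.Slide(press, lift)           // e.g. press (x2, y2), lift (x3, y3)

// Direction test for a slide pointing from the image-type region to the
// text-type region, assuming a top-bottom layout with the image region on
// top (screen y grows downward).
fun pointsFromImageToText(press: Point, move: Point, imageRegionBottom: Float): Boolean =
    press.y < imageRegionBottom && move.y > press.y
```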
In some embodiments, the user interface further comprises a list view comprising at least one image view that is a thumbnail of the image view in the user interface.
In summary, the method provided by this embodiment increases the image-type region from the first display area to the third display area and reduces the text-type region to the fourth display area according to the user intention expressed by the received interactive operation, which makes the interaction friendlier and better satisfies the user's intention.
The interaction flows corresponding to the different interactive operations are described below; the order of description does not imply that one is preferred over another.
1. Receiving a first click operation within the image-type region
In an alternative embodiment based on FIG. 8, step 232 may alternatively be implemented as step 2321, as shown in FIG. 10.
Step 2321: in response to receiving a first click operation within the image-type region, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
In some embodiments, as shown in FIG. 9, in response to receiving a first click operation within the image-type region, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area; the user interface switches from the state corresponding to interface 410 to the state corresponding to interface 440, where the width of the image-type region remains unchanged and its length increases, and the width of the text-type region remains unchanged and its length decreases.
In summary, in the method provided by this embodiment, a click operation on the image-type region reflects that the user wants to focus on the image-type region; therefore, when the first click operation is received within the image-type region, the display area of the image-type region is increased and that of the text-type region is reduced, so that the user can view the image-type region more conveniently.
2. Receiving a second click operation on a target thumbnail element in the thumbnail list
In an alternative embodiment based on FIG. 8, step 232 may alternatively be implemented as steps 2322 to 2324, as shown in FIG. 11.
Step 2322: display a thumbnail list comprising thumbnail elements of at least two image-type elements, the image-type element displayed in the image-type region being one of the at least two image-type elements.
In some embodiments, the thumbnail list is used to present thumbnail elements of all image-type elements; or, the thumbnail list is used to present thumbnail elements of some of the image-type elements; or, the thumbnail list is used to present thumbnail elements of a set of image-type elements.
In some embodiments, the thumbnail element is a thumbnail of an image-type element; or, the thumbnail element is a simple geometric figure; or, the thumbnail element is an icon.
In some embodiments, the thumbnail list is independent of the image-type region and the text-type region; or, the thumbnail list is located within the image-type region; or, the thumbnail list is located within the text-type region; or, the thumbnail list is hidden in a hover icon, and the hover icon must be clicked to expand the thumbnail list.
Illustratively, a thumbnail list showing thumbnail elements of all image-type elements is displayed independently of the image-type region and the text-type region; or, a thumbnail list showing thumbnail elements of some image-type elements is displayed within the image-type region; or, a thumbnail list showing thumbnail elements of a set of image-type elements is displayed independently of the image-type region and the text-type region. It should be noted that the embodiments of the present application take, as an example, a thumbnail list that shows all image-type elements and is independent of the image-type region and the text-type region, but the display form of the thumbnail list is not limited.
In some embodiments, the thumbnail list is a list view in the user interface, and the thumbnail elements are image views in the list view.
Step 2323: in response to receiving a second click operation on a target thumbnail element in the thumbnail list, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
In some embodiments, in response to receiving the second click operation on the target thumbnail element in the thumbnail list, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area.
Illustratively, after the click operation is received, the target thumbnail element is moved to the first position of the thumbnail list; or, after the click operation is received, the target thumbnail element is highlighted in the thumbnail list.
Step 2324: switch the image-type element displayed in the image-type region to the image-type element corresponding to the target thumbnail element.
In some embodiments, based on step 2323, as shown in FIG. 9, in response to receiving the second click operation on the target thumbnail element in the thumbnail list, the image-type region is increased from the first display area to the third display area, the text-type region is reduced to the fourth display area, and the image-type element displayed in the image-type region is switched to the image-type element corresponding to the target thumbnail element; the user interface switches from the state corresponding to interface 410 to the state corresponding to interface 450, where the width of the image-type region remains unchanged and its length increases, the width of the text-type region remains unchanged and its length decreases, and the i-th image-type element 411 displayed in the image-type region is switched to the n-th image-type element 451 in the thumbnail list. Here i and n are both positive integers, and the image-type element corresponding to the target thumbnail is the n-th image-type element in the thumbnail list.
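The combined effect of steps 2322 to 2324 can be sketched as a single click handler; the class, callbacks and element representation below are illustrative assumptions.

```kotlin
// On a click on the n-th thumbnail, expand the image-type region and swap
// in the corresponding image-type element; names are hypothetical.
class ThumbnailClickHandler(
    private val imageElements: List<String>,             // e.g. image URLs
    private val showInImageRegion: (String) -> Unit,     // swaps the displayed element
    private val applyExpandedAreas: () -> Unit           // third/fourth display areas
) {
    fun onThumbnailClicked(n: Int) {
        require(n in imageElements.indices) { "no such thumbnail: $n" }
        applyExpandedAreas()                             // image grows, text shrinks
        showInImageRegion(imageElements[n])              // show the n-th image element
    }
}
```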
In summary, in the method provided by this embodiment, clicking a target thumbnail element reflects that the user wants to focus on the image-type region and to see other image-type elements; therefore, after the click operation on the target thumbnail element is received, the display area of the image-type region is increased and that of the text-type region is reduced, making the image-type region more prominent in the user interface, while the image-type element displayed in the image-type region is switched to the image-type element corresponding to the target thumbnail, which satisfies the user's needs.
3. Receiving a first sliding operation within the image-type region
In an alternative embodiment based on FIG. 8, step 232 may alternatively be implemented as steps 2325 and 2326, as shown in FIG. 12.
Step 2325: in response to receiving a first sliding operation within the image-type region, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
Wherein the sliding direction of the first sliding operation points from a first side of the user interface to a second side of the user interface.
In some embodiments, the first side is the left side and the second side is the right side; or, the first side is the right side and the second side is the left side; or, the first side is the upper side and the second side is the lower side; or, the first side is the lower side and the second side is the upper side. It should be noted that the embodiments of the present application take the first side as the left side and the second side as the right side as an example, but the specific directions of the first side and the second side are not limited.
Step 2326: switch the image-type element displayed in the image-type region from a first image-type element to a second image-type element.
Wherein the image-type elements displayed in the image-type region are image-type elements in an image list, the image list comprising at least two image-type elements; the first image-type element is the n-th image-type element in the image list, the second image-type element is the (n-1)-th image-type element in the image list, and n is a positive integer.
In some embodiments, the image list is the thumbnail list in the embodiment of FIG. 11; or, the image list exists implicitly.
In some embodiments, the image list is a fixed list; for example, if the image list contains pictures 1 to 5 and picture 1 is displayed, the image list still shows picture 1 after a right-sliding operation of the user is received. Or, the image list is a scrolling list; for example, if the image list contains pictures 1 to 5 and picture 1 is displayed, the image list switches to picture 5 after a right-sliding operation is received, and switches to picture 4 after another right-sliding operation is received.
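The difference between the fixed list and the scrolling list amounts to clamped versus wrap-around indexing, as the following sketch shows; the function names are illustrative assumptions (indices are 0-based, so pictures 1 to 5 are indices 0 to 4).

```kotlin
// Fixed list: clamps at the ends; scrolling list: wraps around.
fun previousIndexFixed(current: Int): Int = maxOf(0, current - 1)
fun nextIndexFixed(current: Int, size: Int): Int = minOf(size - 1, current + 1)

fun previousIndexCircular(current: Int, size: Int): Int = (current - 1 + size) % size
fun nextIndexCircular(current: Int, size: Int): Int = (current + 1) % size

fun main() {
    val size = 5
    println(previousIndexFixed(0))            // 0: right slide on picture 1 stays on picture 1
    println(previousIndexCircular(0, size))   // 4: right slide on picture 1 wraps to picture 5
    println(previousIndexCircular(4, size))   // 3: another right slide reaches picture 4
}
```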
In some embodiments, based on step 2325, as shown in FIG. 9, in response to receiving a first sliding operation within the image-type region, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area, and, based on the sliding direction of the first sliding operation, the image-type element displayed in the image-type region is switched from the i-th image-type element 411 to the (i-1)-th image-type element 431; the user interface switches from the state corresponding to interface 410 to the state corresponding to interface 430, where the width of the image-type region remains unchanged and its length increases, and the width of the text-type region remains unchanged and its length decreases. Here i is a positive integer; when the image list is a fixed list, i is a positive integer greater than 1.
In some embodiments, step 2326 is optional.
In summary, in the method provided by this embodiment, a sliding operation on the image-type region reflects that the user wants to focus on the image-type region and to see other image-type elements; therefore, after the first sliding operation on the image-type region is received, the display area of the image-type region is increased and that of the text-type region is reduced, making the image-type region more prominent in the user interface, while, according to the sliding direction of the first sliding operation, the image-type element displayed in the image-type region is switched from the first image-type element to the second image-type element, which satisfies the user's needs.
4. Receiving a second sliding operation within the image-type region
In an alternative embodiment based on FIG. 8, step 232 may alternatively be implemented as steps 2327 and 2328, as shown in FIG. 13.
Step 2327: in response to receiving a second sliding operation within the image-type region, increase the image-type region from the first display area to the third display area, and reduce the text-type region to the fourth display area.
Wherein the sliding direction of the second sliding operation points from the second side of the user interface to the first side of the user interface.
In some embodiments, the first side is the left side and the second side is the right side; or, the first side is the right side and the second side is the left side; or, the first side is the upper side and the second side is the lower side; or, the first side is the lower side and the second side is the upper side. It should be noted that the embodiments of the present application take the first side as the left side and the second side as the right side as an example, but the specific directions of the first side and the second side are not limited.
Step 2328: switch the image-type element displayed in the image-type region from a third image-type element to a fourth image-type element.
Wherein the image-type elements displayed in the image-type region are image-type elements in an image list, the image list comprising at least two image-type elements; the third image-type element is the n-th image-type element in the image list, the fourth image-type element is the (n+1)-th image-type element in the image list, and n is a positive integer.
In some embodiments, the image list is the thumbnail list in the embodiment of FIG. 11; or, the image list exists implicitly.
In some embodiments, the image list is a fixed list; for example, if the image list contains pictures 1 to 5 and picture 5 is displayed, the image list still shows picture 5 after a left-sliding operation of the user is received. Or, the image list is a scrolling list; for example, if the image list contains pictures 1 to 5 and picture 5 is displayed, the image list switches to picture 1 after a left-sliding operation is received, and switches to picture 2 after another left-sliding operation is received.
In some embodiments, based on step 2327, as shown in FIG. 9, in response to receiving a second sliding operation within the image-type region, the image-type region is increased from the first display area to the third display area and the text-type region is reduced to the fourth display area, and, based on the sliding direction of the second sliding operation, the image-type element displayed in the image-type region is switched from the i-th image-type element 411 to the (i+1)-th image-type element 421; the user interface switches from the state corresponding to interface 410 to the state corresponding to interface 420, where the width of the image-type region remains unchanged and its length increases, and the width of the text-type region remains unchanged and its length decreases. Here i is a positive integer.
In some embodiments, step 2328 is optional.
In summary, in the method provided by this embodiment, a sliding operation within the image class area reflects that the user wants to focus on the image class area and browse other image class elements. Therefore, after the second sliding operation on the image class area is received, the display area of the image class area is increased and the display area of the text class area is reduced, so that the image class area is more prominent in the user interface; at the same time, the image class element displayed in the image class area is switched from the third image class element to the fourth image class element according to the sliding direction of the second sliding operation, thereby meeting the user's needs.
5. Receiving a third sliding operation
In an alternative embodiment based on fig. 8, step 232 may alternatively be implemented as step 2329, as shown in fig. 14.
Step 2329: in response to receiving the third sliding operation, the image-type region is increased from the first display area to a third display area, and the text-type region is decreased to a fourth display area.
Wherein the sliding direction of the third sliding operation is from the image-like region to the text-like region.
In some embodiments, as shown in fig. 9, in response to receiving the third sliding operation, the image class area is increased from the first display area to the third display area and the text class area is reduced to the fourth display area. The user interface is switched from the state corresponding to the interface 410 to the state corresponding to the interface 440: the width of the image class area remains unchanged while its length increases, and the width of the text class area remains unchanged while its length decreases.
In summary, in the method provided by this embodiment, a sliding operation whose direction points from the image class area to the text class area reflects that the user is not interested in the text class area. Therefore, after the third sliding operation is received, the display area of the image class area is increased and the display area of the text class area is reduced, so that the proportion of the image class area in the user interface grows and the user can view the content of the image class elements displayed there more clearly.
In an alternative embodiment based on fig. 3, the method further comprises step 240, as shown in fig. 15.
Step 240: in response to receiving the fourth sliding operation, the image class area is reduced from the first display area to a fifth display area, and the text class area is increased to a sixth display area.
The sliding direction of the fourth sliding operation points from the text class area to the image class area; the first display area is larger than the fifth display area, and the second display area is smaller than the sixth display area; the fifth display area decreases as the sliding distance of the fourth sliding operation increases, and the sixth display area increases as that sliding distance increases.
In some embodiments, as shown in fig. 16, in response to receiving the fourth sliding operation, the image class area is reduced from the first display area to the fifth display area and the text class area is increased to the sixth display area. The user interface is switched from the state corresponding to the interface 510 to the state corresponding to the interface 520 and, as the sliding distance of the fourth sliding operation increases, to the state corresponding to the interface 530, at which point only the text class area remains in the user interface.
In some embodiments, step 240 may be performed before step 230.
In summary, in the method provided by this embodiment, a sliding operation whose direction points from the text class area to the image class area reflects that the user wants to focus on the text class area. Therefore, after the fourth sliding operation is received, the display area of the image class area is reduced and the display area of the text class area is increased, which makes it convenient for the user to view the text information in the text class area.
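One way to realize this distance-proportional behavior is to interpolate both display areas from the slide distance. The Kotlin sketch below is illustrative only; the linear interpolation and all names are assumptions, not something this application prescribes:

```kotlin
import kotlin.math.min

// Illustrative: shrink the image class area and grow the text class area in
// proportion to the sliding distance of the fourth sliding operation.
fun applyFourthSlide(
    firstArea: Float,    // initial image class display area
    secondArea: Float,   // initial text class display area
    slideDistance: Float,
    maxDistance: Float   // distance at which the image area is fully collapsed
): Pair<Float, Float> {
    val t = min(slideDistance / maxDistance, 1f)  // progress in [0, 1]
    val fifthArea = firstArea * (1f - t)          // decreases with distance
    val sixthArea = secondArea + firstArea * t    // increases with distance
    return fifthArea to sixthArea
}
```

At t = 1 only the text class area remains, matching the final state shown in fig. 16.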
It should be noted that the above alternative embodiments may be combined with one another. The alternative embodiment corresponding to fig. 15 may be combined with the one corresponding to fig. 5, with step 240 performed after step 231. It may be combined with the one corresponding to fig. 6, with step 240 performed after step 2311. It may be combined with the one corresponding to fig. 8, with step 240 performed after or before step 231. It may be combined with the one corresponding to fig. 10, with step 240 performed after or before step 2321. It may be combined with the one corresponding to fig. 11, with step 240 performed after step 2324 or before step 2323. It may be combined with the one corresponding to fig. 12, with step 240 performed after step 2326 or before step 2325. It may be combined with the one corresponding to fig. 13, with step 240 performed after step 2328 or before step 2327. And it may be combined with the one corresponding to fig. 14, with step 240 performed after or before step 2329.
In order to make the various display modes and operation modes of the embodiments of the present application clearer, a detailed description is provided below with reference to schematic diagrams. In the following examples, the application program is an e-commerce application, the user interface is a commodity detail interface, the image class elements displayed in the image class area are commodity images, and the text information displayed in the text class area is commodity introduction text.
Automatic expansion
Fig. 17 is a schematic diagram of a human-computer interaction method based on a user interface according to an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 17, a commodity list interface is currently displayed. The user selects a commodity 812 in the commodity list 811 and clicks it (that is, performs a click operation on the access portal), and a commodity detail interface is displayed, as shown in schematic diagram (2) in fig. 17. The commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. If the terminal device does not receive any interactive operation of the user on the commodity detail interface within 3 seconds, it adjusts the display ratio of the area corresponding to the commodity image to 3:4, so that the display area of the commodity image 821 in the commodity detail interface increases while the display area of the commodity introduction text 822 decreases, as shown in schematic diagram (3) in fig. 17.
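A minimal sketch of this inactivity-triggered expansion, assuming an Android-style screen controller with a coroutine timer (all names here are illustrative):

```kotlin
import kotlinx.coroutines.*

// Illustrative: expand the image area if no interactive operation arrives
// within 3 seconds of the commodity detail interface being shown.
class DetailScreen(private val scope: CoroutineScope) {
    private var expandJob: Job? = null

    fun onShown() {
        expandJob = scope.launch {
            delay(3_000)        // the first duration: 3 seconds
            expandImageArea()   // display ratio 1:1 -> 3:4, text area shrinks
        }
    }

    fun onUserInteraction() {
        expandJob?.cancel()     // an interaction arrived: no automatic expansion
    }

    fun expandImageArea() { /* adjust the display ratio to 3:4 */ }
}
```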
Clicking on the region corresponding to the commodity image
Fig. 18 is a schematic diagram of a human-computer interaction method based on a user interface according to an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 18, the commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. When the terminal device receives a click operation of the user on the commodity image 821, it adjusts the display ratio of the area corresponding to the commodity image to 3:4, so that the display area of the commodity image 821 in the commodity detail interface increases while the display area of the commodity introduction text 822 decreases, as shown in schematic diagram (2) in fig. 18.
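The manual counterpart can be a plain click listener. Another illustrative sketch, with `expandImageArea` standing in for the 1:1 to 3:4 adjustment described above:

```kotlin
import android.widget.ImageView

// Illustrative: a click inside the image class area triggers the same
// expansion as the automatic path.
fun bindClickToExpand(imageView: ImageView, expandImageArea: () -> Unit) {
    imageView.setOnClickListener { expandImageArea() }
}
```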
Clicking on a target thumbnail element in a thumbnail list
Fig. 19 is a schematic diagram of a human-computer interaction method based on a user interface according to an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 19, the commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. A thumbnail list 823 is present in the commodity detail interface, located at the bottom of the area corresponding to the commodity image 821. When the terminal device receives a click operation of the user on a commodity thumbnail 824 in the thumbnail list 823, it adjusts the display ratio of the area corresponding to the commodity image 821 to 3:4, so that the display area of the commodity image in the commodity detail interface increases while the display area of the commodity introduction text 822 decreases, and at the same time the commodity image 821 is switched to the commodity image 831 corresponding to the commodity thumbnail 824, as shown in schematic diagram (2) in fig. 19.
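The thumbnail interaction combines the expansion with an image switch. A sketch under the same assumptions as above (the helper names are illustrative):

```kotlin
import android.net.Uri
import android.widget.ImageView

// Illustrative: clicking the thumbnail at `position` expands the image class
// area and switches the displayed image to the corresponding full image.
fun onThumbnailClicked(
    mainImage: ImageView,
    images: List<Uri>,
    position: Int,
    expandImageArea: () -> Unit
) {
    expandImageArea()                        // display ratio 1:1 -> 3:4
    mainImage.setImageURI(images[position])  // show the image the thumbnail stands for
}
```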
Sliding the region corresponding to the commodity image left and right
FIG. 20 is a diagram illustrating a human-machine interaction method based on a user interface according to an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 20, the commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. The commodity image 821 is one image in a commodity image list. When the terminal device receives a sliding operation of the user on the area corresponding to the commodity image 821, it adjusts the display ratio of that area to 3:4, so that the display area of the commodity image in the commodity detail interface increases while the display area of the commodity introduction text 822 decreases, and at the same time the commodity image 821 is switched to the commodity image 831 adjacent to it in the commodity image list, as shown in schematic diagram (2) in fig. 20.
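Combined with the `ImageList` sketch given earlier, the left/right slide handling might look as follows; the horizontal-versus-vertical test and all names are assumptions rather than anything this application prescribes:

```kotlin
import kotlin.math.abs

// Illustrative: classify a slide inside the image class area from its press
// point (x1, y1) and lift point (x2, y2) and switch to the adjacent picture.
fun onImageAreaSlide(
    x1: Float, y1: Float, x2: Float, y2: Float,
    list: ImageList,
    expandImageArea: () -> Unit
) {
    val dx = x2 - x1
    if (abs(dx) > abs(y2 - y1)) {  // mostly horizontal: treat as an image slide
        expandImageArea()          // display ratio 1:1 -> 3:4
        if (dx < 0) list.next() else list.previous()  // left slide -> next picture
    }
}
```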
Sliding down the commodity detail interface
FIG. 21 is a diagram illustrating a human-machine interaction method based on a user interface according to an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 21, the commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. When the terminal device receives a sliding operation of the user on the commodity detail interface whose direction points from the image class area to the text class area, it adjusts the display ratio of the area corresponding to the commodity image to 3:4, so that the display area of the commodity image 821 in the commodity detail interface increases while the display area of the commodity introduction text 822 decreases, as shown in schematic diagram (2) in fig. 21.
Sliding up the commodity detail interface
FIG. 22 illustrates a schematic diagram of a user interface-based human-machine interaction method provided in an exemplary embodiment of the present application.
As shown in schematic diagram (1) in fig. 22, the commodity detail interface includes a commodity image 821 and a commodity introduction text 822, and the display ratio of the area corresponding to the commodity image 821 is 1:1. When the terminal device receives a sliding operation of the user on the commodity detail interface whose direction points from the text class area to the image class area, it gradually reduces the display area of the area corresponding to the commodity image as the sliding distance increases, and gradually increases the display area of the commodity introduction text 822, as shown in schematic diagram (2) in fig. 22. As the sliding distance of the upward slide increases further, the commodity detail interface displays only the commodity introduction text 822, as shown in schematic diagram (3) in fig. 22.
FIG. 24 illustrates a flowchart of a user interface-based human-machine interaction method provided in an exemplary embodiment of the present application.
Step 210-1: displaying an access portal, wherein the access portal is a control bound with a user interface jump method;
in some embodiments, the access portal is a control to which a trigger event is bound; the trigger event causes a jump to the user interface when a trigger operation performed by the user on the access portal through the touch screen is received.
Step 220-1: in response to a triggering operation of the access portal, displaying a user interface, the user interface including an image view and a text view, the image view having a first display area and the text view having a second display area;
in some embodiments, the triggering operation for accessing the portal includes at least one of a swipe operation, a click operation, a voice command, a gesture command, a gravity sensing command, an eye movement control command, a handle control command, the swipe operation including at least one of a swipe up, a swipe down, a swipe left, and a swipe right.
In some embodiments, the terminal device receives a sliding operation, a clicking operation and a gesture instruction through the touch screen; or the terminal equipment receives a voice instruction through an audio interface; or the terminal equipment receives a gravity sensing instruction through a sensor; or the terminal equipment receives an eye movement control instruction through the camera; or, the terminal device receives the handle control instruction through the external controller.
In some embodiments, the terminal device shares the received trigger operation with the operating system; or the terminal device processes the received trigger operation and then shares the result with the operating system.
For example, the trigger operation received by the terminal device is a press of the "up" direction key on the handle controller. The terminal device converts this into a direction-key "up" press event, which also carries a parameter value between 0 and -1.0, and shares the press event and its parameter value with the operating system.
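As an illustrative rendering of this controller-to-system hand-off (only the "up" key and the 0 to -1.0 parameter range come from the paragraph above; everything else is assumed):

```kotlin
// Illustrative: translate a gamepad d-pad "up" press into an event carrying
// a parameter value in [-1.0, 0], then hand it to the operating-system layer.
data class DirectionKeyEvent(val key: String, val parameter: Float)

fun onDpadUpPressed(rawValue: Float): DirectionKeyEvent {
    val value = rawValue.coerceIn(-1.0f, 0f)  // clamp to the documented range
    return DirectionKeyEvent(key = "up", parameter = value)
        .also { shareWithOperatingSystem(it) }
}

fun shareWithOperatingSystem(event: DirectionKeyEvent) { /* OS hand-off */ }
```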
In some embodiments, the operating system shares the received trigger operation with the application; or the operating system processes the received trigger operation and then shares the result with the application.
In some embodiments, in response to a triggering operation to access the portal, displaying a user interface including an image view and a text view; or, in response to a triggering operation of the access portal, displaying a user interface, wherein the user interface comprises at least one layout container, and the layout container comprises an image view and a text view.
In some embodiments, a list view and a text view are included in the layout container, the list view including at least one of an image view and a text view.
In some embodiments, the image view is a first aspect ratio, the image view having a first display area; or, the width of the image view is a first width, the length is a first length, and the image view has a first display area.
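As a concrete, illustrative rendering of this structure, a vertical layout container holding an image view above a text view could be built programmatically as follows; the weight-based sizing is an assumption, not something this application prescribes:

```kotlin
import android.content.Context
import android.widget.ImageView
import android.widget.LinearLayout
import android.widget.TextView

// Illustrative sketch of the layout container described above: a vertical
// container holding an image view (first display area) over a text view
// (second display area). Adjusting the weights later realizes the
// expansion and reduction steps.
fun buildDetailLayout(context: Context): LinearLayout {
    val container = LinearLayout(context).apply {
        orientation = LinearLayout.VERTICAL
    }
    val imageView = ImageView(context).apply {
        layoutParams = LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f)  // first display area
    }
    val textView = TextView(context).apply {
        layoutParams = LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f)  // second display area
    }
    container.addView(imageView)
    container.addView(textView)
    return container
}
```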
Step 220-2: acquiring image type information and text type information;
in some embodiments, the application obtains image class information and text class information from the terminal device. The image information and the text information are stored in the terminal equipment; or the image information and the text information are obtained from the server by the terminal equipment; or, the image information is stored in the terminal device, and the text information is acquired from the server by the terminal device.
Step 220-3: loading image class information to the image view and loading text class information to the text view;
in some embodiments, the application loads the acquired image class information into the image view and the acquired text class information into the text view.
Step 230-1: in response to the expansion condition being met, increasing the image view from the first display area to a third display area and decreasing the text view to a fourth display area;
in some embodiments, the image view is a first aspect ratio, the image view having a first display area; or, the width of the image view is a first width, the length is a first length, and the image view has a first display area.
In some embodiments, as shown in fig. 23, the image view increases from the first display area to the third display area in one of the following ways: (1) the width of the image view remains unchanged and its length increases from the first length to the second length, with the image view before the increase shown in schematic diagram (1) and after the increase in schematic diagram (2); (2) the length remains unchanged and the width increases from the first width to the second width, shown in schematic diagrams (3) and (4); (3) the length increases from the third length to the fourth length and the width increases from the third width to the fourth width, shown in schematic diagrams (5) and (6); (4) the length increases from the fifth length to the sixth length while the width decreases from the fifth width to the sixth width, shown in schematic diagrams (5) and (7); (5) the width increases from the seventh width to the eighth width while the length decreases from the seventh length to the eighth length, shown in schematic diagrams (5) and (8). Here the first length is less than the second length, the third length is less than the fourth length, the fifth length is less than the sixth length, and the seventh length is greater than the eighth length; the first width is less than the second width, the third width is less than the fourth width, the fifth width is greater than the sixth width, and the seventh width is less than the eighth width.
Step 240-1: in response to receiving the fourth sliding operation, the image class area is reduced from the first display area to a fifth display area, and the text class area is increased to a sixth display area.
The sliding direction of the fourth sliding operation points from the text class area to the image class area; the first display area is larger than the fifth display area, and the second display area is smaller than the sixth display area; the fifth display area decreases as the sliding distance of the fourth sliding operation increases, and the sixth display area increases as that sliding distance increases.
In some embodiments, the terminal device shares the received sliding operation with the operating system; or the terminal device processes the received sliding operation and then shares the result with the operating system.
For example, the terminal device receives a sliding operation through the touch screen and obtains the press point (x1, y1) and the lift point (x2, y2) of the operation. From these coordinates the terminal device determines that the sliding direction points from the text class area to the image class area, and converts the sliding operation into a touch gesture trigger event whose press point is (x1, y1) and whose lift point is (x2, y2).
In some embodiments, the operating system shares the received interactive operation with the application; or the operating system processes the received interactive operation and then shares the result with the application.
For example, the terminal device receives a sliding operation through the touch screen and obtains the press point (x1, y1) and the current move point (x2, y2), where the move point is the touch point currently detected while the user is still sliding. From these coordinates the operating system determines that the sliding direction points from the text class area to the image class area, and converts the sliding operation into a touch gesture trigger event whose sliding distance runs from (x1, y1) to (x2, y2).
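The coordinate test described above can be sketched as follows, assuming screen coordinates in which y grows downward and the image class area sits above the text class area (the names are illustrative):

```kotlin
import kotlin.math.abs

// Illustrative: decide whether a slide points from the text class area to the
// image class area, i.e., an upward slide when the image area sits on top.
fun pointsFromTextToImage(x1: Float, y1: Float, x2: Float, y2: Float): Boolean {
    val dx = x2 - x1
    val dy = y2 - y1
    return abs(dy) > abs(dx) && dy < 0  // mostly vertical, moving upward
}
```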
In an alternative embodiment based on fig. 3, step 230 may alternatively be implemented as steps 250 to 252, as shown in fig. 25.
Step 250: determining whether an interactive operation is received within the first duration;
in some embodiments, it is determined whether an interactive operation is received within the first duration. If no interactive operation is received within the first duration, the automatic expansion condition is met and automatic expansion is performed; if an interactive operation is received within the first duration, the manual expansion condition is met and manual expansion is performed.
Step 251: in response to not receiving the interactive operation within the first time period, increasing the image type area from the first display area to a third display area, and reducing the text type area to a fourth display area;
in some embodiments, receiving no interactive operation within the first duration serves as the automatic expansion condition; specific implementations are shown in the embodiments corresponding to fig. 5 and fig. 6.
Step 252: in response to receiving the interactive operation within the first time period, the image type region is increased from the first display area to the third display area, and the text type region is decreased to the fourth display area.
In some embodiments, receiving an interactive operation within the first duration serves as the manual expansion condition; specific implementations are shown in the embodiments corresponding to fig. 8 to fig. 14.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 23, a block diagram of a man-machine interaction device based on a user interface according to an exemplary embodiment of the present application is shown. The device has the function of implementing the above examples of the human-computer interaction method based on the user interface; the function may be implemented by hardware, or by hardware executing corresponding software. As shown in fig. 23, the apparatus 600 may include: a first display module 610, a second display module 620, and an expansion module 630.
A first display module 610 is configured to display an access portal, which is an interactive portal that triggers the display of a user interface.
The second display module 620 is configured to display a user interface in response to a triggering operation on the access portal, where the user interface includes an image area and a text area, and the image area has a first display area and the text area has a second display area.
The expansion module 630 is configured to increase the image area from the first display area to the third display area and decrease the text area to the fourth display area in response to the expansion condition being satisfied.
In some embodiments, the expansion module 630 includes an automatic expansion sub-module.
The automatic expansion sub-module is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to the automatic expansion condition being met.
In some embodiments, the automatic expansion sub-module includes an automatic expansion unit.
The automatic expansion unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to no interactive operation being received within the first duration.
In some embodiments, the expansion module 630 includes an expansion sub-module.
The expansion sub-module is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving an interactive operation.
In some embodiments, the expansion sub-module includes a first receiving unit.
The first receiving unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving the first click operation in the image class area.
In some embodiments, the expansion sub-module includes a first display unit, a second receiving unit, and a first switching unit.
The first display unit is used for displaying a thumbnail list, where the thumbnail list includes thumbnail elements of at least two image class elements, and the image class element displayed in the image class area is one of the at least two image class elements.
The second receiving unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving the second click operation on a target thumbnail element in the thumbnail list.
The first switching unit is used for switching the image class element displayed in the image class area to the image class element corresponding to the target thumbnail element.
In some embodiments, the expansion sub-module includes a third receiving unit and a second switching unit.
The third receiving unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving the first sliding operation in the image class area.
The second switching unit is used for switching the image class element displayed in the image class area from the first image class element to the second image class element.
In some embodiments, the expansion sub-module includes a fourth receiving unit and a third switching unit.
The fourth receiving unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving the second sliding operation in the image class area.
The third switching unit is used for switching the image class element displayed in the image class area from the third image class element to the fourth image class element.
In some embodiments, the expansion sub-module includes a fifth receiving unit.
The fifth receiving unit is used for increasing the image class area from the first display area to the third display area and reducing the text class area to the fourth display area in response to receiving the third sliding operation.
In some embodiments, the apparatus 600 further includes a receiving module.
The receiving module is used for reducing the image class area from the first display area to the fifth display area and increasing the text class area to the sixth display area in response to receiving the fourth sliding operation.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Fig. 24 shows a block diagram of a computer device according to an exemplary embodiment of the present application.
The computer device 700 may be a portable mobile terminal, also referred to in this embodiment as a mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, or an MP4 (Moving Picture Experts Group Audio Layer IV) player. The computer device 700 may also be referred to by other names, such as user device or portable terminal.
In general, the computer device 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core or an 8-core processor. The processor 701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), or PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 701 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the user interface-based human-machine interaction methods provided in embodiments of the present application.
In some embodiments, the computer device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a touch display 705, a camera 706, audio circuitry 707, and a power supply 708.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The touch display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display 705 also has the ability to collect touch signals at or above the surface of the touch display 705. The touch signal may be input to the processor 701 as a control signal for processing. The touch display 705 is used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the touch display 705 may be one, providing a front panel of the computer device 700; in other embodiments, the touch display 705 may be at least two, disposed on different surfaces of the computer device 700 or in a folded design; in some embodiments, touch display 705 may be a flexible display disposed on a curved surface or a folded surface of computer device 700. Even more, the touch display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The touch display 705 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. In general, the front camera is used for video calls or self-photographing, and the rear camera is used for taking pictures or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, and a wide-angle camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 706 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 707 is used to provide an audio interface between the user and the computer device 700. The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. The microphone may be provided in a plurality of different locations of the computer device 700 for stereo acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The power supply 708 is used to power the various components in the computer device 700. The power source 708 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power source 708 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 700 also includes one or more sensors 709. The one or more sensors 709 include, but are not limited to: acceleration sensor 710, gyro sensor 711, pressure sensor 712, optical sensor 713, and proximity sensor 714.
The acceleration sensor 710 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the computer device 700. Such as: the acceleration sensor 710 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 701 may control the touch display screen 705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 710. Acceleration sensor 710 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 711 may detect the body direction and the rotation angle of the computer device 700, and the gyro sensor 711 may collect the 3D motion of the user on the computer device 700 in cooperation with the acceleration sensor 710. The processor 701 may implement the following functions according to the data collected by the gyro sensor 711: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 712 may be disposed on a side frame of computer device 700 and/or on an underlying layer of touch display 705. When the pressure sensor 712 is disposed at a side frame of the computer device 700, a grip signal of the computer device 700 by a user may be detected, and left-right hand recognition or shortcut operation may be performed according to the grip signal. When the pressure sensor 712 is disposed at the lower layer of the touch display screen 705, control of the operability control on the UI interface can be achieved according to the pressure operation of the user on the touch display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 713 is used to collect the intensity of ambient light. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 713. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 713.
A proximity sensor 714, also known as a distance sensor, is typically provided on the front of the computer device 700. The proximity sensor 714 is used to capture the distance between the user and the front of the computer device 700. In one embodiment, when the proximity sensor 714 detects a gradual decrease in the distance between the user and the front of the computer device 700, the processor 701 controls the touch display 705 to switch from the bright screen state to the off screen state; when the proximity sensor 714 detects that the distance between the user and the front surface of the computer device 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 24 does not limit the computer device 700, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, the present application provides a chip including programmable logic circuits and/or program instructions for implementing the user interface-based human-computer interaction method provided by the above method embodiments when the chip is run on a computer device.
The application provides a computer readable storage medium storing a computer program loaded and executed by a processor to implement the human-computer interaction method based on the user interface provided by the method embodiment.
The present application provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device implements the human-computer interaction method based on the user interface provided by the above method embodiments.
The foregoing embodiment numbers of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above mentioned computer readable storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description is merely of preferred embodiments of the present application and is not intended to limit the present application; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall be included within its scope of protection.

Claims (15)

1. A human-machine interaction method based on a user interface, the method comprising:
displaying an access portal, wherein the access portal is an interaction portal for triggering the display of the user interface;
responding to the triggering operation of the access entrance, displaying the user interface, wherein the user interface comprises an image type area and a character type area, the image type area has a first display area, and the character type area has a second display area;
in response to satisfying a deployment condition, increasing the image class region from the first display area to a third display area, and decreasing the text class region to a fourth display area;
the first display area is smaller than the third display area, and the second display area is larger than the fourth display area.
2. The method of claim 1, wherein the increasing the image class area from the first display area to a third display area and decreasing the text class area to a fourth display area in response to the expansion condition being satisfied comprises:
In response to an auto-expand condition being met, the image class area is increased from the first display area to the third display area and the text class area is decreased to the fourth display area.
3. The method of claim 2, wherein the increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area in response to the auto-expand condition being satisfied comprises:
and in response to no interactive operation being received within a first time period, increasing the image type area from the first display area to the third display area, and decreasing the text type area to the fourth display area.
4. The method of claim 1, wherein the increasing the image class area from the first display area to a third display area and decreasing the text class area to a fourth display area in response to the expansion condition being satisfied comprises:
in response to receiving an interactive operation, the image class area is increased from the first display area to the third display area, and the text class area is decreased to the fourth display area.
5. The method of claim 4, wherein in response to receiving an interactive operation, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area comprises:
in response to receiving a first click operation within the image class area, the image class area is increased from the first display area to the third display area, and the text class area is decreased to the fourth display area.
6. The method of claim 4, wherein in response to receiving an interactive operation, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area comprises:
displaying a thumbnail list, wherein the thumbnail list comprises thumbnail elements of at least two image class elements, and the image class element displayed in the image class area is one of the at least two image class elements;
in response to receiving the second click operation on a target thumbnail element in the thumbnail list, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area;
And switching the image class elements displayed in the image class area into the image class elements corresponding to the target thumbnail elements.
7. The method of claim 4, wherein in response to receiving an interactive operation, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area comprises:
in response to receiving a first sliding operation within the image class area, increasing the image class area from the first display area to the third display area, and decreasing the text class area to the fourth display area;
switching the image class elements displayed in the image class region from the first image class elements to the second image class elements;
the sliding direction of the first sliding operation points to the second side of the user interface from the first side of the user interface, the image class elements displayed by the image class area are image class elements in an image list, the image list comprises at least two image class elements, the first image class element is an nth image class element in the image list, the second image class element is an nth-1 image class element in the image list, and n is a positive integer.
8. The method of claim 4, wherein in response to receiving an interactive operation, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area comprises:
in response to receiving a second sliding operation within the image class area, increasing the image class area from the first display area to the third display area, and decreasing the text class area to the fourth display area;
switching the image class element displayed in the image class region from the third image class element to a fourth image class element;
the sliding direction of the second sliding operation points to the first side of the user interface from the second side of the user interface, the image class elements displayed by the image class area are image class elements in an image list, the image list comprises at least two image class elements, the third image class element is an nth image class element in the image list, the fourth image class element is an (n+1) th image class element in the image list, and n is a positive integer.
9. The method of claim 4, wherein in response to receiving an interactive operation, increasing the image class area from the first display area to the third display area and decreasing the text class area to the fourth display area comprises:
In response to receiving a third sliding operation, increasing the image class area from the first display area to the third display area, and decreasing the text class area to the fourth display area;
wherein a sliding direction of the third sliding operation is directed from the image class area to the text class area.
10. The method according to any one of claims 1 to 9, further comprising:
in response to receiving a fourth sliding operation, reducing the image class area from the first display area to a fifth display area, and increasing the text class area to a sixth display area;
the sliding direction of the fourth sliding operation points to the picture area from the text area, the first display area is larger than the fifth display area, the second display area is smaller than the sixth display area, the fifth display area is reduced along with the increase of the sliding distance of the fourth sliding operation, and the sixth display area is increased along with the increase of the sliding distance of the fourth sliding operation.
11. The method according to any one of claims 1 to 10, wherein the image class area is used for displaying image class information and/or video class information;
The text area is used for displaying at least one of commodity information, article titles, article contents and application information.
12. A user interface-based human-machine interaction device, the device comprising:
the first display module is used for displaying an access portal, wherein the access portal is an interaction portal for triggering display of a user interface;
the second display module is used for responding to the triggering operation of the access entrance and displaying the user interface, wherein the user interface comprises an image area and a character area, the image area has a first display area, and the character area has a second display area;
and the unfolding module is used for responding to the fact that the unfolding condition is met, increasing the image type area from the first display area to the third display area and reducing the text type area to the fourth display area.
13. A computer device comprising a processor and a memory, wherein the memory has stored therein a computer program that is loaded and executed by the processor to implement the user interface based human-machine interaction method of any of claims 1 to 11.
14. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the user interface based human-machine interaction method of any of claims 1 to 11.
15. A computer program product, characterized in that the computer program product comprises a computer program, the computer program being stored in a computer readable storage medium; the computer program is read from the computer readable storage medium and executed by a processor of a computer device, causing the computer device to perform the user interface based human-machine interaction method of any of claims 1 to 11.
CN202311149241.8A 2023-09-06 2023-09-06 Man-machine interaction method, device, equipment and storage medium based on user interface Pending CN117270717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311149241.8A CN117270717A (en) 2023-09-06 2023-09-06 Man-machine interaction method, device, equipment and storage medium based on user interface

Publications (1)

Publication Number Publication Date
CN117270717A true CN117270717A (en) 2023-12-22

Family

ID=89209670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311149241.8A Pending CN117270717A (en) 2023-09-06 2023-09-06 Man-machine interaction method, device, equipment and storage medium based on user interface

Country Status (1)

Country Link
CN (1) CN117270717A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203303A (en) * 2016-03-18 2017-09-26 阿里巴巴集团控股有限公司 A kind of interface display method and device
CN107734189A (en) * 2017-11-14 2018-02-23 优酷网络技术(北京)有限公司 Method for showing interface and device
CN107797729A (en) * 2017-11-14 2018-03-13 优酷网络技术(北京)有限公司 Method for showing interface and device
CN108476168A (en) * 2016-05-18 2018-08-31 苹果公司 Using confirmation option in graphical messages transmit user interface
CN111881916A (en) * 2020-07-17 2020-11-03 中国工商银行股份有限公司 Character positioning method, device and equipment
CN113485592A (en) * 2021-06-18 2021-10-08 浪潮卓数大数据产业发展有限公司 Barrier-free service method, device and medium based on mobile terminal
CN114821595A (en) * 2022-05-16 2022-07-29 傲讯全通科技(深圳)有限公司 Magnifier application system and method with positioning structure
US20230004216A1 (en) * 2021-07-01 2023-01-05 Google Llc Eye gaze classification
CN115905374A (en) * 2021-07-20 2023-04-04 腾讯科技(深圳)有限公司 Application function display method and device, terminal and storage medium

Similar Documents

Publication Title
CN114764298B (en) Cross-device object dragging method and device
KR101984673B1 Display apparatus for executing plurality of applications and method for controlling thereof
CN116055610B (en) Method for displaying graphical user interface and mobile terminal
WO2022062898A1 (en) Window display method and device
KR20210068097A (en) Method for controlling display of system navigation bar, graphical user interface and electronic device
CN112230914B Method, device, terminal and storage medium for producing mini program
CN110928464B (en) User interface display method, device, equipment and medium
CN111694478A (en) Content display method, device, terminal and storage medium
CN111459363A (en) Information display method, device, equipment and storage medium
CN112825040B (en) User interface display method, device, equipment and storage medium
CN114845152B (en) Display method and device of play control, electronic equipment and storage medium
CN110992268B (en) Background setting method, device, terminal and storage medium
CN112230910B (en) Page generation method, device and equipment of embedded program and storage medium
EP4125274A1 (en) Method and apparatus for playing videos
CN117270717A (en) Man-machine interaction method, device, equipment and storage medium based on user interface
CN109189525B (en) Method, device and equipment for loading sub-page and computer readable storage medium
CN114100121A (en) Operation control method, device, equipment, storage medium and computer program product
CN115379274B (en) Picture-based interaction method and device, electronic equipment and storage medium
CN112732133A (en) Message processing method and device, electronic equipment and storage medium
CN113220203B (en) Activity entry display method, device, terminal and storage medium
CN113507647B (en) Method, device, terminal and readable storage medium for controlling playing of multimedia data
WO2022228042A1 (en) Display method, electronic device, storage medium, and program product
KR20170071290A (en) Mobile terminal
CN117336562A (en) Live broadcast picture adjustment method and device, electronic equipment and storage medium
CN115639937A (en) Interface display method and device, computer equipment and storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination