CN113961132B - Interactive processing method and device, electronic equipment and storage medium

Interactive processing method and device, electronic equipment and storage medium

Info

Publication number
CN113961132B
CN113961132B (application CN202111115617.4A)
Authority
CN
China
Prior art keywords
mobile terminal
information
user
operation area
target operation
Prior art date
Legal status
Active
Application number
CN202111115617.4A
Other languages
Chinese (zh)
Other versions
CN113961132A (en)
Inventor
冼钊铭
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111115617.4A
Publication of CN113961132A
Application granted
Publication of CN113961132B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The disclosure provides an interaction processing method and apparatus, an electronic device, and a storage medium, relating to the technical fields of computer vision, artificial intelligence, and the like. The specific implementation scheme is as follows: determining information of a target operation area that the user can reach when operating the mobile terminal in the current operation mode; and configuring interaction with the user based on the information of the target operation area. This scheme can effectively improve the convenience of user interaction.

Description

Interactive processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, in particular to computer vision, artificial intelligence, and related fields, and specifically to an interaction processing method and apparatus, an electronic device, and a storage medium.
Background
With the advent of large-screen mobile terminals, whose large screens offer a good viewing experience, such devices have become popular among users who like to watch live streams, browse videos, read news feeds, and the like.
In many special scenarios, the user can operate the phone with only one hand. Because the screen of the mobile terminal is usually relatively large and the same hand must also hold the device, the user's fingers cannot reach all areas of the screen during one-handed operation.
Disclosure of Invention
The disclosure provides an interaction processing method, an interaction processing device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided an interaction processing method, including:
determining information of a target operation area which can be touched by a user when the user operates the mobile terminal according to a current operation mode;
and performing interaction setting with the user based on the information of the target operation area.
According to another aspect of the present disclosure, there is provided an interaction processing apparatus including:
the determining module is used for determining information of a target operation area which can be reached when a user operates the mobile terminal according to a current operation mode;
and the setting module is used for performing interaction setting with the user based on the information of the target operation area.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the aspect described above and any possible implementation thereof.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of the aspects and any possible implementation described above.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the aspects and any one of the possible implementations described above.
According to the disclosed technology, the convenience of user interaction can be effectively improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic illustration of an interface provided by the present disclosure;
FIG. 4 is another interface schematic provided by the present disclosure;
FIG. 5 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a fifth embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device for implementing the methods of embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments in this disclosure without inventive faculty, are intended to be within the scope of this disclosure.
It should be noted that the terminal device in the embodiments of the present disclosure may include, but is not limited to, smart devices such as a mobile phone, a personal digital assistant (Personal Digital Assistant, PDA), a wireless handheld device, and a Tablet Computer; the display device may include, but is not limited to, a personal computer, a television, or the like having a display function.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. The character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, this embodiment provides an interaction processing method, describing the technical solution of the present disclosure from the mobile terminal side; it may specifically include the following steps:
s101, determining information of a target operation area which can be touched by a user when the user operates the mobile terminal according to a current operation mode;
s102, performing interaction setting with a user based on the information of the target operation area.
The application scenario of the embodiment of the present disclosure may be one in which the user holds the mobile terminal with one hand; because the same hand must also grip the device, the finger used to operate it cannot touch every position on the screen. Based on this scenario, the present embodiment may first determine information of a target operation area that the user can reach when operating the mobile terminal in the current operation mode. That is, the target operation area is the region of the screen that the user's finger can touch under the current operation mode. From this information, the positions on the current screen that the user's finger can easily reach can be determined, the target operation area itself can be located, and interaction with the user can be configured inside it. In this way, when the user uses the mobile terminal in the current operation mode, the finger can easily reach the target operation area, and interaction can be completed very conveniently based on the interaction settings placed there.
In the embodiment of the present disclosure, the information of the target operation area may be expressed in different ways depending on the area's shape. For example, a regular shape such as a square or rectangle may be identified by the coordinates of several boundary points, while an irregular shape may be identified by a boundary line. In practice, a combination of points and lines, or any other representation, may be used; in short, any encoding that clearly identifies the target operation area is acceptable.
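As an illustration (not part of the patent text), the two representations described above can be sketched as follows; the Kotlin type names and the point-in-polygon helper are assumptions introduced here for clarity.

```kotlin
// A minimal sketch of target-operation-area information: rectangular
// regions by corner coordinates, irregular regions by a boundary polyline.
sealed class OperationAreaInfo {
    /** Regular region identified by boundary-point coordinates. */
    data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) : OperationAreaInfo()

    /** Irregular region identified by a closed boundary line. */
    data class Polygon(val boundary: List<Pair<Float, Float>>) : OperationAreaInfo()

    /** Whether a touch point (x, y) falls inside the region. */
    fun contains(x: Float, y: Float): Boolean = when (this) {
        is Rect -> x in left..right && y in top..bottom
        is Polygon -> rayCastContains(boundary, x, y)
    }
}

// Standard ray-casting point-in-polygon test for the irregular case.
fun rayCastContains(poly: List<Pair<Float, Float>>, x: Float, y: Float): Boolean {
    var inside = false
    var j = poly.size - 1
    for (i in poly.indices) {
        val (xi, yi) = poly[i]
        val (xj, yj) = poly[j]
        if ((yi > y) != (yj > y) && x < (xj - xi) * (y - yi) / (yj - yi) + xi) inside = !inside
        j = i
    }
    return inside
}
```

A caller can then test reachability with `contains` regardless of which representation was produced.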
According to the interaction processing method of this embodiment, interaction with the user can be configured based on the determined information of the target operation area. Because the user can easily touch the target operation area when operating the mobile terminal in the current operation mode, interaction becomes easy to complete, which effectively improves the convenience of user interaction and enhances the user experience.
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure; the interactive processing method of the present embodiment further introduces the technical solution of the present disclosure in more detail on the basis of the embodiment shown in fig. 1. As shown in fig. 2, the interactive processing method of the present embodiment may specifically include the following steps:
s201, acquiring screen size information of a mobile terminal;
the screen size information of the mobile terminal of this embodiment is static information of the screen of the mobile terminal. The method can be obtained by detecting the equipment information of the mobile terminal, and can comprise the length and the width of a screen of the mobile terminal. Or the length and the width of the screen of the mobile terminal can be obtained by detecting the boundary of the page displayed on the screen of the mobile terminal.
S202, acquiring operation information of a user using a mobile terminal;
the operation information of the mobile terminal used by the user may refer to some operation information corresponding to when the user operates the mobile terminal according to the current operation mode.
For example, the following steps may be included:
(1) Acquiring handheld state information of a user using a mobile terminal;
the hand-held state information of the mobile terminal used by the user refers to which hand the user uses to operate the mobile terminal, such as left hand or right hand. For example, the hand-held state information of the user using the mobile terminal can be specifically identified based on the data collected by the gravity sensor of the mobile terminal. The hand-held state information identified in this way is very accurate.
(2) Acquiring screen state information of a user using a mobile terminal;
the screen state information of the mobile terminal used by the user refers to whether the user operates the mobile terminal in a horizontal screen or a vertical screen, and specifically, by detecting the current screen state of the mobile terminal, the user can know whether the user uses the mobile terminal in the horizontal screen or the vertical screen.
S203, based on the screen size information of the mobile terminal, the hand-held state information, and the screen state information, using a pre-acquired region estimation model to estimate information of the target operation area that the user can reach when operating the mobile terminal in the current operation mode;
the difference in the operation area on the screen when the user operates the mobile terminal is caused by the different screen sizes, and the different hand-held states and the different screen states when the user uses the mobile terminal. Therefore, in this embodiment, the screen size information of the mobile terminal, the hand-held state information of the mobile terminal used by the user, and the screen state information may be input into the area estimation model acquired in advance, and the area estimation model may estimate the information of the target operation area that the user can touch when operating the mobile terminal according to the current operation mode, so that the accuracy of the estimated information of the target operation area may be effectively ensured.
In one embodiment of the present disclosure, the region estimation model may be downloaded to the mobile terminal from the cloud in advance. For example, the solution of the present disclosure may be used in an application, or in certain scenarios of an application; the user then downloads the region estimation model together with the application's installation package.
The region estimation model downloaded by the mobile terminal is trained in advance in the cloud: the cloud collects training data from the mobile terminals and trains the model on the collected data. The trained model is then packaged into the application's installation package. After the mobile terminal installs the application, whenever information of the target operation area needs to be estimated, the estimation can proceed as in this embodiment.
In another embodiment of the present disclosure, step S203 may instead compute the information of the target operation area directly, without a model, based on the screen size information, hand-held state information, and screen state information. For example, operation-area information corresponding to various screen sizes, both hand-held states, and both screen states may be surveyed in advance and stored; the stored entry matching the current mobile terminal's screen size, hand-held state, and screen state then yields the information of the target operation area reachable under the current operation mode.
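The model-free variant amounts to a lookup table keyed by the same three inputs. A sketch follows, with placeholder table contents; the real entries would come from the pre-surveyed statistics the paragraph mentions.

```kotlin
// Illustrative lookup table for pre-surveyed operation areas.
data class UsageKey(
    val screenBucket: String,              // e.g. a size class like "6.1in"
    val hand: HandednessDetector.Hand,
    val landscape: Boolean
)

val precomputedAreas: Map<UsageKey, OperationAreaInfo> = mapOf(
    // Placeholder entry; real values would be measured statistics.
    UsageKey("6.1in", HandednessDetector.Hand.RIGHT, false) to
        OperationAreaInfo.Rect(left = 360f, top = 1400f, right = 1080f, bottom = 2340f)
)

fun lookupTargetArea(key: UsageKey): OperationAreaInfo? = precomputedAreas[key]
```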
It should be noted that steps S201 to S203 are one implementation of step S101 in the embodiment of the disclosure shown in fig. 1; the information of the target operation area determined in this way is very accurate.
In an optional embodiment of the present disclosure, step S101 may also be implemented as follows: receive a target-operation-area trajectory input by the user, and determine the information of the target operation area reachable when the user operates the mobile terminal in the current operation mode based on that trajectory.
The corresponding scenario is that, while operating the mobile terminal with one hand, the user first traces on the screen the outline of the target operation area that the finger can easily reach, indicating that subsequent interactive elements should be placed inside the enclosed region. The interaction processing apparatus then determines the information of the target operation area from the trajectory drawn by the user. In addition, in practice, following the logic of this disclosure, the information of the target operation area reachable under the current operation mode may also be determined in other ways, which are not detailed here.
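In this variant the drawn trajectory itself already encodes the region, so conversion to area information is direct; a sketch using the polygon form from above:

```kotlin
// Illustrative: turn the user's closed finger trace into area information.
// A real implementation might smooth the trace and close any gap between
// its first and last points before using it as a boundary.
fun areaFromTrace(trace: List<Pair<Float, Float>>): OperationAreaInfo =
    OperationAreaInfo.Polygon(trace)
```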
S204, performing user interaction setting based on the information of the target operation area.
Because the target operation area is easy for the user to reach, interactive elements such as anchor points and hot-spot recommendations can be placed inside it, making interaction with the user very convenient and effectively improving the user experience.
This embodiment uses only the single target operation area that is easiest for the user to operate as an example. Alternatively, operation-area information of other difficulty levels, such as a generally operable area and a hardest-to-operate area, can also be estimated, which is convenient for laying out the whole interface. For example, the screen may be divided into multiple levels by difficulty of operation: a primary operation area, a secondary operation area, a tertiary operation area, and so on. The primary operation area corresponds to the region that is easiest to operate; the secondary operation area can be considered reachable only by adjusting the operation mode, for example by shifting the mobile terminal in some direction; and a tertiary or higher-level operation area can be considered reachable only with the assistance of the other hand. That is, the lower an operation area's level, the more easily the user can reach it under the current operation mode, and the higher the level, the harder it is to reach.
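One possible encoding of this multi-level division, again an illustrative assumption rather than the patented data structure; lower levels are checked first, so a point is assigned the easiest level that contains it.

```kotlin
// Illustrative multi-level operation areas (lower = easier to reach).
enum class OperationLevel { PRIMARY, SECONDARY, TERTIARY }

data class LeveledAreas(val areas: Map<OperationLevel, OperationAreaInfo>)

fun levelOf(leveled: LeveledAreas, x: Float, y: Float): OperationLevel =
    OperationLevel.values().firstOrNull { leveled.areas[it]?.contains(x, y) == true }
        ?: OperationLevel.TERTIARY  // unreachable points default to the hardest level
```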
According to the technical scheme of the present disclosure, the screen size information of the mobile terminal, the hand-held state information, and the screen state information are input into the region estimation model, which can estimate the primary operation area information (i.e., the information of the target operation area), the secondary operation area information, the tertiary operation area information, and so on, with reference to pre-collected statistics such as typical palm size and finger length. The page of the mobile terminal can then be laid out according to these per-level operation areas: content with a high probability of user interaction is displayed in an easily reachable area such as the primary operation area, while content that is display-only or needs no user interaction can be placed in a harder-to-operate area such as the tertiary operation area.
For example, FIG. 3 is a schematic illustration of one interface provided by the present disclosure. As shown in fig. 3, taking right-handed portrait operation of the mobile terminal as an example, the screen is divided into three levels of operation areas. Fig. 4 is another interface schematic, differing from fig. 3 only in that fig. 4 shows landscape operation. The per-level operation areas of the interfaces in figs. 3 and 4 can be determined in the manner described above for this embodiment. The primary operation area is very easy to touch under the current right-handed grip; buttons that need user interaction, such as a view-anchor button, or links to hot-spot information can be placed there for convenient clicking, effectively improving the convenience of user interaction. The secondary operation area requires a little effort from the user to reach, such as stretching a finger, and less-interactive content can be placed there. The tertiary operation area is unlikely to be reached under the current operation mode and may require the other hand; content that needs no user interaction can be placed there. Figs. 3 and 4 are merely two examples and do not limit the disclosure; in practice, the method of the disclosure may be used to identify a single level of operation area or multiple levels, without limitation.
With the above technical scheme, the interaction processing method of this embodiment estimates the target operation area locally on the mobile terminal and configures user interaction based on it, which makes interactive operation very convenient for the user and improves the user experience.
FIG. 5 is a schematic diagram according to a third embodiment of the present disclosure; the interactive processing method of the present embodiment further introduces the technical solution of the present disclosure in more detail on the basis of the foregoing embodiments. As shown in fig. 5, the interactive processing method of the present embodiment may specifically include the following steps:
s501, acquiring screen size information of a mobile terminal;
s502, acquiring handheld state information of a user using a mobile terminal;
s503, acquiring screen state information of a user using the mobile terminal;
s504, acquiring click data information of a user using the mobile terminal;
steps S502-S504 are one implementation of step S202 in the embodiment shown in fig. 2.
Unlike the embodiment shown in fig. 2, acquiring the operation information of the user using the mobile terminal in this embodiment additionally includes acquiring click data information. The click data information refers to clicks made by the user within the reachable operation area under the current hand-held state and screen state; it can be recorded as the coordinates of each click in screen space. In theory, these click coordinates necessarily fall inside the target operation area. Therefore, this embodiment can use them as an additional reference when estimating the information of the target operation area, further improving the estimation accuracy.
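A click sample only needs the screen coordinates plus the grip context it was collected under; the record below is a hypothetical shape for that data, with field names that are assumptions.

```kotlin
// Illustrative click-data record (step S504).
data class ClickSample(
    val x: Float,
    val y: Float,
    val hand: HandednessDetector.Hand,
    val landscape: Boolean,
    val timestampMs: Long
)
```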
Of course, optionally, step S504 of this embodiment may also be included when acquiring the operation information of the user using the mobile terminal in the embodiment shown in fig. 2.
S505, sending the screen size information of the mobile terminal, the hand-held state information, the screen state information, and the click data information to the cloud, so that the cloud uses a pre-trained region estimation model to estimate, based on this information, the target operation area that the user can reach when operating the mobile terminal in the current operation mode;
s506, receiving information of a target operation area returned by the cloud;
Unlike the embodiment shown in fig. 2, in this embodiment the information of the target operation area is estimated in the cloud; the implementation principle is the same and is not repeated here.
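The cloud variant differs from fig. 2 only in where the model runs, so the client side reduces to shipping the collected fields and receiving the area back. The request shape and `CloudClient` transport below are hypothetical; the patent specifies only what is sent and returned.

```kotlin
// Illustrative request/response for steps S505-S506.
data class EstimateRequest(
    val screen: ScreenSize,
    val hand: HandednessDetector.Hand,
    val landscape: Boolean,
    val clicks: List<ClickSample>
)

interface CloudClient {
    // Sends the request to the cloud and returns the estimated area.
    fun estimateArea(request: EstimateRequest): OperationAreaInfo
}
```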
S507, performing user interaction setting based on the information of the target operation area;
Similarly, the solution of the present disclosure may be used in a certain application or in a certain scenario of an application, without limitation. For example, in a live-streaming application, a user may enter a live room while holding the mobile terminal in one hand and swipe left to pull out a list of additional live-room data. The list can then be collapsed by tapping a blank area outside it, tapping a collapse button, or another operation. However, because existing mobile terminals differ greatly in screen size and users differ in palm and finger length, the one-handed collapse operation is directly affected: for example, some female users of large-screen phones cannot reach the blank area outside the list or the collapse button, so the collapse function cannot be operated. With the scheme of this disclosure, for each user, the region estimation model can accurately estimate the information of the target operation area reachable under the current operation mode, based on the user's screen size information, hand-held state information, screen state information, and click data information. An anchor point can then be set inside the target operation area and the collapse button placed there, so that the user can operate the collapse function very conveniently. Of course, this is only one exemplary scenario of this embodiment and does not limit the technical solution of the disclosure. In practice, in other applications or scenarios, the information of the target operation area can likewise be obtained by the disclosed method; buttons for user interaction, or hot-spot information that is convenient to operate, can then be placed inside it, effectively enhancing the convenience of operation and improving the user experience.
S508, integrating screen size information of the mobile terminal, click data information of a user using the mobile terminal and information of a target operation area into one piece of training data;
s509, sending training data to the cloud.
Steps S508-S509 are a further optional scheme of the disclosure: a piece of training data can be assembled from the collected information and the estimation result, with the estimated target-area information serving as the label. The training data is sent to the cloud so that the cloud can continue fine-tuning the region estimation model on the training data received from each mobile terminal, or train region estimation models for other applications or application scenarios.
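A sketch of how steps S508-S509 might bundle and ship a training record, with the estimated area reused as the label; `TrainingUploader` and its `upload` call are hypothetical transports, not APIs the patent names.

```kotlin
// Illustrative training record: collected inputs plus the estimate as label.
data class TrainingRecord(
    val screen: ScreenSize,
    val clicks: List<ClickSample>,
    val label: OperationAreaInfo
)

interface TrainingUploader {
    fun upload(record: TrainingRecord)  // hypothetical cloud upload (step S509)
}
```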
With the above technical scheme, the interaction processing method of this embodiment can accurately estimate the target operation area with the help of the cloud and configure user interaction based on it, which is very convenient for the user and improves the user experience. Furthermore, this embodiment can supply the cloud with training data assembled from the collected information and the estimation results, providing effective data support for model training in the cloud.
FIG. 6 is a schematic diagram according to a fourth embodiment of the present disclosure; as shown in fig. 6, the present embodiment provides an interaction processing apparatus 600, where the interaction processing apparatus 600 may be applied to a mobile terminal, and includes:
a determining module 601, configured to determine information of a target operation area that can be touched when a user operates the mobile terminal according to a current operation mode;
a setting module 602, configured to perform interaction setting with a user based on information of the target operation area.
The implementation principle and technical effect of the interaction processing realized by the above modules of the interaction processing apparatus 600 are the same as those of the related method embodiments above; refer to their detailed description, which is not repeated here.
FIG. 7 is a schematic diagram according to a fifth embodiment of the present disclosure; as shown in fig. 7, the present embodiment provides an interaction processing apparatus 700, and similarly, the interaction processing apparatus 700 may be applied to a mobile terminal.
Unlike the interaction processing apparatus shown in fig. 6, in the interaction processing apparatus 700 of this embodiment the determining module 701 includes:
a first acquisition unit 7011 configured to acquire screen size information of a mobile terminal;
a second acquisition unit 7012 for acquiring operation information of a user using the mobile terminal;
a third obtaining unit 7013, configured to obtain information of the target operation area reachable when the user operates the mobile terminal in the current operation mode, based on the screen size information of the mobile terminal and the operation information of the user using the mobile terminal.
Further alternatively, in one embodiment of the present disclosure, the second obtaining unit 7012 is configured to:
acquiring handheld state information of a user using a mobile terminal;
and acquiring screen state information of the user using the mobile terminal.
Further alternatively, the second acquiring unit 7012 is configured to:
based on the data collected by the gravity sensor of the mobile terminal, the handheld state information of the user using the mobile terminal is identified.
Further alternatively, the second acquiring unit 7012 is further configured to:
and acquiring click data information of the user using the mobile terminal.
Further alternatively, the third acquiring unit 7013 is configured to:
based on the screen size information of the mobile terminal and the operation information of the user using the mobile terminal, using a pre-acquired region estimation model to estimate the information of the target operation area reachable when the user operates the mobile terminal in the current operation mode.
Further alternatively, the third acquiring unit 7013 is configured to:
the method comprises the steps that screen size information of a mobile terminal and state information of a user using the mobile terminal are sent to a cloud end, so that the cloud end adopts a pre-trained region calculation model, and information of a target operation region which can be reached when the user operates the mobile terminal according to a current operation mode is calculated based on the screen size information of the mobile terminal and operation information of the user using the mobile terminal;
and receiving information of the target operation area returned by the cloud.
In the interaction processing apparatus 700 of this embodiment, the setting module 702 is likewise configured to perform interaction setting with the user based on the information of the target operation area.
As a further alternative, as shown in fig. 7, the interaction processing apparatus 700 of the present embodiment may further include:
an integrating module 703, configured to integrate screen size information of the mobile terminal, operation information of a user using the mobile terminal, and information of a target operation area into one piece of training data;
and the sending module 704 is configured to send training data to the cloud.
The implementation principle and technical effect of the interaction processing realized by the above modules of the interaction processing apparatus 700 are the same as those of the related method embodiments above; refer to their detailed description, which is not repeated here.
In the technical solution of the present disclosure, the collection, storage, and application of the user personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as the above-described methods of the present disclosure. For example, in some embodiments, the above-described methods of the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the above-described methods of the present disclosure may be performed as described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the above-described methods of the present disclosure in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed here.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. An interaction processing method, comprising:
determining information of a target operation area which can be touched by a user when the user operates the mobile terminal according to a current operation mode;
based on the information of the target operation area, performing interaction setting with the user;
the method for determining the information of the target operation area which can be reached when the user operates the mobile terminal according to the current operation mode comprises the following steps:
acquiring screen size information of the mobile terminal;
acquiring operation information of the user using the mobile terminal;
acquiring information of a target operation area which can be reached by the user when the user operates the mobile terminal according to a current operation mode based on screen size information of the mobile terminal and operation information of the user using the mobile terminal;
based on the screen size information of the mobile terminal and the operation information of the user using the mobile terminal, the method for acquiring the information of the target operation area which can be reached when the user operates the mobile terminal according to the current operation mode comprises the following steps:
based on the screen size information of the mobile terminal and the operation information of the user using the mobile terminal, adopting a pre-acquired area calculation model to calculate the information of the target operation area which can be reached when the user operates the mobile terminal according to the current operation mode.
2. The method of claim 1, wherein obtaining operation information of the user using the mobile terminal comprises:
acquiring handheld state information of the user using the mobile terminal;
and acquiring screen state information of the user using the mobile terminal.
3. The method of claim 2, wherein obtaining the user's hand-held status information using the mobile terminal comprises:
and identifying the handheld state information of the user using the mobile terminal based on the data acquired by the gravity sensor of the mobile terminal.
4. The method of claim 2, wherein acquiring operation information of a user using the mobile terminal, further comprises:
and acquiring click data information of the user using the mobile terminal.
5. The method of any one of claims 1 to 4, wherein acquiring information of a target operation area that can be reached when the user operates the mobile terminal in a current operation manner based on screen size information of the mobile terminal and operation information of the user using the mobile terminal, comprises:
transmitting screen size information of the mobile terminal and state information of the mobile terminal used by the user to a cloud end so that the cloud end adopts a pre-trained region calculation model, and calculating information of the target operation region which can be touched by the user when the user operates the mobile terminal according to a current operation mode based on the screen size information of the mobile terminal and operation information of the mobile terminal used by the user;
and receiving the information of the target operation area returned by the cloud.
6. The method according to any one of claims 1 to 4, wherein after acquiring information of a target operation area that can be reached when the user operates the mobile terminal in a current operation manner based on screen size information of the mobile terminal and operation information of the user using the mobile terminal, the method further comprises:
integrating screen size information of the mobile terminal, operation information of the user using the mobile terminal and information of the target operation area into one piece of training data;
and sending the training data to the cloud.
7. An interaction processing apparatus comprising:
the determining module is used for determining information of a target operation area which can be reached when a user operates the mobile terminal according to a current operation mode;
the setting module is used for performing interaction setting with the user based on the information of the target operation area;
wherein, the determining module includes:
a first obtaining unit, configured to obtain screen size information of the mobile terminal;
a second obtaining unit, configured to obtain operation information of the user using the mobile terminal;
a third obtaining unit, configured to obtain information of a target operation area that can be touched when the user operates the mobile terminal according to a current operation mode, based on screen size information of the mobile terminal and operation information of the user using the mobile terminal;
wherein the third obtaining unit is configured to:
based on the screen size information of the mobile terminal and the operation information of the user using the mobile terminal, adopting a pre-acquired area calculation model to calculate the information of the target operation area which can be reached when the user operates the mobile terminal according to the current operation mode.
8. The apparatus of claim 7, wherein the second acquisition unit is configured to:
acquiring handheld state information of the user using the mobile terminal;
and acquiring screen state information of the user using the mobile terminal.
9. The apparatus of claim 8, wherein the second acquisition unit is configured to:
and identifying the handheld state information of the user using the mobile terminal based on the data acquired by the gravity sensor of the mobile terminal.
10. The apparatus of claim 8, wherein the second acquisition unit is further configured to:
and acquiring click data information of the user using the mobile terminal.
11. The apparatus according to any one of claims 7-10, wherein the third acquisition unit is configured to:
transmitting screen size information of the mobile terminal and state information of the mobile terminal used by the user to a cloud end so that the cloud end adopts a pre-trained region calculation model, and calculating information of the target operation region which can be touched by the user when the user operates the mobile terminal according to a current operation mode based on the screen size information of the mobile terminal and operation information of the mobile terminal used by the user;
and receiving the information of the target operation area returned by the cloud.
12. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
the integration module is used for integrating the screen size information of the mobile terminal, the operation information of the user using the mobile terminal and the information of the target operation area into one piece of training data;
and the sending module is used for sending the training data to the cloud.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202111115617.4A 2021-09-23 2021-09-23 Interactive processing method and device, electronic equipment and storage medium Active CN113961132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111115617.4A CN113961132B (en) 2021-09-23 2021-09-23 Interactive processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113961132A CN113961132A (en) 2022-01-21
CN113961132B (granted) 2023-07-25

Family

ID=79462516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111115617.4A Active CN113961132B (en) 2021-09-23 2021-09-23 Interactive processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113961132B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016090773A1 (en) * 2014-12-09 2016-06-16 捷开通讯(深圳)有限公司 Information acquisition method and operating method for operating mobile terminal with one hand, and mobile terminal
CN106412232A (en) * 2016-08-26 2017-02-15 珠海格力电器股份有限公司 Scaling method and device for controlling operation interface, and electronic device
CN106527904A (en) * 2016-09-20 2017-03-22 维沃移动通信有限公司 Method for adjusting display interface and mobile terminal
CN106648419A (en) * 2016-11-16 2017-05-10 努比亚技术有限公司 Display processing method and device and terminal
CN106855785A (en) * 2016-12-16 2017-06-16 广东欧珀移动通信有限公司 Method, device and mobile terminal for preventing accidental screen touches during one-handed holding
CN107329683A (en) * 2017-06-26 2017-11-07 努比亚技术有限公司 Grip-based interaction method, device and computer-readable storage medium
CN108769396A (en) * 2018-05-16 2018-11-06 Oppo(重庆)智能科技有限公司 Display control method, device, electronic device and storage medium
CN112799530A (en) * 2020-12-31 2021-05-14 科大讯飞股份有限公司 Touch screen control method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Single-handed target selection method for large-screen mobile phones; 辛义忠; 李洋; 李岩; 姜欣慧; 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics) (10); full text *

Also Published As

Publication number Publication date
CN113961132A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
CN105867751B (en) Operation information processing method and device
CN109583391B (en) Key point detection method, device, equipment and readable medium
CN112650790B (en) Target point cloud plane determining method and device, electronic equipment and storage medium
CN112634343A (en) Training method of image depth estimation model and processing method of image depth information
CN112306235A (en) Gesture operation method, device, equipment and storage medium
CN112994980B (en) Time delay test method, device, electronic equipment and storage medium
CN111783889B (en) Image recognition method and device, electronic equipment and computer readable medium
CN113961132B (en) Interactive processing method and device, electronic equipment and storage medium
CN113033346A (en) Text detection method and device and electronic equipment
CN113127780A (en) Page loading method and device and electronic equipment
CN112784102A (en) Video retrieval method and device and electronic equipment
CN111833391A (en) Method and device for estimating image depth information
CN108874141B (en) Somatosensory browsing method and device
CN113792876B (en) Backbone network generation method, device, equipment and storage medium
CN114283398A (en) Method and device for processing lane line and electronic equipment
CN113327311A (en) Virtual character based display method, device, equipment and storage medium
CN114090158B (en) Display method, display device, electronic equipment and medium
CN114690997B (en) Text display method and device, equipment, medium and product
CN114459494B (en) Method and device for acquiring reachable area, electronic equipment and storage medium
CN114416937B (en) Man-machine interaction method, device, equipment, storage medium and computer program product
CN116301361A (en) Target selection method and device based on intelligent glasses and electronic equipment
CN111666969B (en) Method and device for calculating image-text similarity, electronic equipment and readable storage medium
CN113642559B (en) Text acquisition method, device and equipment based on scanning equipment and storage medium
CN114092874B (en) Training method of target detection model, target detection method and related equipment thereof
US20220342525A1 (en) Pushing device and method of media resource, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant