CN114527920A - Man-machine interaction method and electronic equipment


Info

Publication number: CN114527920A
Application number: CN202011187246.6A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: control, electronic device, visually impaired user
Inventor: 相超
Current Assignee: Huawei Device Co Ltd
Original Assignee: Huawei Device Co Ltd
Application filed by Huawei Device Co Ltd
Priority to: CN202011187246.6A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object, using icons

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of terminal technologies, and discloses a man-machine interaction method and an electronic device, which aim to improve efficiency and the use experience of a visually impaired user by simplifying the operation flow for triggering broadcast and selecting a target control. The method includes: detecting a first operation triggered by a visually impaired user, where the first operation instructs the electronic device to broadcast at least one control displayed in its current display interface; in response to the first operation, broadcasting the at least one control in the current display interface in sequence according to a set order, where the broadcast interval between any two adjacent controls is a preset duration; detecting a second operation triggered by the visually impaired user within the preset duration after a first control is broadcast, where the second operation instructs that the first control is selected, and the first control is any one of the at least one control; and in response to the second operation, determining the first control as the target control selected by the visually impaired user.

Description

Man-machine interaction method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to a human-computer interaction method and electronic equipment.
Background
To facilitate use by visually impaired users, smart-terminal electronic devices such as tablets and mobile phones are usually provided with accessibility functions through which visually impaired users can interact with the device.
An existing accessibility function provides voice prompts: the visually impaired user must touch a control on the display screen of the electronic device, and a voice prompt then tells the user which control is currently being touched. Fig. 1 illustrates this prior-art mode of interaction between a visually impaired user and an electronic device. For example, when the visually impaired user touches the icon "recorder" on the display screen, the electronic device announces by voice "recorder, double-click to open the recorder"; likewise, when the visually impaired user moves to touch another icon "phone", the electronic device announces "phone, double-click to open the phone".
It can thus be seen that in the prior art, when a visually impaired user wants to select an application or function, the user has to slide and explore continuously on the display screen, and for every control touched must wait for the electronic device to announce it before taking the next step. The operation process is cumbersome and inefficient, which harms the use experience of visually impaired users.
Disclosure of Invention
The embodiments of the application provide a human-computer interaction method and an electronic device, which simplify the operation flow for triggering broadcast and selecting a target control, thereby improving the efficiency with which a visually impaired user uses a smart device and improving the user's experience.
In a first aspect, an embodiment of the present application provides a human-computer interaction method applied to an electronic device. The method includes: detecting a first operation triggered by a visually impaired user, where the first operation instructs the electronic device to broadcast at least one control displayed in its current display interface; in response to the first operation, broadcasting the at least one control in sequence according to a set order, where the broadcast interval between any two adjacent controls is a preset duration; detecting a second operation triggered by the visually impaired user within the preset duration after a first control is broadcast, where the second operation instructs that the first control is selected, and the first control is any one of the at least one control; and in response to the second operation, determining the first control as the target control selected by the visually impaired user.
In this embodiment of the application, the controls are broadcast in response to one operation of the visually impaired user, and the target control is selected in response to another. The user can thus find the desired control with only two operations and without exploratory touching, which simplifies the operation flow, improves interaction efficiency, and improves the user experience. A code-level outline of this flow is sketched below.
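Although the claims are implementation-agnostic, the two-operation flow can be outlined in code. The following Kotlin sketch is illustrative only: the class, the control list, the interval value, and the scheduler hook are assumptions, not part of the claimed method.

```kotlin
// Minimal sketch of the claimed two-operation flow. The class, the control list,
// the 5 s interval, and the scheduler hook are illustrative assumptions only.
class AnnouncementSession(
    private val controls: List<String>,               // control texts, in the set order
    private val intervalMs: Long = 5_000L,            // assumed preset broadcast interval
    private val speak: (String) -> Unit,              // e.g. backed by a TTS engine
    private val schedule: (Long, () -> Unit) -> Unit  // e.g. a Handler.postDelayed wrapper
) {
    private var lastAnnounced: String? = null

    // First operation: broadcast the controls one by one at the preset interval.
    fun onFirstOperation() {
        controls.forEachIndexed { i, control ->
            schedule(i * intervalMs) {
                lastAnnounced = control
                speak(control)
            }
        }
    }

    // Second operation, arriving within the preset window: the control
    // broadcast last is the target control.
    fun onSecondOperation(): String? = lastAnnounced
}
```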
In one possible design, detecting the first operation by which the visually impaired user triggers broadcasting of the controls includes: detecting that a finger of the visually impaired user is at a first position and remains at the first position until the visually impaired user triggers the second operation. The first position is any position on the display screen of the electronic device; alternatively, the first position is located above the display screen at a distance from it smaller than a first distance threshold. With this design, the operation required of the visually impaired user is simple, making it easy to trigger broadcasting of the controls.
In one possible design, the at least one control consists of the controls contained in a set area of the current display interface of the electronic device, where the set area is determined according to the first position, or according to the projection of the first position onto the display screen of the electronic device.
In one possible design, detecting the second operation triggered by the visually impaired user includes: detecting that the finger of the visually impaired user has left the first position. With this design, the visually impaired user triggers the second operation of selecting the target control simply by lifting the finger used to trigger broadcasting, which is convenient and makes it easy to select the desired control.
In one possible design, a fingerprint identification area is provided on the electronic device, and detecting the first operation triggered by the visually impaired user includes: detecting that a finger of the visually impaired user touches the fingerprint identification area and keeps touching it until the visually impaired user triggers the second operation. With this design, the operation required of the visually impaired user is simple, making it easy to trigger broadcasting of the controls.
In one possible design, the at least one control is all controls in a currently displayed interface of the electronic device.
In one possible design, detecting the second operation triggered by the visually impaired user includes: detecting that the finger of the visually impaired user leaves the fingerprint identification area. With this design, the visually impaired user triggers the second operation of selecting the target control simply by lifting the finger used to trigger broadcasting, which is convenient and makes it easy to select the desired control.
In a possible design, before the at least one control is broadcast in sequence according to the set order, the method further includes: if the current display interface includes a popup control, broadcasting the sub-controls and/or text information contained in the popup control; receiving first voice information input by the visually impaired user, where the first voice information instructs closing of the popup control; and closing the popup control in the current display interface according to the first voice information.
In one possible design, before detecting the first operation triggered by the visually impaired user, the method further includes: identifying the visually impaired user; entering a barrier-free mode; wherein the electronic device is capable of responding to the first operation and the second operation while in the barrier-free mode.
In one possible design, the method further includes: reducing the display brightness of the display screen of the electronic device upon entering the barrier-free mode. This design protects the privacy of the visually impaired user and also reduces power consumption. One plausible realization is sketched below.
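The patent does not name a platform. On Android (an assumption), one way to realize the dimming described here is the per-window brightness override; the chosen brightness value is likewise an assumption.

```kotlin
import android.app.Activity
import android.view.WindowManager

// Hypothetical sketch: dim only the current window when entering the barrier-free
// mode. screenBrightness overrides the system brightness for this window alone;
// 0.0f is darkest, 1.0f brightest, and BRIGHTNESS_OVERRIDE_NONE restores the default.
fun Activity.applyBarrierFreeDimming(enter: Boolean) {
    val lp = window.attributes
    lp.screenBrightness =
        if (enter) 0.01f // assumed near-dark level for privacy and power saving
        else WindowManager.LayoutParams.BRIGHTNESS_OVERRIDE_NONE
    window.attributes = lp
}
```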
In a second aspect, an embodiment of the present application provides an electronic device including one or more processors and one or more memories, where the one or more memories store one or more computer programs comprising instructions which, when executed by the one or more processors, cause the electronic device to perform the following steps: detecting a first operation triggered by a visually impaired user, where the first operation instructs the electronic device to broadcast at least one control displayed in its current display interface; in response to the first operation, broadcasting the at least one control in sequence according to a set order, where the broadcast interval between any two adjacent controls is a preset duration; detecting a second operation triggered by the visually impaired user within the preset duration after a first control is broadcast, where the second operation instructs that the first control is selected, and the first control is any one of the at least one control; and in response to the second operation, determining the first control as the target control selected by the visually impaired user.
In this embodiment of the application, the controls are broadcast in response to one operation of the visually impaired user, and the target control is selected in response to another. The user can thus find the desired control with only two operations and without exploratory touching, which simplifies the operation flow, improves interaction efficiency, and improves the user experience.
In one possible design, the electronic device further includes a display screen, and the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps: detecting that a finger of the visually impaired user is at a first position and remains at the first position until the visually impaired user triggers the second operation. The first position is any position on the display screen of the electronic device; alternatively, the first position is located above the display screen at a distance from it smaller than a first distance threshold. With this design, the operation required of the visually impaired user is simple, making it easy to trigger broadcasting of the controls.
In one possible design, the at least one control consists of the controls contained in a set area of the current display interface of the electronic device, where the set area is determined according to the first position, or according to the projection of the first position onto the display screen of the electronic device.
In one possible design, the instructions, when executed by the one or more processors, cause the electronic device to perform the following step: detecting that the finger of the visually impaired user has left the first position. With this design, the visually impaired user triggers the second operation of selecting the target control simply by lifting the finger used to trigger broadcasting, which is convenient and makes it easy to select the desired control.
In one possible design, a fingerprint identification area is provided on the electronic device, and the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps: detecting that a finger of the visually impaired user touches the fingerprint identification area and keeps touching it until the visually impaired user triggers the second operation. With this design, the operation required of the visually impaired user is simple, making it easy to trigger broadcasting of the controls.
In one possible design, the at least one control is all controls in a currently displayed interface of the electronic device.
In one possible design, the instructions, when executed by the one or more processors, cause the electronic device to perform the following step: detecting that the finger of the visually impaired user leaves the fingerprint identification area. With this design, the visually impaired user triggers the second operation of selecting the target control simply by lifting the finger used to trigger broadcasting, which is convenient and makes it easy to select the desired control.
In one possible design, the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps before broadcasting the at least one control in sequence according to the set order: if the current display interface includes a popup control, broadcasting the sub-controls and/or text information contained in the popup control; receiving first voice information input by the visually impaired user, where the first voice information instructs closing of the popup control; and closing the popup control in the current display interface according to the first voice information.
In one possible design, the instructions, when executed by the one or more processors, cause the electronic device to, prior to detecting a first operation triggered by a visually impaired user, perform the steps of: identifying the visually impaired user; entering a barrier-free mode; wherein the electronic device is capable of responding to the first operation and the second operation while in the barrier-free mode.
In one possible design, the instructions, when executed by the one or more processors, cause the electronic device to perform the following step: reducing the display brightness of the display screen of the electronic device upon entering the barrier-free mode. This design protects the privacy of the visually impaired user and also reduces power consumption.
In a third aspect, an embodiment of the present application further provides an electronic device including functional modules capable of performing the behaviors of the electronic device in the first aspect or any one of its possible designs; these functional modules may be implemented by hardware, or by hardware executing corresponding software.
In a fourth aspect, an embodiment of the present application further provides a chip coupled to a memory and configured to read and execute the computer program instructions stored in the memory, so as to perform the method described in the first aspect or any one of its possible designs.
In a fifth aspect, an embodiment of the present application further provides a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect or any one of its possible designs.
In a sixth aspect, an embodiment of the present application further provides a program product which, when run on an electronic device, causes the electronic device to perform the method described in the first aspect or any one of its possible designs.
Drawings
FIG. 1 is a schematic diagram of a human-computer interaction process in the prior art;
fig. 2 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 3 is a first flowchart of a human-computer interaction method according to an embodiment of the present application;
FIG. 4 is an interaction diagram for identifying a visually impaired user according to an embodiment of the present application;
fig. 5 is a schematic diagram of quickly entering the barrier-free mode according to an embodiment of the present application;
fig. 6a is a first schematic diagram of an interaction interface for triggering broadcasting of controls according to an embodiment of the present application;
fig. 6b is a second schematic diagram of an interaction interface for triggering broadcasting of controls according to an embodiment of the present application;
FIG. 7a is a first schematic diagram of an interaction interface for triggering selection of a target control according to an embodiment of the present application;
FIG. 7b is a second schematic diagram of an interaction interface for triggering selection of a target control according to an embodiment of the present application;
fig. 8 is a second flowchart of a human-computer interaction method according to an embodiment of the present application;
fig. 9a is a third schematic diagram of an interaction interface for triggering broadcasting of controls according to an embodiment of the present application;
fig. 9b is a fourth schematic diagram of an interaction interface for triggering broadcasting of controls according to an embodiment of the present application;
FIG. 10a is a third schematic diagram of an interaction interface for triggering selection of a target control according to an embodiment of the present application;
FIG. 10b is a fourth schematic diagram of an interaction interface for triggering selection of a target control according to an embodiment of the present application;
fig. 11a is a first schematic diagram of an interaction flow of a prompt interface according to an embodiment of the present application;
FIG. 11b is a second schematic diagram of an interaction flow of a prompt interface according to an embodiment of the present application;
fig. 12 is a third flowchart of a human-computer interaction method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described in detail below with reference to the drawings in the following embodiments of the present application.
Hereinafter, some terms in the present application are explained so as to be easily understood by those skilled in the art.
The control in the embodiments of the present application refers to a user interface control, for example, an application icon, a text box, a button control, a list box, a date/time control, or a popup control displayed in a display interface of an electronic device. Broadcasting a control means announcing by voice the text information displayed in the control. For example, in the illustration of fig. 1, broadcasting the application icon "recorder" means announcing the displayed text "recorder".
"At least one" in the embodiments of the present application means one or more, and "a plurality" means two or more.
In addition, it should be understood that the terms "first", "second", and the like in the description of the present application are used to distinguish between descriptions and are not to be construed as indicating or implying relative importance or order.
The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the preceding and following associated objects.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The man-machine interaction method provided by the embodiments of the application can be applied to electronic devices such as a mobile phone, a tablet computer, a wearable device (e.g., a watch, a bracelet, a helmet, or a necklace), a smart home device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, and a personal digital assistant (PDA).
For example, fig. 2 shows a schematic structural diagram of the electronic device 100. As shown in fig. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like.
Processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The application processor, also called the main processor, runs the operating system (OS) and related application software. The controller may be the neural center and command center of the electronic device 100; it generates operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. The NPU processes input information rapidly by drawing on the structure of biological neural networks, for example the transfer mode between human-brain neurons, and can also learn continuously. For example, a convolutional neural network model can be used for large-scale information recognition and screening, and part of its training functions can be selectively implemented to achieve context-aware training and recognition. Applications such as intelligent recognition on the electronic device 100 can be realized through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as pictures, videos, and the like are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system, the software code of at least one application program (such as an iQIYI application or a WeChat application), and the like. The data storage area may store data (such as images and videos) generated during use of the electronic device 100. In addition, the internal memory 121 may include a high-speed random access memory and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The internal memory 121 may be used to store the computer-executable program code of the human-computer interaction method provided by the embodiments of the application, the executable program code including instructions. By executing this code stored in the internal memory 121, the processor 110 enables the electronic device 100 to perform the human-computer interaction method provided by the embodiments of the application.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting into an image visible to the naked eye. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. For example, in this embodiment, the camera 193 may include a front camera and a rear camera. The front camera may be disposed on the display screen 194, and the rear camera may be disposed on a back panel of the electronic device 100 opposite to the display screen 194. The front camera may further be provided with a front photosensitive sensor; together they detect the user's eyes and/or pupil-change information and send this information to the processor for analysis. The eye-use habits of visually impaired people can thereby be detected, for example, no gazing activity of the user's eyes within a certain period, i.e., the eyes remain closed; or the user may be determined to be visually impaired because the pupils do not react to light or no image of the electronic device appears in the pupils. As another example, the camera 193 in the electronic device 100 includes at least a camera 1 and a camera 2, where the field angle of camera 1 is smaller than that of camera 2. For example, camera 1 is a telephoto camera and camera 2 is a wide-angle camera (either a normal wide-angle or an ultra-wide-angle camera); or camera 1 is a normal wide-angle camera and camera 2 is an ultra-wide-angle camera; other combinations are possible. In some embodiments, camera 1 and camera 2 may both be rear cameras or both be front cameras. It should be understood that the electronic device 100 may also include more cameras, such as a telephoto camera.
The display screen 194 may be used to display information input by or provided to the user and the various menus of the electronic device 100, and may additionally accept user input such as touch operations; it displays the interfaces of applications, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. The display screens 194 may be connected in a foldable or flexible manner: when the plurality of display screens 194 are folded, the electronic device is convenient to carry, and when they are unfolded and connected, the user can view a large screen, improving the user experience. When the electronic device includes a plurality of display screens, the human-computer interaction method in the embodiments of the application may be applied to a single display screen, or to the whole large screen formed when the plurality of display screens are unfolded and connected.
Optionally, the display screen 194 further includes a touch panel. The touch panel, also called a touch screen, a touch sensitive screen, etc., may collect contact or non-contact operations (such as operations performed by a user on or near the touch panel using any suitable object or accessory, such as a finger, a stylus, etc., and may also include somatosensory operations; the operations include single-point control operations, multi-point control operations, etc.) on or near the touch panel, and drive the corresponding connection device according to a preset program. The touch panel may include two parts of a touch detection device and a touch controller. The touch detection device detects whether a user touches or not, and the touch direction and the touch posture of the user, detects a signal caused by input operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into information that can be processed by the processor, sends the information to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, a surface acoustic wave, and the like, and may also be implemented by any technology developed in the future. Further, the touch panel may cover the display panel, a user may operate on or near (e.g., directly above) the touch panel covered on the display panel according to the content displayed on the display panel (the display content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, etc.), the touch panel detects the operation on or near the touch panel, and transmits the operation to the processor 110 to determine the user input, and then the processor 110 provides the corresponding visual output on the display panel according to the user input.
For example, in this embodiment of the application, after the touch detection device in the touch panel detects a touch operation input by a visually impaired user, it sends the corresponding signal to the touch controller in real time; the touch controller converts the signal into touch-point coordinates and sends them to the processor 110. The processor 110 determines, according to the received touch-point coordinates, that the touch operation is an operation for triggering broadcasting of controls, and then, in response to the touch operation, broadcasts at least one control in the target area of the current display interface corresponding to the touch coordinates. The specific implementation of this part of the scheme is described in detail later.
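As a hedged illustration of the touch handling just described, the following Kotlin sketch maps Android MotionEvent actions onto the two claimed operations. The callback names are assumptions, and the actual flow in the text routes coordinates through the touch controller and processor 110 rather than a view listener.

```kotlin
import android.view.MotionEvent
import android.view.View

// Hypothetical sketch: a finger landing at a point starts broadcasting (first
// operation) and the finger lifting is treated as selection (second operation).
class BarrierFreeTouchListener(
    private val onFirstOperation: (x: Float, y: Float) -> Unit,
    private val onSecondOperation: () -> Unit
) : View.OnTouchListener {
    override fun onTouch(v: View, event: MotionEvent): Boolean =
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> { onFirstOperation(event.x, event.y); true }
            MotionEvent.ACTION_UP   -> { onSecondOperation(); true }
            else -> false
        }
}
```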
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. For example, the controls on the display interface of the display screen 194 are broadcast, voice information input by the user is received, and music playing, recording, and the like are supported.
The sensor module 180 may include a distance sensor 180A, a proximity light sensor 180B, a fingerprint sensor 180C, a touch sensor 180D, and the like.
A distance sensor 180A is used for measuring distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, the distance sensor 180A is disposed in the display screen and senses whether the user's finger is within the distance range it can detect. For example, when the display screen 194 faces upward, the electronic device 100 may use the distance sensor 180A in combination with the aforementioned camera 193 to measure whether the finger of the visually impaired user is at a height directly above the display screen 194 whose distance from the screen is smaller than the upper limit of that detectable range; the electronic device 100 may likewise use the distance sensor 180A together with the camera 193 to detect whether the visually impaired user's finger has left the position directly above the display screen 194.
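A rough Kotlin sketch of the hover check described above, using Android's proximity sensor APIs. The 3 cm threshold is an assumption, the camera fusion mentioned in the text is omitted, and many devices only report a coarse near/far value here.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical sketch: report whether something hovers within a distance
// threshold of the screen, via the proximity sensor.
class HoverDetector(
    context: Context,
    private val thresholdCm: Float = 3f,
    private val onHoverChanged: (hovering: Boolean) -> Unit
) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY)

    fun start() {
        proximity?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // values[0] is the distance to the nearest object in centimeters.
        onHoverChanged(event.values[0] < thresholdCm)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```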
The proximity light sensor 180B may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 100 can determine that an object is nearby; when insufficient reflected light is detected, it can determine that no object is nearby. The electronic device 100 can use the proximity light sensor 180B to detect that the user is holding it close to the ear for a call and automatically turn off the display screen to save power. The proximity light sensor 180B may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The fingerprint sensor 180C is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to unlock the device, access the application lock, take photos with the fingerprint, answer incoming calls with the fingerprint, and so on. The fingerprint sensor 180C may be disposed on the display screen 194, collecting the user's fingerprint in the form of a virtual key; it may also be disposed on a side frame or the back panel of the electronic device 100, collecting the fingerprint in the form of a physical key. In this embodiment of the application, the fingerprint sensor 180C may also be used to detect a touch operation applied to it and pass the detected touch operation to the application processor to determine the operation instruction triggered by the user.
The touch sensor 180D is also referred to as a "touch panel". The touch sensor 180D may be disposed on the display screen 194, and the display screen including the touch sensor 180D and the display panel may also be referred to as a "touch screen". The touch sensor 180D is used to detect a touch operation applied thereto or nearby. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type or operation instruction triggered by the user. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180D may be disposed on the surface of the electronic device 100 at a different position than the display screen 194.
The keys 190 include physical keys such as a power-on key and a volume key. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100. The SIM card interface 195 is used to connect a SIM card. The SIM card may be brought into and out of contact with the electronic device 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195.
It is to be understood that the components shown in fig. 2 do not constitute a specific limitation on the electronic device 100; the electronic device may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently. In addition, the combination/connection relationships between the components in fig. 2 may also be modified.
The following describes in detail the scheme provided by the embodiments of the present application, taking its application to the electronic device 100 as an example.
The following describes in detail the interaction process between the electronic device and the visually impaired user with reference to Example 1 and Example 2. It should be noted that Example 1 and Example 2 provided in the embodiments of the present application may each be implemented separately or in combination, and modifications based on them fall within the scope of the embodiments of the present application.
Example 1
As shown in fig. 3, an embodiment of the present application provides a flowchart of an interaction method applied to the electronic device described above. The electronic device performs the following steps.
S301: the electronic device identifies the visually impaired user.
Considering that a visually impaired user typically keeps the eyes closed or holds the device close to the ear rather than looking at the display screen, in one possible implementation the electronic device may detect through its front camera that the user performs no gazing activity within a preset period; for example, if the user's eyes remain closed for 6 s, the electronic device may determine that the user is visually impaired. In another possible implementation, even if the visually impaired user's eyes are open and facing the display screen, the electronic device may detect through its front camera that the user's pupils do not react to light, or that no image of the mobile phone appears in the pupils, and then determine that the user is visually impaired. Illustratively, fig. 4 provides an interaction diagram for identifying a visually impaired user, showing the electronic device detecting through the front camera that the user's eyes are closed. Optionally, the electronic device may be configured in advance for a visually impaired user, in which case it need not perform S301. That is, S301 is not an essential step in this embodiment of the application and may be omitted depending on the actual configuration, which is not limited here.
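For illustration, the eyes-closed heuristic of S301 might be reduced to the following Kotlin sketch. The per-frame eyeOpenProbability input is assumed to come from a face-analysis library, and the 0.5 threshold is an assumption; the 6 s window matches the example above.

```kotlin
// Hypothetical sketch: if no camera frame in the last `windowMs` shows an open
// eye, treat the user as visually impaired.
class EyesClosedDetector(private val windowMs: Long = 6_000L) {
    private var lastOpenEyeAtMs = System.currentTimeMillis()

    // Called once per analyzed camera frame; returns true once the eyes have
    // stayed closed for the whole window.
    fun onFrame(eyeOpenProbability: Float, nowMs: Long = System.currentTimeMillis()): Boolean {
        if (eyeOpenProbability > 0.5f) lastOpenEyeAtMs = nowMs
        return nowMs - lastOpenEyeAtMs >= windowMs
    }
}
```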
Alternatively, considering that the same electronic device may be used both by visually impaired users and by ordinary users, one possible implementation is to preconfigure a barrier-free mode in the electronic device and set the electronic device to respond to specific operations of the user when it is in that mode. For example, when detecting that the user's finger presses the display screen or the fingerprint identification area for a long time, or that the finger stays above the display screen, the electronic device can broadcast the relevant controls in response to the operation. It should also be noted that "barrier-free mode" is merely an example name for such a functional mode; other names may be used instead, as long as they denote a functional mode in which the electronic device provides an auxiliary service to visually impaired users. The name is not limited in this embodiment of the application.
Further, the barrier-free mode may be started, i.e., the electronic device may be made to enter the barrier-free mode, in the following ways. In one possible implementation, the electronic device automatically starts the barrier-free mode upon identifying a visually impaired user. In another possible implementation, the electronic device may be preconfigured with shortcuts for starting the barrier-free mode, such as double-clicking the fingerprint identification area, holding down the power key and the volume-down key simultaneously for 5 seconds, double-clicking the display screen, or sliding three fingers down the display screen simultaneously. Illustratively, referring to fig. 5, the electronic device detects that the user holds down the power key and the volume-down key simultaneously for 5 seconds, and then quickly enters the barrier-free mode.
S302: the electronic device detects that a finger of the visually impaired user is at a first position, and can thereby determine that the visually impaired user has instructed it to broadcast the controls.
In one possible implementation, the electronic device detects through the touch panel that a finger of the visually impaired user touches (contacts) a certain position on the display screen, and determines that the visually impaired user has instructed broadcasting of the controls. In another possible implementation, the electronic device detects through the distance sensor and the camera that a finger of the user is hovering over the display screen, and can likewise determine that the visually impaired user has instructed broadcasting of the controls. "Hovering" means that the user's finger is located above the display screen at a distance from it smaller than a first distance threshold. Optionally, the electronic device may determine the first distance threshold according to the upper limit of the distance range detectable by the distance sensor, or a fixed value may be preset; this is not limited in this embodiment of the application.
Optionally, the electronic device may be set to determine that the visually impaired user has instructed broadcasting of the controls when a single finger of the user is detected at the first position, or only when several fingers of the user are detected at the first position; this is not limited in the embodiments of the application. Taking a mobile phone with a display screen as an example, the interaction interfaces for triggering broadcasting illustrated in fig. 6a/6b show a single finger of the visually impaired user touching a position on the display screen of the mobile phone, whereupon the mobile phone determines that the visually impaired user has instructed broadcasting of the controls.
S303: the electronic device determines a target area in its current display interface according to a preset area size and the first position, where the target area contains the first position or the projection of the first position onto the display screen of the electronic device.
Optionally, the preset area size is smaller than or equal to the size of the display interface of the electronic device; that is, the target area determined in this embodiment may be the whole current display interface or only part of it.
Alternatively, the area size and/or area shape may be preset in the electronic device, for example a rectangle, square, circle, or ellipse of fixed size. In one possible implementation, if the set area has a fixed size and a shape with vertices, such as a rectangle or a square, the electronic device may determine the target area by using the first position as a vertex of the area in combination with the set area size and shape. In another possible implementation, if the set area has a fixed size and a shape without vertices, such as a circle or an ellipse, the electronic device may determine the target area by using the first position as the center of the area in combination with the set area size and shape.
For example, as shown in fig. 6a/6b, when the mobile phone detects that a single finger of the visually impaired user touches a position on the display screen, and the preset area shape is a rectangle, the position of the finger on the display screen is taken as the top-right corner of the rectangle, and a target area, such as the dashed rectangular box shown in fig. 6a/6b, is determined in the current display interface according to the preset area size.
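The rectangle construction in S303 can be written as a small pure function. The top-right-vertex anchoring follows the fig. 6a/6b example; the clamping to the screen bounds is an added assumption.

```kotlin
import android.graphics.Rect

// Illustrative computation of the target area in S303: the touch point is taken
// as the top-right vertex of a preset-size rectangle, clamped to the screen.
fun targetArea(
    touchX: Int, touchY: Int,
    areaWidth: Int, areaHeight: Int,
    screenWidth: Int, screenHeight: Int
): Rect = Rect(
    (touchX - areaWidth).coerceAtLeast(0),           // left: extend leftward from the touch
    touchY.coerceAtLeast(0),                         // top: the touch row itself
    touchX.coerceAtMost(screenWidth),                // right: the touch column
    (touchY + areaHeight).coerceAtMost(screenHeight) // bottom: extend downward
)
```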
In practical use, the visually impaired user can also estimate, from past operating experience, the approximate position on the display screen of the control to be selected, and touch that position, so that the electronic device will broadcast the desired control sooner, improving the efficiency of finding the target control. For example, through long-term use of the mobile phone, a visually impaired user may know that the phone control is roughly in the lower-left corner of the display screen, and can then directly touch the lower-left area of the display screen of the electronic device.
Further, the electronic device responds to the user's instruction to broadcast the plurality of controls included in the target area, and specifically executes the following steps S304 and S305a, or S304 and S305b.
S304, the electronic device judges whether the current display interface includes a popup control. If not, S305a is executed; if so, S305b is executed.
S305a, the electronic device broadcasts, in a set sequence, the plurality of controls included in the target area in the current display interface, with a set interval duration between every two adjacent broadcasts; that is, after the first control is broadcast, the second control is broadcast after the set interval, then the third control after another set interval, and so on. The interval duration may be set according to experience of the time required to respond to the visually impaired user's indication, for example so as to ensure that an indication triggered by the visually impaired user within the interval duration to select the target control can be responded to; the interval duration may also be a fixed value set in advance, which is not limited in the embodiments of the present application.
For example, as shown in fig. 6a, the mobile phone may broadcast, in sequence through a speaker disposed on the display screen, the plurality of controls contained in the rectangular dashed frame. For example, the text information "camera", "address book", "telephone" and "information" displayed in the current display interface by the 4 application icons in the rectangular dashed frame is broadcast cyclically in order from left to right; optionally, the preset interval duration between every two adjacent broadcasts is 5 s. Then S306 is executed.
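As a sketch of the cyclic broadcast in S305a, under the assumption of a simple polling loop; speak() and stopped() are hypothetical stand-ins for a text-to-speech call and a selection signal, not framework APIs.

```kotlin
// Illustrative sketch of S305a: announce the controls of the target area in
// order, cyclically, with a fixed interval (e.g. 5 s) between adjacent controls.
fun broadcastCyclically(labels: List<String>, intervalMillis: Long = 5_000,
                        speak: (String) -> Unit, stopped: () -> Boolean) {
    while (!stopped()) {                 // loop until a selection interrupts the pass
        for (label in labels) {          // e.g. "camera", "address book", "telephone", "information"
            if (stopped()) return
            speak(label)                 // stand-in for a text-to-speech announcement
            Thread.sleep(intervalMillis) // preset interval between adjacent broadcasts
        }
    }
}
```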
S305b, the electronic device broadcasts the sub-controls and/or text information contained in the popup control, and receives first voice information input by the visually impaired user, where the first voice information is used to indicate closing of the popup control; after the popup control is closed, the controls included in the target area are broadcast, that is, the process returns to S305a.
Illustratively, as shown in fig. 6b, the current display interface of the mobile phone includes a popup control containing the text information "welcome to China Mobile" and two sub-controls, "determine" and "cancel". The mobile phone may broadcast the sub-controls and text information in the popup control through the speaker disposed on the display screen, receive through the microphone the first voice information input by the visually impaired user, namely "determine", and match the input "determine" against the "determine" sub-control in the popup control. The mobile phone responds to the first voice information input by the user and closes the popup control by triggering the click event of the "determine" sub-control in the popup control. The mobile phone then broadcasts the controls contained in the target area, which may be implemented with reference to the example in S305a; details are not repeated herein.
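A minimal sketch of matching the recognized first voice information against the popup's sub-controls, as in S305b; SubControl, its onClick handler, and the exact-label matching are illustrative assumptions (a real implementation might use fuzzier matching):

```kotlin
// Illustrative sketch of S305b: match the recognized voice input against the
// popup's sub-controls and trigger the matching sub-control's click event.
data class SubControl(val label: String, val onClick: () -> Unit)

fun handlePopupVoice(recognizedText: String, subControls: List<SubControl>): Boolean {
    // Exact-label matching for brevity; e.g. "determine" matches the
    // "determine" sub-control, whose click event closes the popup.
    val match = subControls.firstOrNull { it.label == recognizedText.trim() }
    match?.onClick?.invoke()
    return match != null
}
```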
Further, after S305a or S305b is performed, S306 is performed. In the above processes from S302 to S306, the user's finger remains at the first position, so that in S306: in the process of broadcasting each control in the target area, after the current first control is broadcast and within a set duration before the second control following the first control is broadcast, if it is detected that the finger of the visually impaired user leaves the first position, the electronic device may determine that the first control just broadcast is selected by the visually impaired user, that is, the first control is the target control. The target control is the last control broadcast by the electronic device before the finger of the visually impaired user leaves the first position. Optionally, if, within a preset duration after the electronic device finishes broadcasting a certain control, it is detected that the finger of the visually impaired user leaves the first position, the electronic device may determine that control as the selected target control. The preset duration may be the interval duration between the broadcasts of two controls, for example 5 s.
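The selection rule in S306 can be sketched as follows, assuming the broadcaster records when each control finished being announced; Announcement and selectedControl are illustrative names, not part of the disclosed embodiments.

```kotlin
// Illustrative sketch of S306: a control counts as selected if the finger
// leaves the first position within the preset window after that control
// finished being announced.
data class Announcement(val label: String, val finishedAtMillis: Long)

fun selectedControl(last: Announcement?, liftAtMillis: Long,
                    windowMillis: Long = 5_000): String? =
    last?.takeIf { liftAtMillis - it.finishedAtMillis in 0..windowMillis }?.label
```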
Optionally, the electronic device may determine that the finger of the visually impaired user has left the first position by determining that the finger is no longer within the spatial range of the first distance threshold from the display screen. Taking the electronic device as a mobile phone with a touch-sensitive display screen, fig. 7a/7b show schematic diagrams of the interactive interface for triggering the selection of a target control: the mobile phone may detect, through a front-facing camera, the action of the visually impaired user lifting a finger upward, or detect, through a distance sensor, that the finger of the visually impaired user is no longer at the first position, and may then determine that the finger of the visually impaired user has left the first position.
For example, as shown in fig. 7a, when the mobile phone detects that the visually impaired user lifts the finger upward within the preset duration after the mobile phone finishes broadcasting "information" and before the next control is broadcast, the mobile phone responds to the lift operation, may simply voice-broadcast "selected information" through the speaker on the display screen, and then triggers the "information" control to open so as to provide services for the visually impaired user.
For another example, as shown in fig. 7b, when the mobile phone detects that the visually impaired user lifts the finger upward within the preset duration after the mobile phone finishes broadcasting "information" and before the next control is broadcast, the mobile phone responds to the lift operation, may voice-broadcast "opened information" through the speaker on the display screen, and displays the interface information of the "information" application on the display screen.
Example two
As shown in fig. 8, an embodiment of the present application provides a flowchart of another interaction method, which is applied to the electronic device, and the electronic device performs the following steps.
S801, the electronic device identifies the visually impaired user.
Alternatively, S801 may be omitted. Specifically, it may be implemented with reference to the implementation manner of S301 in example one, and details are not repeated in this embodiment of the present application.
S802, when the electronic device detects that the finger of the visually impaired user touches the fingerprint identification area of the electronic device for more than a preset duration, the electronic device determines that the visually impaired user indicates broadcasting of the controls. The fingerprint identification area of the electronic device includes a fingerprint sensor and may be disposed on the display screen, the side frame or the back plate of the electronic device. The preset duration may be set according to practical needs, such as 5 s or 6 s, which is not limited in the embodiments of the present application.
Optionally, the electronic device may be configured to determine that the visually impaired user indicates broadcasting of the controls if it detects that a single finger of the visually impaired user remains in the fingerprint identification area throughout the preset duration, or if it detects that a plurality of fingers of the visually impaired user remain in the fingerprint identification area. The embodiments of the present application do not limit this. Taking the electronic device being a mobile phone with a display screen as an example, fig. 9a/9b illustrate an interactive interface for triggering the broadcasting of controls: a single finger of the visually impaired user stays in the fingerprint identification area disposed on the back plate of the mobile phone for more than the preset duration, and the mobile phone may then determine that the visually impaired user indicates the broadcasting of controls.
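A minimal sketch of the long-touch trigger in S802; fingerOnSensor() is a hypothetical probe for whether the fingerprint sensor currently senses a finger, and the polling loop is an assumption (a real implementation would use sensor callbacks).

```kotlin
// Illustrative sketch of S802: the broadcast instruction is recognized once the
// finger has stayed on the fingerprint identification area beyond a preset duration.
fun detectBroadcastInstruction(fingerOnSensor: () -> Boolean,
                               presetMillis: Long = 5_000): Boolean {
    val start = System.currentTimeMillis()
    while (fingerOnSensor()) {
        if (System.currentTimeMillis() - start >= presetMillis) return true
        Thread.sleep(20)  // simple polling for the sketch
    }
    return false          // finger left before the preset duration elapsed
}
```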
Further, the electronic device responds to the user's instruction to broadcast the plurality of controls included in the current display interface, and specifically executes the following steps S803 and S804a, or S803 and S804b.
S803, the electronic device judges whether the current display interface includes a popup control. If not, S804a is executed; if so, S804b is executed.
S804a, the electronic device broadcasts, in a set sequence, the plurality of controls included in the current display interface, with a set interval duration between every two adjacent broadcasts; that is, after the first control is broadcast, the second control is broadcast after the set interval, then the third control after another set interval, and so on. The interval duration may be set according to experience of the time required to respond to the visually impaired user's indication, for example so as to ensure that an indication triggered by the visually impaired user within the interval duration to select the target control can be responded to; the interval duration may also be a fixed value set in advance, which is not limited in the embodiments of the present application.
For example, as shown in fig. 9a, the mobile phone may broadcast all the controls in the display interface shown in fig. 9a through the speaker disposed on the display screen. For example, "performance mode, off", "power saving mode, off", "super power saving, on", "power consumption ranking", "power consumption details", "power percentage display mode" and "more battery settings" may be broadcast cyclically in order from top to bottom. Optionally, the preset interval duration between every two adjacent broadcasts is 5 s. For ease of distinction, as shown in fig. 9a, when the option control after "performance mode" is filled with a white background color, the performance mode is in the off state; when the option control after "super power saving" is filled with a black background color, the super power saving mode is in the on state. Then S805 is executed.
S804b, the electronic device broadcasts the sub-controls and/or text information contained in the popup control, and receives first voice information input by the visually impaired user, where the first voice information is used to indicate closing of the popup control; after the popup control is closed, all the controls in the current display interface of the electronic device are broadcast, that is, the process returns to S804a.
For example, as shown in fig. 9b, the current display interface of the mobile phone includes a popup control containing the text information prompting that "the mobile phone is currently in the super power saving mode and some functions cannot be used", and a sub-control "close". The mobile phone may broadcast the sub-control and text information in the popup control through the speaker disposed on the display screen, receive through the microphone the first voice information input by the visually impaired user, namely "close", and match the input "close" against the "close" sub-control in the popup control. The mobile phone responds to the first voice information input by the user and closes the popup control by triggering the click event of the "close" sub-control in the popup control. The mobile phone then broadcasts the controls other than the popup control in the current display interface, which may be implemented with reference to the example in S804a; details are not repeated herein.
Further, after S804a or S804b is performed, S805 is performed. In the above processes from S802 to S805, the user's finger remains in the fingerprint identification area, so that in S805: in the process of broadcasting each control of the current display interface, after the current first control is broadcast and within a set duration before the second control following the first control is broadcast, if the electronic device detects that the finger of the visually impaired user leaves the fingerprint identification area, the electronic device may determine that the first control just broadcast is selected by the visually impaired user, that is, the first control is the target control. The target control is the last control broadcast by the electronic device before the finger of the visually impaired user leaves the fingerprint identification area. Optionally, if, within a preset duration after the electronic device finishes broadcasting a certain control, it is detected that the finger of the visually impaired user leaves the fingerprint identification area, the electronic device may determine that control as the selected target control. The preset duration may be the interval duration between the broadcasts of two controls, for example 5 s.
Optionally, taking the electronic device as a mobile phone provided with a display screen, fig. 10a/10b show schematic diagrams of the interactive interface for triggering the selection of a target control: when the mobile phone can no longer detect the fingerprint information of the visually impaired user through the fingerprint sensor in the fingerprint identification area, it can determine that the finger of the visually impaired user has left the fingerprint identification area on the back plate of the mobile phone; as shown in fig. 10a/10b, the visually impaired user lifts the finger originally placed on the fingerprint sensor.
For example, as shown in fig. 10a, when the visually impaired user lifts the finger within the preset duration after the mobile phone finishes broadcasting "super power saving, on" and before the "power consumption ranking" control is broadcast, the mobile phone responds to the lift operation, may simply voice-broadcast "selected super power saving, on" through the speaker on the display screen, and triggers the super power saving mode of the electronic device to turn on.
For another example, as shown in fig. 10b, when the visually impaired user lifts the finger within the preset duration after the mobile phone finishes broadcasting "super power saving, on" and before the "power consumption ranking" control is broadcast, the mobile phone responds to the lift operation, may voice-broadcast "selected to turn off the super power saving mode" through the speaker on the display screen, and turns off the super power saving mode; that is, as shown in fig. 10b, the option control after "super power saving" is now filled with a white background color.
Further, after the mobile phone executes the process shown in fig. 10b, if the popup control of the prompt interface shown in fig. 11a/11b appears, the mobile phone may also automatically broadcast the text information in the popup ("determine to turn off the super power saving mode?"). As shown in fig. 11a, if the mobile phone receives the voice information "ok" input by the visually impaired user, the super power saving mode is kept in the off state as shown in fig. 10b. Or, as shown in fig. 11b, if the mobile phone receives the voice information "cancel" input by the visually impaired user, the super power saving mode is changed back to the on state; that is, as shown in fig. 11b, the option control after "super power saving" is filled with a black background color.
Based on example one and example two, and referring to fig. 12, an embodiment of the present application provides a human-computer interaction method, including:
S1201, the electronic device identifies the visually impaired user.
Alternatively, S1201 may be omitted. Specifically, it may be implemented with reference to the implementation manner of S301 in example one, and details are not repeated in this embodiment of the present application.
S1202, the electronic device detects a first operation triggered by the visually impaired user, where the first operation is used to indicate broadcasting of at least one control displayed in the current display interface of the electronic device. The first operation may be the user's finger touching the display screen, or hovering directly above the display screen, as described in example one. The first operation may also be the user's finger touching the fingerprint identification area for more than a preset duration, as described in example two. Of course, other operations are possible; for example, the visually impaired user may directly send a voice instruction to the electronic device, such as "please broadcast the control information on the current interface".
S1203, in response to the first operation, the electronic device broadcasts the at least one control in sequence according to a set sequence.
As described above, broadcasting at least one control means broadcasting the text information displayed by the at least one control in the current display interface. Optionally, the displayed text information may be contained in the control's attributes in advance, a screen-reading (talkback) content label attribute may be added to the control, and a talkback technique may then be used to voice-broadcast the text information displayed by the at least one control in the current display interface; alternatively, an intelligent screen recognition technique may be used to recognize the text information in the current display interface and voice-broadcast it.
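For the content-label approach mentioned above, the following sketch uses the standard Android View APIs contentDescription and announceForAccessibility to label a control and have the accessibility framework read it out; the wiring around it is illustrative, not the disclosed broadcast pipeline.

```kotlin
import android.view.View

// Attach the displayed text as a content label so screen readers such as
// TalkBack can read it out, and ask the framework to announce it immediately.
fun labelControl(control: View, displayedText: String) {
    control.contentDescription = displayedText
    control.announceForAccessibility(displayedText)
}
```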
Optionally, if the electronic device is externally connected with a player, such as a wired earphone, a Bluetooth earphone or an external speaker, the electronic device may broadcast, through the player, the text information displayed by the controls in the current display interface to the visually impaired user. If no player is externally connected, the electronic device may broadcast the text information through its own audio module, such as a loudspeaker.
S1204, after the electronic device broadcasts a first control and within a preset duration before the next control is broadcast, a second operation triggered by the visually impaired user is detected, where the second operation is used to indicate that the first control is selected. The first control is any one of the at least one control. The preset duration may be set according to practical situations, for example 5 s, which is not limited in the embodiments of the present application.
Optionally, the second operation corresponds to the first operation triggered by the user. If the first operation is the user's finger touching a certain position on the display screen, the second operation may be the user's finger leaving that position. If the first operation is the user's finger hovering at a position directly above the display screen, the second operation may be the finger leaving that position, for example by lifting the finger upward. If the first operation is the user's finger touching the fingerprint identification area for a long time, the second operation may be the finger leaving the fingerprint identification area. If the first operation is the visually impaired user saying "please broadcast the control information on the current interface" to the electronic device, the second operation may be the visually impaired user saying "please select" to the electronic device.
S1205, the electronic device responds to the second operation and determines the first control as the target control selected by the visually impaired user.
In one possible implementation manner, in response to the second operation, the electronic device may broadcast the first control selected by the user without triggering it to open; for example, if the first control is an application icon, only the text information displayed by the application icon in the display interface is broadcast, and the application icon is not clicked to jump to the application page linked to it. In another possible implementation manner, in response to the second operation, the electronic device may broadcast the first control selected by the user and trigger it to open; for example, if the first control is the application icon of the recorder, "opened recorder" may be broadcast, the recorder control is opened, and the application page linked to the application icon is displayed on the display screen.
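The two response modes can be sketched as a simple policy switch; OpenPolicy, speak() and openControl() are illustrative assumptions, not part of the disclosed embodiments.

```kotlin
// Illustrative sketch of the two response modes for S1205.
enum class OpenPolicy { ANNOUNCE_ONLY, ANNOUNCE_AND_OPEN }

fun respondToSelection(label: String, policy: OpenPolicy,
                       speak: (String) -> Unit, openControl: (String) -> Unit) {
    when (policy) {
        OpenPolicy.ANNOUNCE_ONLY -> speak("selected $label")  // announce without opening
        OpenPolicy.ANNOUNCE_AND_OPEN -> {
            speak("opened $label")   // e.g. "opened recorder"
            openControl(label)       // trigger the click event and show the linked page
        }
    }
}
```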
In the embodiments of the present application, the controls are broadcast in response to an operation of the visually impaired user, and the target control is selected in response to another operation of the visually impaired user; the user can thus find the control he or she wants to select with only two operations, without exploratory touching, which simplifies the operation flow, improves interaction efficiency, and can improve the user experience.
With reference to the drawings, the following describes an apparatus provided in the embodiments of the present application for implementing the above method embodiments.
As shown in fig. 13, some further embodiments of the present application disclose an electronic device 1300. The electronic device 1300 may include: one or more processors 1301 and one or more memories 1302, where the one or more memories 1302 store one or more computer programs including instructions. Illustratively, one processor 1301 and one memory 1302 are shown in fig. 13. When executed by the one or more processors 1301, the instructions cause the electronic device 1300 to perform the following steps:
detecting a first operation triggered by a visually impaired user, wherein the first operation is used for indicating to broadcast at least one control displayed in a current display interface of the electronic equipment;
responding to the first operation, and broadcasting the at least one control in sequence according to a set sequence; wherein the broadcasting interval between any two adjacent controls is a preset duration;
detecting a second operation triggered by the visually impaired user within a preset time after the first control is broadcasted, wherein the second operation is used for indicating that the first control is selected; wherein the first control is any one of the at least one control;
in response to the second operation, determining the first control as a target control selected by the visually impaired user.
In the embodiments of the present application, the controls are broadcast in response to an operation of the visually impaired user, and the target control is selected in response to another operation of the visually impaired user; the user can thus find the control he or she wants to select with only two operations, without exploratory touching, which simplifies the operation flow, improves interaction efficiency, and can improve the user experience.
In one possible implementation, the electronic device 1300 further includes a display screen 1303; the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the following steps: detecting that a finger of the visually impaired user is at a first position, and that the finger of the visually impaired user remains at the first position before the visually impaired user triggers the second operation; where the first position is any position on the display screen 1303 of the electronic device 1300, or the first position is above the display screen 1303 of the electronic device 1300 and the distance between the first position and the display screen 1303 is smaller than a first distance threshold.
In a possible implementation manner, the at least one control is a control included in a setting area in a current display interface of the electronic device 1300; wherein the setting area is determined according to the first position, or the setting area is determined according to a projection position of the first position on the display 1303 of the electronic device 1300.
In one possible implementation, the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the steps of: detecting that a finger of the visually impaired user has left the first position.
In one possible implementation, the electronic device 1300 is provided with a fingerprint identification area 1304, and when the instructions are executed by the one or more processors 1301, the electronic device 1300 is caused to perform the following steps: it is detected that the finger of the visually impaired user touches the fingerprint identification area 1304 of the electronic device 1300, and the finger of the visually impaired user always touches the fingerprint identification area 1304 before the visually impaired user triggers the second operation.
In one possible implementation, the at least one control is all controls in a currently displayed interface of the electronic device.
In one possible implementation, the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the steps of: it is detected that the visually impaired user's finger has left the fingerprint identification area 1304.
In one possible implementation, the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the following steps before broadcasting the at least one control in sequence according to a set order: if the current display interface includes a popup control, broadcasting the sub-controls and/or text information contained in the popup control; receiving first voice information input by the visually impaired user, where the first voice information is used to indicate closing of the popup control; and closing the popup control in the current display interface according to the first voice information.
In one possible implementation, the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the following steps before detecting a first operation triggered by a visually impaired user: identifying the visually impaired user; entering a barrier-free mode; wherein the electronic device 1300 is capable of responding to the first operation and the second operation while in the barrier-free mode.
In one possible implementation, the instructions, when executed by the one or more processors 1301, cause the electronic device 1300 to perform the steps of: when entering the barrier-free mode, the display brightness of the display screen 1303 of the electronic device 1300 is reduced.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is described from the perspective of an electronic device (e.g., a mobile phone) as an execution subject. In order to implement the functions in the method provided by the embodiment of the present application, the terminal device may include a hardware structure and/or a software module, and implement the functions in the form of a hardware structure, a software module, or a hardware structure and a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
As used in the above embodiments, the terms "when …" or "after …" may be interpreted to mean "if …" or "after …" or "in response to determining …" or "in response to detecting …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be wholly or partially implemented in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the exemplary discussions above are not intended to be exhaustive or to limit the application to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best utilize the application and various embodiments with various modifications as are suited to the particular use contemplated.
It is noted that a portion of this patent application contains material which is subject to copyright protection. The copyright owner reserves all copyright rights whatsoever, except for reproduction of the patent document or the patent disclosure as it appears in the patent office's patent file or records.

Claims (22)

1. A man-machine interaction method, applied to an electronic device, characterized by comprising the following steps:
detecting a first operation triggered by a visually impaired user, wherein the first operation is used for indicating to broadcast at least one control displayed in a current display interface of the electronic equipment;
responding to the first operation, and broadcasting the at least one control in sequence according to a set sequence; wherein the broadcasting interval between any two adjacent controls is a preset duration;
detecting a second operation triggered by the visually impaired user within a preset time after the first control is broadcasted, wherein the second operation is used for indicating that the first control is selected; wherein the first control is any one of the at least one control;
in response to the second operation, determining the first control as a target control selected by the visually impaired user.
2. The method of claim 1, wherein the detecting of the first operation triggered by the visually impaired user comprises:
detecting that a finger of the visually impaired user is at a first position and that the finger of the visually impaired user remains at the first position before the visually impaired user triggers the second operation; the first position is any position on a display screen of the electronic device; or the first position is located above the display screen, and the distance between the first position and the display screen is smaller than a first distance threshold.
3. The method of claim 2, wherein the at least one control is a control included within a set area in a current display interface of the electronic device; wherein the setting area is determined according to the first position, or the setting area is determined according to a projection position of the first position on a display screen of the electronic device.
4. The method of claim 2 or 3, wherein the detecting of the second operation triggered by the visually impaired user comprises:
detecting that a finger of the visually impaired user has moved away from the first position.
5. The method of claim 1, wherein a fingerprint identification area is disposed on the electronic device, and the detecting of the first operation triggered by the visually impaired user comprises:
detecting that a finger of the visually impaired user touches a fingerprint identification area of the electronic device, and before the visually impaired user triggers the second operation, the finger of the visually impaired user always touches the fingerprint identification area.
6. The method of claim 5, wherein the at least one control is all controls in a currently displayed interface of the electronic device.
7. The method of claim 5 or 6, wherein the detecting of the visually impaired user-triggered second operation comprises:
detecting that the finger of the visually impaired user leaves the fingerprint identification area.
8. The method according to any one of claims 1-7, wherein prior to broadcasting the at least one control in sequence in a set order, the method further comprises:
if the current display interface comprises a popup control, broadcasting sub-controls and/or text information contained in the popup control;
receiving first voice information input by the visually impaired user, wherein the first voice information is used for indicating to close the pop-up window control;
and closing the pop-up window control in the current display interface according to the first voice information.
9. The method of any of claims 1-8, wherein prior to detecting the first operation triggered by the visually impaired user, the method further comprises:
identifying the visually impaired user;
entering a barrier-free mode; wherein the electronic device is capable of responding to the first operation and the second operation while in the barrier-free mode.
10. The method of claim 9, wherein the method further comprises:
and when the barrier-free mode is entered, reducing the display brightness of the display screen of the electronic equipment.
11. An electronic device comprising one or more processors; one or more memories; wherein the one or more memories store one or more computer programs, the one or more computer programs comprising instructions, which when executed by the one or more processors, cause the electronic device to perform the steps of:
detecting a first operation triggered by a visually impaired user, wherein the first operation is used for indicating to broadcast at least one control displayed in a current display interface of the electronic equipment;
responding to the first operation, and broadcasting the at least one control in sequence according to a set sequence; wherein the broadcasting interval between any two adjacent controls is a preset duration;
detecting a second operation triggered by the visually impaired user within a preset time after the first control is broadcasted, wherein the second operation is used for indicating that the first control is selected; wherein the first control is any one of the at least one control;
in response to the second operation, determining the first control as a target control selected by the visually impaired user.
12. The electronic device of claim 11, wherein the electronic device further comprises a display screen; the instructions, when executed by the one or more processors, cause the electronic device to perform the steps of:
detecting that a finger of the visually impaired user is at a first position and that the finger of the visually impaired user remains at the first position before the visually impaired user triggers the second operation; the first position is any position on a display screen of the electronic device; or the first position is located above the display screen, and the distance between the first position and the display screen is smaller than a first distance threshold.
13. The electronic device of claim 12, wherein the at least one control is a control included within a set area in a current display interface of the electronic device; wherein the setting area is determined according to the first position, or the setting area is determined according to a projection position of the first position on a display screen of the electronic device.
14. The electronic device of claim 12 or 13, wherein the instructions, when executed by the one or more processors, cause the electronic device to perform the steps of:
detecting that a finger of the visually impaired user has left the first position.
15. The electronic device of claim 11, wherein a fingerprint identification area is provided on the electronic device, and wherein the instructions, when executed by the one or more processors, cause the electronic device to perform the steps of:
detecting that the finger of the visually impaired user touches the fingerprint identification area of the electronic device, and that the finger of the visually impaired user always touches the fingerprint identification area before the visually impaired user triggers the second operation.
16. The electronic device of claim 15, wherein the at least one control is all controls in a currently displayed interface of the electronic device.
17. The electronic device of claim 15 or 16, wherein the instructions, when executed by the one or more processors, cause the electronic device to perform the steps of:
detecting that the finger of the visually impaired user leaves the fingerprint identification area.
18. The electronic device of any one of claims 11-17, wherein the instructions, when executed by the one or more processors, cause the electronic device to perform the following steps prior to broadcasting the at least one control in sequence according to a set order:
if the current display interface comprises a popup control, broadcasting sub-controls and/or text information contained in the popup control;
receiving first voice information input by the visually impaired user, wherein the first voice information is used for indicating to close the pop-up window control;
and closing the pop-up window control in the current display interface according to the first voice information.
19. The electronic device of any one of claims 11-18, wherein the instructions, when executed by the one or more processors, cause the electronic device to, prior to detecting a first operation triggered by a visually impaired user, perform the steps of:
identifying the visually impaired user;
entering a barrier-free mode; wherein the electronic device is capable of responding to the first operation and the second operation while in the barrier-free mode.
20. The electronic device of claim 19, wherein the instructions, when executed by the one or more processors, cause the electronic device to perform the steps of: and when the barrier-free mode is entered, reducing the display brightness of the display screen of the electronic equipment.
21. A computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the human-computer interaction method of any of claims 1-10.
22. A program product, which, when run on a computer, causes the computer to carry out the human-computer interaction method according to any one of claims 1 to 10.
CN202011187246.6A 2020-10-30 2020-10-30 Man-machine interaction method and electronic equipment Pending CN114527920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011187246.6A CN114527920A (en) 2020-10-30 2020-10-30 Man-machine interaction method and electronic equipment

Publications (1)

Publication Number Publication Date
CN114527920A true CN114527920A (en) 2022-05-24

Family

ID=81619398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011187246.6A Pending CN114527920A (en) 2020-10-30 2020-10-30 Man-machine interaction method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114527920A (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102520822A (en) * 2011-12-09 2012-06-27 无锡知谷网络科技有限公司 Touch-recognizable touch screen for mobile phone for the visually impaired and response manner thereof
CN103034118A (en) * 2012-12-23 2013-04-10 黑龙江工程学院 Non-key electron time telling clock utilizing speech recognition technology
CN103116446A (en) * 2013-02-26 2013-05-22 浙江大学 List interactive method for touch portable device
CN103270484A (en) * 2010-10-21 2013-08-28 索尼电脑娱乐公司 Navigation of Electronic Device Menu Without Requiring Visual Contact
CN104461856A (en) * 2013-09-22 2015-03-25 阿里巴巴集团控股有限公司 Performance test method, device and system based on cloud computing platform
CN105824429A (en) * 2016-05-12 2016-08-03 深圳市联谛信息无障碍有限责任公司 Screen reading application instruction input method and device based on infrared sensor
CN105843404A (en) * 2016-05-12 2016-08-10 深圳市联谛信息无障碍有限责任公司 Screen reading application instruction input method and device
CN105915997A (en) * 2016-04-11 2016-08-31 广州酷狗计算机科技有限公司 Control part display method and device
CN106445369A (en) * 2015-08-10 2017-02-22 北京搜狗科技发展有限公司 Input method and device
CN106598391A (en) * 2016-12-13 2017-04-26 珠海格力电器股份有限公司 Blind operation implementation method and device of mobile terminal
US20170344246A1 (en) * 2016-05-31 2017-11-30 Snapchat, Inc. Application control using a gesture based trigger
CN107807783A (en) * 2017-10-26 2018-03-16 珠海市魅族科技有限公司 Terminal operation method and device, computer installation and readable storage medium storing program for executing
CN107831988A (en) * 2017-11-27 2018-03-23 维沃移动通信有限公司 The operating method and mobile terminal of a kind of mobile terminal
US20180131993A1 (en) * 2016-11-07 2018-05-10 Lg Electronics Inc. Display device and operating method thereof
CN109116982A (en) * 2018-07-09 2019-01-01 Oppo广东移动通信有限公司 Information broadcasting method, device and electronic device
CN110060672A (en) * 2019-03-08 2019-07-26 华为技术有限公司 A kind of sound control method and electronic equipment
CN110673783A (en) * 2019-08-29 2020-01-10 华为技术有限公司 Touch control method and electronic equipment
CN111666031A (en) * 2020-06-05 2020-09-15 北京百度网讯科技有限公司 Virtual key control method, device, equipment and storage medium for vehicle-mounted terminal
CN111679746A (en) * 2020-05-22 2020-09-18 北京搜狗科技发展有限公司 Input method and device and electronic equipment
WO2020187157A1 (en) * 2019-03-18 2020-09-24 华为技术有限公司 Control method and electronic device
CN112558825A (en) * 2019-09-26 2021-03-26 华为技术有限公司 Information processing method and electronic equipment
CN114746831A (en) * 2019-11-25 2022-07-12 三星电子株式会社 Electronic device for providing augmented reality service and method of operating the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
边坤 (Bian Kun): "Research on interaction design based on information product interfaces for visually impaired users", 包装工程 (Packaging Engineering), no. 24, pp. 156-159 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114968467A (en) * 2022-07-14 2022-08-30 中国工商银行股份有限公司 Control information generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination