CN109977836B - Information acquisition method and terminal - Google Patents
- Publication number
- CN109977836B (application number CN201910210042.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- user
- terminal
- eyes
- determining whether
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiment of the invention provides an information acquisition method and a terminal, belonging to the technical field of information acquisition. The method includes the following steps: in the process of collecting face information, determining whether the collected face information meets a preset condition; if the collected face information does not meet the preset condition, determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal; and if the user's eyes have the visual impairment, outputting first prompt information that can be acquired by the visually impaired user, so that the user can adjust the shooting posture and perform the corresponding operation according to the first prompt information, and a visually impaired user can quickly and accurately complete a face information collection process that meets the preset condition.
Description
Technical Field
The embodiment of the invention relates to the technical field of information acquisition, in particular to an information acquisition method and a terminal.
Background
At present, there are many scenarios in which a user needs to record a face image by himself or herself using a terminal, for example, for verifying the user's identity by collecting the face image, for face unlocking of the terminal, or for collecting the face image of a candidate when registering for certain examinations. In some scenarios, the user needs to record the face image according to specified requirements, such as nodding or blinking, or keeping the face within a specified range. In some scenarios the user is even required to record a face video for a period of time and to perform different actions, such as nodding first and then blinking. When the user is required to record a face video for a certain period of time, the user sometimes also needs to perform some operation, for example, to trigger the next step. In order to enable the user to record a face image that meets the requirements quickly and accurately, an existing terminal generally displays prompt information during face image recording, for example: if the user's face is recognized to be outside the specified range, the prompt information "please place the face in the frame" is displayed.
However, for users with impaired vision (for example, near-sighted users) who need to remove vision aids or vision-correction tools such as glasses during recording, the existing prompting method may leave the user unable to read the prompt information at a distance that satisfies the recording requirement. If the user moves closer to the terminal to read the prompt, other requirements of the recording may be violated and the prompt information may change again, which increases the difficulty of recording a face image for visually impaired users and prolongs the recording time.
Disclosure of Invention
The embodiment of the invention provides an information acquisition method and a terminal, which aim to solve the problem that the prompt information in the existing face image recording process is difficult for visually impaired users to acquire.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information acquisition method, applied to a terminal, including:
in the process of collecting face information, determining whether the collected face information meets a preset condition;
if the collected face information does not meet the preset condition, determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
if the user's eyes have the visual impairment, outputting first prompt information, wherein the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
In a second aspect, an embodiment of the present invention further provides a terminal, including:
a first judgment module, configured to determine, in the process of collecting face information, whether the collected face information meets a preset condition;
a second judgment module, configured to determine, in the case that the collected face information does not meet the preset condition, whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
a first output module, configured to output first prompt information in the case that the user's eyes have the visual impairment, where the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
In a third aspect, an embodiment of the present invention further provides a terminal, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the steps of the information acquisition method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the information acquisition method are implemented.
In the embodiment of the invention, during face information collection, if the terminal determines that the user's eyes have a visual impairment, and the collected face information does not meet the preset condition, the terminal outputs first prompt information intended for visually impaired users, so that a visually impaired user can obtain the prompt information (namely the first prompt information) and adjust the shooting posture and perform the corresponding operation according to it. In this way, a visually impaired user can quickly and accurately complete a face information collection process that meets the preset condition.
Drawings
Fig. 1 is a schematic flow chart of an information acquisition method according to a first embodiment of the present invention;
FIG. 2 is a schematic view of imaging in an eye with normal vision;
FIG. 3 is a schematic view of eye imaging in a near-sighted state;
fig. 4 is a schematic diagram of a process of acquiring a pupil area according to the collected face information in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of another terminal according to a fourth embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention, are within the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of an information acquisition method according to an embodiment of the present invention, where the method is applied to a terminal, and includes:
Step 11: in the process of collecting face information, the terminal determines whether the collected face information meets a preset condition;
Step 12: if the collected face information does not meet the preset condition, the terminal determines whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
Step 13: if the user's eyes have the visual impairment, the terminal outputs first prompt information that can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
In the embodiment of the invention, during face information collection, if the terminal determines that the user's eyes have a visual impairment (which easily prevents the user from acquiring the prompt information displayed on the terminal), and the collected face information does not meet the preset condition, the terminal outputs first prompt information intended for visually impaired users. A visually impaired user can therefore obtain the prompt information (namely the first prompt information), adjust the shooting posture and perform the corresponding operation according to it, for example, face the camera directly, adjust the relative position between the user and the terminal so that the face in the collected face image is located in the specified area, or click the button confirming that shooting is complete. In this way, a visually impaired user can quickly and accurately complete a face information collection process that meets the preset condition.
The terminal may collect the face information by using a common visible-light image collection device, an infrared image collection device, or another face information collection device, which is not limited herein. In addition, the preset condition may include at least one of the following: the face must be in a designated area, the user must nod during collection, the user must shake the head during collection, the user must blink during collection, and so on; other preset conditions may of course also be included, which are not exhaustively listed here.
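To make the branching of steps 11 to 13 concrete, the following is a minimal sketch of the control flow, assuming hypothetical placeholder functions (capture_frame, meets_preset_condition, has_visual_impairment, show_prompt) that stand in for the terminal's camera capture, condition check and prompt output; it illustrates only the decision structure described above, not the patent's actual implementation.

```python
# Minimal, self-contained sketch of the flow in steps 11-13.
# Every helper below is a hypothetical placeholder, not the patent's code.

import random

def capture_frame() -> dict:
    # Placeholder: pretend to grab one face frame from the camera.
    return {"face_in_area": random.random() > 0.3,
            "user_visually_impaired": random.random() > 0.5}

def meets_preset_condition(frame: dict) -> bool:
    # Placeholder for "face in designated area / nod / shake head / blink".
    return frame["face_in_area"]

def has_visual_impairment(frame: dict) -> bool:
    # Placeholder for the eye-information checks described below.
    return frame["user_visually_impaired"]

def show_prompt(first: bool) -> None:
    if first:
        # first prompt information: large text and/or audio for impaired users
        print("FIRST PROMPT (large font + audio): please face the camera")
    else:
        # second prompt information: ordinary on-screen text
        print("second prompt: please face the camera")

def collect_face_info(max_attempts: int = 10) -> dict | None:
    for _ in range(max_attempts):
        frame = capture_frame()              # step 11: collect face information
        if meets_preset_condition(frame):
            return frame                     # collection meets the preset condition
        # step 12: condition not met -> check whether the user is visually impaired
        show_prompt(first=has_visual_impairment(frame))   # step 13
    return None

if __name__ == "__main__":
    collect_face_info()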
The above steps will be exemplified below.
In some embodiments of the invention, the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal includes:
the terminal acquires the eye information of the user according to the collected face information;
and the terminal determines, according to the eye information, whether the user's eyes have the visual impairment.
In the embodiment of the invention, during face information collection, the terminal may obtain the user's eye information in real time from the collected face information, and then accurately determine from that eye information whether the user's eyes have a visual impairment. If they do, and the collected face information does not meet the preset condition, the terminal outputs first prompt information intended for visually impaired users, so that the user can adjust the shooting posture and/or perform the corresponding operation according to the first prompt information, and the face information collected by the terminal then meets the preset condition.
Optionally, the step of acquiring the eye information of the user according to the collected face information includes:
the terminal acquires the eye information when the user wears glasses and the eye information when the user does not wear glasses;
the step of determining, according to the eye information, whether the user's eyes have the visual impairment includes:
the terminal determines whether the user's eyes have the visual impairment according to the eye information when the user wears glasses and the eye information when the user does not wear glasses.
Further optionally, the eye information includes:
size information of the user's pupil, which may specifically be an area or a diameter;
and/or,
information on the maximum distance between the upper and lower eyelids or on the area enclosed by the upper and lower eyelids.
As shown in FIG. 2, in the human eye the cornea has positive spherical aberration and the lens has negative spherical aberration; the two neutralize each other, so that light forms a sharp focus and a clear image on the retina. The positive spherical aberration of the cornea remains essentially unchanged throughout a person's life, but in a near-sighted state the lens cannot adjust completely freely. As shown in FIG. 3, the positive spherical aberration of the cornea and the negative spherical aberration of the lens then cannot cancel each other, and distant objects appear blurred. The lens is regulated through the pupil, and the size of the pupil influences the regulating function of the lens: when the pupil enlarges, the spherical aberration of the lens decreases and the negative spherical aberration increases; when the pupil contracts, the spherical aberration of the lens increases and the negative spherical aberration decreases. Within certain limits, therefore, the pupil contracts after the glasses are taken off in order to see objects more clearly. Accordingly, when the face information is collected and the user is detected to be wearing glasses, as shown in FIG. 4, a polygon approximating the front view of the pupil is formed by selecting enough points on the pupil edge and connecting adjacent points, and the actual area of the polygon is then calculated from the spatial depth information (i.e., the depth of field) as the actual area of the pupil (denoted S1). Then, if the user is detected to have removed the glasses, the actual pupil area after removing the glasses (denoted S2) is calculated by the same method; finally the difference between S1 and S2 is calculated, and if the difference exceeds a certain threshold, the user's eyes are determined to be near-sighted. To improve accuracy, the actual pupil area of the user wearing glasses may be obtained multiple times and a weighted average taken as S1; similarly, the actual pupil area of the user with the glasses removed may be obtained multiple times and a weighted average taken as S2.
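The polygon-plus-depth computation above can be sketched as follows. This is a minimal sketch assuming a pinhole-camera scaling from pixels to millimetres, with the shoelace formula used for the polygon area; the scaling model and the example threshold are illustrative assumptions, not values taken from the patent.

```python
# Approximate the pupil front view by a polygon of edge points, scale pixel
# area to real-world area using the depth of field, and compare the
# with-glasses area S1 against the without-glasses area S2.

from typing import Sequence, Tuple

def polygon_area(points: Sequence[Tuple[float, float]]) -> float:
    """Shoelace formula: area of a simple polygon given ordered vertices (px^2)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def actual_pupil_area_mm2(edge_points_px: Sequence[Tuple[float, float]],
                          depth_mm: float, focal_px: float) -> float:
    """Convert pixel area to mm^2, assuming a pinhole camera where one pixel
    spans (depth / focal_length) mm at the measured depth of field."""
    mm_per_px = depth_mm / focal_px
    return polygon_area(edge_points_px) * mm_per_px ** 2

def weighted_average(samples: Sequence[float], weights: Sequence[float]) -> float:
    return sum(s * w for s, w in zip(samples, weights)) / sum(weights)

def pupil_difference_indicates_myopia(s1_mm2: float, s2_mm2: float,
                                      threshold_mm2: float) -> bool:
    """S1: pupil area with glasses; S2: pupil area after the glasses are removed.
    Per the description the pupil contracts after removal, so a difference
    S1 - S2 above the threshold is taken to indicate myopia."""
    return (s1_mm2 - s2_mm2) > threshold_mm2

# usage with made-up measurements: weighted averages for S1 and S2
s1 = weighted_average([12.5, 12.9, 12.7], [1.0, 1.0, 2.0])
s2 = weighted_average([9.8, 10.1], [1.0, 1.0])
print(pupil_difference_indicates_myopia(s1, s2, threshold_mm2=1.5))  # True
```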
In addition, when near-sighted people who normally wear glasses take them off, they tend to squint in order to see blurred surroundings more clearly; this reduces the light entering the eye and blocks out some useless rays. Therefore, whether the user is near-sighted can also be determined from the difference, between the with-glasses and without-glasses states, of the distance between the upper and lower eyelids or of the area they enclose. Specifically, with reference to the pupil-area measurement above, the eye shape with glasses and the eye shape without glasses are each described by selecting enough points on the eyelid edges and connecting adjacent points; the maximum distance between the upper and lower eyelids, or the enclosed area, is then calculated from the spatial depth information; finally the difference between the with-glasses value and the without-glasses value is calculated, and when the difference is greater than a certain threshold, the user is determined to be near-sighted. For example, if the difference between the maximum upper-to-lower eyelid distance with glasses and after the glasses are removed is 1 mm or more, the user's eyes are determined to be myopic.
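A minimal sketch of this eyelid-based check follows. The landmark format (index-aligned upper/lower point lists) and the pixel-to-millimetre conversion are assumptions for illustration; only the 1 mm example threshold comes from the text above.

```python
# Compare the maximum upper-to-lower eyelid distance, in millimetres,
# with and without glasses.

from typing import Sequence, Tuple

def max_eyelid_gap_mm(upper_px: Sequence[Tuple[float, float]],
                      lower_px: Sequence[Tuple[float, float]],
                      depth_mm: float, focal_px: float) -> float:
    """Largest vertical separation between aligned upper/lower eyelid points."""
    mm_per_px = depth_mm / focal_px
    return max(abs(u[1] - l[1]) for u, l in zip(upper_px, lower_px)) * mm_per_px

def squint_indicates_myopia(gap_with_glasses_mm: float,
                            gap_without_glasses_mm: float,
                            threshold_mm: float = 1.0) -> bool:
    """Squinting after removing glasses narrows the eye opening, so a drop of
    1 mm or more in the maximum gap is taken to indicate myopia."""
    return (gap_with_glasses_mm - gap_without_glasses_mm) >= threshold_mm

# usage with made-up landmarks at 300 mm depth and a 1000 px focal length
upper = [(10.0, 40.0), (20.0, 36.0), (30.0, 38.0)]
lower = [(10.0, 70.0), (20.0, 72.0), (30.0, 69.0)]
gap_with = max_eyelid_gap_mm(upper, lower, depth_mm=300.0, focal_px=1000.0)
print(squint_indicates_myopia(gap_with, gap_without_glasses_mm=gap_with - 1.2))  # True
```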
In still other embodiments of the present invention, the step of determining whether the user's eyes have the visual impairment includes:
the terminal determines, according to the collected face information, whether the user first wore glasses and then took them off;
if the user first wore glasses and then took them off, the terminal determines that the user's eyes have the visual impairment.
That is, if it is detected during face image recording that the user was wearing glasses at first and then took them off, the user's eyes are determined to have the visual impairment, and when prompt information needs to be output, the first prompt information for visually impaired users should be output. This way of determining whether the user's eyes have a visual impairment is simple, has low computational complexity, and can reduce the computational burden on the processor.
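This "wore glasses, then took them off" check can be sketched as a tiny state tracker fed one boolean per frame. The per-frame glasses detector itself is assumed to exist elsewhere (for example, a classifier on the eye region) and is not part of this sketch.

```python
class GlassesRemovalDetector:
    def __init__(self) -> None:
        self.seen_with_glasses = False
        self.removed_after_wearing = False

    def update(self, glasses_in_frame: bool) -> None:
        if glasses_in_frame:
            self.seen_with_glasses = True
        elif self.seen_with_glasses:
            # glasses were present earlier in the recording and are gone now
            self.removed_after_wearing = True

    def user_is_visually_impaired(self) -> bool:
        return self.removed_after_wearing

# usage: feed per-frame detections during recording
detector = GlassesRemovalDetector()
for frame_has_glasses in (True, True, False):
    detector.update(frame_has_glasses)
print(detector.user_is_visually_impaired())  # True
```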
In other embodiments of the present invention, the step of determining whether the user's eyes have the visual impairment includes:
the terminal acquires setting information of the user;
and the terminal determines, according to the setting information, whether the user's eyes have the visual impairment.
That is, the terminal may provide a setting item with which the user selects whether a visual impairment (e.g., myopia) exists; if the user selects that a visual impairment exists, the terminal outputs the first prompt information for visually impaired users whenever prompt information needs to be output. Specifically, the user may set whether a visual impairment exists before face information collection starts, or during the collection process. This way of judging whether the user has a visual impairment is even simpler, further reduces the computational complexity, and is not limited by whether the user wears glasses.
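A minimal sketch of the settings-based check: a single user preference read before or during collection. The settings store and key name are assumptions; a real terminal would expose this as a settings item in its user interface.

```python
def user_declared_visual_impairment(settings: dict) -> bool:
    # Defaults to False when the user has not set the item.
    return bool(settings.get("visual_impairment", False))

print(user_declared_visual_impairment({"visual_impairment": True}))   # True
print(user_declared_visual_impairment({}))                            # False
```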
Optionally, the first prompt information includes: text information; the step of outputting the first prompt information includes:
displaying the text information on the display screen of the terminal with a specified font size;
and/or,
the first prompt information includes: audio information; the step of outputting the first prompt information includes:
broadcasting the audio information.
When the first prompt information includes text information, the font size of the text information displayed on the display screen of the terminal is relatively large.
Optionally, after the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal, the method further includes: if the user's eyes have no visual impairment, outputting second prompt information, where the second prompt information cannot be acquired by a visually impaired user but can be acquired by a user without visual impairment. The second prompt information also includes text information, but its font size is smaller than that of the text information of the first prompt information. For example, the side length (or area) of the font of the text information of the first prompt information is 1.5 times or 2 times that of the second prompt information.
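The two prompt styles can be sketched as follows. The 1.5x font-size ratio is the example given in the text; the base font size, the display call and the audio call are placeholders rather than a real terminal API.

```python
BASE_FONT_PT = 16          # assumed size of the second (ordinary) prompt text
IMPAIRED_SCALE = 1.5       # first prompt text is 1.5x (or 2x) the second

def show_text(message: str, font_pt: float) -> None:
    print(f"[screen, {font_pt:.0f}pt] {message}")   # placeholder for rendering

def play_audio(message: str) -> None:
    print(f"[audio] {message}")                     # placeholder for broadcast/TTS

def output_prompt(message: str, visually_impaired: bool) -> None:
    if visually_impaired:
        # first prompt information: enlarged text and/or an audio broadcast
        show_text(message, BASE_FONT_PT * IMPAIRED_SCALE)
        play_audio(message)
    else:
        # second prompt information: ordinary on-screen text only
        show_text(message, BASE_FONT_PT)

output_prompt("Please place your face in the frame", visually_impaired=True)
```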
The second embodiment of the invention provides an information acquisition method, which comprises the following steps:
the first step is as follows: in the process of acquiring the face information, the terminal determines whether the acquired face information meets preset conditions, for example, whether the face is in a preset range, and whether actions such as nodding, shaking, blinking and the like exist.
The second step is that: and under the condition that the collected face information does not accord with the preset condition, the terminal acquires the eye information of the user according to the collected face information.
The third step: the terminal determines whether the user wears the glasses before and does not wear the glasses (takes off the glasses) according to the obtained eye information of the user.
The fourth step: when the user wears glasses first and then does not wear glasses, the areas of the pupils (the diameter of the pupils, the maximum distance between the upper and lower eyelids of the user, or the area enclosed by the upper and lower eyelids) of the user when wearing glasses and when not wearing glasses are obtained respectively.
The fifth step: if the area of the pupil of the user is obtained when the user wears the glasses and when the user does not wear the glasses, the difference value between the area of the pupil of the user when the user wears the glasses and the area of the pupil of the user when the user does not wear the glasses is obtained (if the diameter of the pupil, the maximum distance between the upper eyelid and the lower eyelid of the user or the area enclosed by the upper eyelid and the lower eyelid is obtained, the difference value between the value when the user wears the glasses and the value when the user does not wear the glasses is also calculated).
And a sixth step: and if the difference calculated in the fifth step is larger than a preset threshold, determining that the eyes of the user have visual disorder, wherein the value of the preset threshold can be obtained according to big data statistics. Of course, if the pupil diameter of the user, the maximum distance between the upper and lower eyelids of the user, or the enclosed area of the upper and lower eyelids is obtained, the preset threshold value needs to be adjusted accordingly.
The seventh step: under the condition that the vision disorder of the eyes of the user is determined, outputting first prompt information; and in the case that the user is determined that the vision disorder does not exist in the eyes of the user, outputting second prompt information. The second prompt message only comprises character messages, the first prompt message not only comprises the character messages but also comprises audio information, and the character size of the character messages of the first prompt message is larger than that of the character messages of the second prompt message.
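The following is a consolidated sketch of the decision logic in steps 3 to 7, once the preset condition has already failed. The two callables are hypothetical stand-ins for the glasses-removal check and the pupil/eyelid measurements sketched earlier, and the threshold is an assumed placeholder, not a statistic from the patent.

```python
from typing import Callable, Sequence

def decide_prompt(frames: Sequence,
                  wore_then_removed_glasses: Callable[[Sequence], bool],
                  measure_metric: Callable[[Sequence, bool], float],
                  threshold: float) -> str:
    """Return "first" or "second" depending on the detected visual impairment."""
    if not wore_then_removed_glasses(frames):            # step 3
        return "second"                                  # no evidence of impairment
    with_glasses = measure_metric(frames, True)          # step 4 (with glasses)
    without_glasses = measure_metric(frames, False)      #        (without glasses)
    difference = with_glasses - without_glasses          # step 5
    if difference > threshold:                           # step 6
        return "first"                                   # step 7: impaired-user prompt
    return "second"

# usage with dummy measurements (e.g. pupil areas in mm^2)
print(decide_prompt(
    frames=[],
    wore_then_removed_glasses=lambda f: True,
    measure_metric=lambda f, glasses: 12.0 if glasses else 9.0,
    threshold=2.0))   # -> "first"
```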
Referring to fig. 5, fig. 5 is a schematic structural diagram of a terminal according to a third embodiment of the present invention, where the terminal includes:
a first judgment module 51, configured to determine, in the process of collecting face information, whether the collected face information meets a preset condition;
a second judgment module 52, configured to determine, in the case that the collected face information does not meet the preset condition, whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
a first output module 53, configured to output first prompt information in the case that the user's eyes have the visual impairment, where the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
In the embodiment of the invention, during face information collection, if it is determined that the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal, and the collected face information does not meet the preset condition, the terminal outputs first prompt information intended for visually impaired users, so that a visually impaired user can obtain the prompt information (namely the first prompt information) and adjust the shooting posture and perform the corresponding operation accordingly. In this way, a visually impaired user can quickly and accurately complete a face information collection process that meets the preset condition.
Optionally, the second judgment module 52 includes:
an acquisition unit, configured to acquire the eye information of the user according to the collected face information;
and a judgment unit, configured to determine, according to the eye information, whether the user's eyes have the visual impairment.
Further optionally, the acquisition unit includes:
an acquisition subunit, configured to acquire the eye information when the user wears glasses and the eye information when the user does not wear glasses;
and the judgment unit includes:
a judgment subunit, configured to determine whether the user's eyes have the visual impairment according to the eye information when the user wears glasses and the eye information when the user does not wear glasses.
Optionally, the eye information includes:
size information of the user's pupil; and/or,
information on a maximum distance between the upper and lower eyelids or information on an area enclosed by the upper and lower eyelids.
Optionally, the first prompt information includes: text information; and the first output module 53 includes:
a display unit, configured to display the text information on the display screen of the terminal with a specified font size;
and/or,
the first prompt information includes: audio information; and the first output module 53 includes:
a broadcasting unit, configured to broadcast the audio information.
Optionally, the terminal further includes:
a second output module, configured to output second prompt information in the case that the user's eyes have no visual impairment.
The terminal provided by the embodiment of the present invention can implement each process in the first method embodiment, and can achieve the same technical effect, and for avoiding repetition, details are not described here again.
Referring to fig. 6, fig. 6 is a schematic structural diagram of another terminal according to a fourth embodiment of the present invention, where the terminal 60 includes a processor 61, a memory 62, and a computer program stored in the memory 62 and operable on the processor 61, and when executed by the processor 61, the computer program implements the following steps:
in the process of collecting face information, determining whether the collected face information meets a preset condition;
if the collected face information does not meet the preset condition, determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
if the user's eyes have the visual impairment, outputting first prompt information, wherein the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal includes:
acquiring eye information of the user according to the collected face information;
and determining, according to the eye information, whether the user's eyes have the visual impairment.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
the step of acquiring the eye information of the user according to the collected face information includes:
acquiring eye information when the user wears glasses and eye information when the user does not wear glasses;
the step of determining, according to the eye information, whether the user's eyes have the visual impairment includes:
determining whether the user's eyes have the visual impairment according to the eye information when the user wears glasses and the eye information when the user does not wear glasses.
Optionally, the eye information includes:
size information of the user's pupil; and/or,
information on the maximum distance between the upper and lower eyelids or on the area enclosed by the upper and lower eyelids.
Optionally, the first prompt information includes: text information; the computer program, when executed by the processor 61, may further implement the following steps:
the step of outputting the first prompt information includes:
displaying the text information on the display screen of the terminal with a specified font size;
and/or,
the first prompt information includes: audio information; the computer program, when executed by the processor 61, may further implement the following steps:
the step of outputting the first prompt information includes:
broadcasting the audio information.
Optionally, the computer program when executed by the processor 61 may further implement the steps of:
after the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal, the method further includes:
if the user's eyes have no visual impairment, outputting second prompt information.
The terminal can implement each process in the first method embodiment, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
Fig. 7 is a schematic diagram of a hardware structure of a terminal for implementing various embodiments of the present invention, where the terminal 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the terminal configuration shown in fig. 7 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 710 is configured to: determine, in the process of collecting face information, whether the collected face information meets a preset condition; if the collected face information does not meet the preset condition, determine whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal; and if the user's eyes have the visual impairment, output first prompt information that can be acquired by the visually impaired user, so that the collected face information meets the preset condition.
In the embodiment of the invention, during face information collection, if it is determined that the user's eyes have a visual impairment, and the collected face information does not meet the preset condition, first prompt information intended for visually impaired users is output, so that a visually impaired user can obtain the prompt information (namely the first prompt information) and adjust the shooting posture and perform the corresponding operation accordingly. In this way, a visually impaired user can quickly and accurately complete a face information collection process that meets the preset condition.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 710 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701, and output accordingly.
The terminal 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the terminal 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations by the user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may include other input devices 7072. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the terminal, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the terminal, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 700 or may be used to transmit data between the terminal 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby integrally monitoring the terminal. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The terminal 700 may also include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the terminal 700 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned information acquisition method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. An information acquisition method, applied to a terminal, characterized by comprising the following steps:
in the process of collecting face information, determining whether the collected face information meets a preset condition;
if the collected face information does not meet the preset condition, determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
if the user's eyes have the visual impairment, outputting first prompt information, wherein the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition;
the determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal comprises:
acquiring setting information of the user;
determining, according to the setting information, whether the user's eyes have the visual impairment;
the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal further comprises:
acquiring eye information of the user according to the collected face information;
determining, according to the eye information, whether the user's eyes have the visual impairment;
the step of acquiring the eye information of the user according to the collected face information comprises:
acquiring eye information when the user wears glasses and eye information when the user does not wear glasses;
the step of determining, according to the eye information, whether the user's eyes have the visual impairment comprises:
determining whether the user's eyes have the visual impairment according to the eye information when the user wears glasses and the eye information when the user does not wear glasses;
the eye information comprises:
size information of the user's pupil; and/or,
information on a maximum distance between the upper and lower eyelids or information on an area enclosed by the upper and lower eyelids.
2. The information acquisition method according to claim 1, wherein
the first prompt information comprises: text information; and the step of outputting the first prompt information comprises:
displaying the text information on a display screen of the terminal with a specified font size;
and/or,
the first prompt information comprises: audio information; and the step of outputting the first prompt information comprises:
broadcasting the audio information.
3. The information acquisition method according to claim 1, wherein after the step of determining whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal, the method further comprises:
if the user's eyes have no visual impairment, outputting second prompt information.
4. A terminal, comprising:
a first judgment module, configured to determine, in the process of collecting face information, whether the collected face information meets a preset condition;
a second judgment module, configured to determine, in the case that the collected face information does not meet the preset condition, whether the user's eyes have a visual impairment that prevents the user from acquiring at least part of the information displayed on the terminal;
a first output module, configured to output first prompt information in the case that the user's eyes have the visual impairment, wherein the first prompt information can be acquired by the visually impaired user, so that the collected face information meets the preset condition;
wherein the second judgment module comprises:
a unit configured to acquire setting information of the user;
a unit configured to determine, according to the setting information, whether the user's eyes have the visual impairment;
the second judgment module further comprises:
an acquisition unit, configured to acquire eye information of the user according to the collected face information;
a judgment unit, configured to determine, according to the eye information, whether the user's eyes have the visual impairment;
the acquisition unit comprises:
an acquisition subunit, configured to acquire eye information when the user wears glasses and eye information when the user does not wear glasses;
the judgment unit comprises:
a judgment subunit, configured to determine whether the user's eyes have the visual impairment according to the eye information when the user wears glasses and the eye information when the user does not wear glasses;
the eye information comprises:
size information of the user's pupil; and/or,
information on a maximum distance between the upper and lower eyelids or information on an area enclosed by the upper and lower eyelids.
5. The terminal of claim 4,
the first prompt information comprises: text information; and the first output module comprises:
a display unit, configured to display the text information on a display screen of the terminal with a specified font size;
and/or,
the first prompt information comprises: audio information; and the first output module comprises:
a broadcasting unit, configured to broadcast the audio information.
6. The terminal of claim 4, further comprising:
a second output module, configured to output second prompt information in the case that the user's eyes have no visual impairment.
7. A terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the information acquisition method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910210042.0A CN109977836B (en) | 2019-03-19 | 2019-03-19 | Information acquisition method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910210042.0A CN109977836B (en) | 2019-03-19 | 2019-03-19 | Information acquisition method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109977836A CN109977836A (en) | 2019-07-05 |
CN109977836B (en) | 2022-04-15
Family
ID=67079503
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910210042.0A Active CN109977836B (en) | 2019-03-19 | 2019-03-19 | Information acquisition method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109977836B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11669295B2 (en) * | 2020-06-18 | 2023-06-06 | Sony Group Corporation | Multiple output control based on user input |
CN112149593A (en) * | 2020-09-28 | 2020-12-29 | 深圳前海微众银行股份有限公司 | Face brushing correction method, device, equipment and computer readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704086A (en) * | 2017-10-20 | 2018-02-16 | 维沃移动通信有限公司 | A kind of mobile terminal operating method and mobile terminal |
CN108509037A (en) * | 2018-03-26 | 2018-09-07 | 维沃移动通信有限公司 | A kind of method for information display and mobile terminal |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102325211A (en) * | 2011-05-23 | 2012-01-18 | 中兴通讯股份有限公司 | Eyesight compensation method and device and mobile terminal |
CN104394252B (en) * | 2014-12-08 | 2018-04-06 | 上海斐讯数据通信技术有限公司 | A kind of method and its mobile terminal for detecting the eyes number of degrees |
CN104699250B (en) * | 2015-03-31 | 2018-10-19 | 小米科技有限责任公司 | Display control method and device, electronic equipment |
CN107273071A (en) * | 2016-04-06 | 2017-10-20 | 富泰华工业(深圳)有限公司 | Electronic installation, screen adjustment system and method |
CN107424584A (en) * | 2016-05-24 | 2017-12-01 | 富泰华工业(深圳)有限公司 | Eyes protecting system and method |
CN106214117A (en) * | 2016-08-19 | 2016-12-14 | 乐视控股(北京)有限公司 | A kind of glasses and the detection method of glasses |
CN109124565B (en) * | 2018-03-01 | 2020-04-24 | 卓建 | Eye state detection method |
CN108509863A (en) * | 2018-03-09 | 2018-09-07 | 北京小米移动软件有限公司 | Information cuing method, device and electronic equipment |
- 2019-03-19: CN CN201910210042.0A — patent CN109977836B (en), status: Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704086A (en) * | 2017-10-20 | 2018-02-16 | 维沃移动通信有限公司 | A kind of mobile terminal operating method and mobile terminal |
CN108509037A (en) * | 2018-03-26 | 2018-09-07 | 维沃移动通信有限公司 | A kind of method for information display and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN109977836A (en) | 2019-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110874129B (en) | Display system | |
CN109381165B (en) | Skin detection method and mobile terminal | |
CN108491775B (en) | Image correction method and mobile terminal | |
CN108076290B (en) | Image processing method and mobile terminal | |
CN108491123B (en) | Method for adjusting application program icon and mobile terminal | |
CN110007758B (en) | Terminal control method and terminal | |
CN110969981A (en) | Screen display parameter adjusting method and electronic equipment | |
CN108962187B (en) | Screen brightness adjusting method and mobile terminal | |
CN109683703A (en) | A kind of display control method, terminal and computer readable storage medium | |
CN108650408B (en) | Screen unlocking method and mobile terminal | |
CN109782968B (en) | Interface adjusting method and terminal equipment | |
CN108012026B (en) | Eyesight protection method and mobile terminal | |
CN111031234B (en) | Image processing method and electronic equipment | |
CN109819166B (en) | Image processing method and electronic equipment | |
CN109977836B (en) | Information acquisition method and terminal | |
CN110225196B (en) | Terminal control method and terminal equipment | |
CN109871253A (en) | A kind of display methods and terminal | |
CN111008929B (en) | Image correction method and electronic equipment | |
CN110427149B (en) | Terminal operation method and terminal | |
CN109688325B (en) | Image display method and terminal equipment | |
CN112733673B (en) | Content display method and device, electronic equipment and readable storage medium | |
CN109164908B (en) | Interface control method and mobile terminal | |
CN109639981B (en) | Image shooting method and mobile terminal | |
JP2018134274A (en) | Information processing method, information processing device, and program | |
CN111354460B (en) | Information output method, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||