CN110866465A - Control method of electronic equipment and electronic equipment - Google Patents

Control method of electronic equipment and electronic equipment

Info

Publication number
CN110866465A
CN110866465A (application CN201911040338.9A)
Authority
CN
China
Prior art keywords
target
scene mode
user
electronic device
characteristic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911040338.9A
Other languages
Chinese (zh)
Inventor
陈文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911040338.9A
Publication of CN110866465A
Legal status: Pending

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention provides a control method of an electronic device and an electronic device, applied to the field of communications technology, to solve the problem that manually switching scene modes involves complicated operation steps and takes a long time. The method comprises the following steps: the electronic device acquires a face image of a first user and target feature information of the face image, wherein the target feature information is used to characterize a target local facial feature pose of the first user; when the face image matches a predetermined face image and the target feature information matches predetermined feature information, the electronic device sets its scene mode to a target scene mode; wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.

Description

Control method of electronic equipment and electronic equipment
Technical Field
The embodiments of the present invention relate to the field of communications technology, and in particular to a control method of an electronic device and an electronic device.
Background
With the development of electronic device (e.g., mobile phone) technology, electronic devices have become indispensable tools for work, life and study. Because electronic devices are used in a wide variety of environments, they provide scene modes suited to different environments, such as a ring mode, a silent mode, an eye protection mode, a child mode, a guest mode and a driving mode.
However, conventional scene mode switching requires the user to switch manually, so the whole switching process involves complicated steps and takes a long time.
Disclosure of Invention
The control method of an electronic device and the electronic device provided by the embodiments of the invention solve the problems of complicated operation steps and long time consumption when a user manually switches the scene mode.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, a method for controlling an electronic device provided in an embodiment of the present invention includes: the electronic device acquires a face image of a first user and target feature information of the face image, wherein the target feature information is used to characterize a target local facial feature pose of the first user; when the face image matches a predetermined face image and the target feature information matches predetermined feature information, the electronic device sets its scene mode to a target scene mode; wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
In a second aspect, an embodiment of the present invention further provides an electronic device, including: an acquiring module, configured to acquire a face image of a first user and target feature information of the face image, wherein the target feature information is used to characterize a target local facial feature pose of the first user; and a setting module, configured to set the scene mode of the electronic device to a target scene mode when the face image acquired by the acquiring module matches a predetermined face image and the target feature information matches predetermined feature information; wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the control method of the electronic device according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the control method for an electronic device according to the first aspect.
In the embodiments of the invention, the electronic device acquires a face image of the first user and target feature information of the face image, where the target feature information characterizes the target local facial feature pose of the first user, and at least one predetermined local facial feature pose corresponds to one scene mode. Therefore, when the face image matches the predetermined face image and the target feature information matches the predetermined feature information, the electronic device can set its scene mode to the target scene mode based on the user's facial feature pose. This simplifies the operation steps of switching scene modes and avoids the complicated steps and long time consumption of conventional scene mode switching.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a control method of an electronic device according to an embodiment of the present invention;
fig. 3 is a first schematic diagram of a control interface of an electronic device according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of a control interface of an electronic device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that "/" in this context means "or", for example, A/B may mean A or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone.
It should be noted that "a plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as more preferred or advantageous than other embodiments or designs; rather, these words are intended to present related concepts in a concrete fashion.
It should be noted that, for the convenience of clearly describing the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, words such as "first" and "second" are used to distinguish the same items or similar items with substantially the same functions or actions, and those skilled in the art can understand that the words such as "first" and "second" do not limit the quantity and execution order. For example, the first feature information and the second feature information are for distinguishing different feature information, not for describing a specific order of the feature information.
The execution subject of the control method of the electronic device provided in the embodiments of the present invention may be the electronic device itself, or a functional module and/or functional entity in the electronic device capable of implementing the method; this may be determined according to actual use requirements, and the embodiments of the present invention are not limited thereto.
For example, the electronic device is taken as a terminal device, and the terminal device may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The electronic device in the embodiments of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited.
The following describes a software environment to which the control method for the electronic device according to the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of applications; a developer can develop applications based on the application framework layer while complying with its development principles.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking the android operating system as an example, in the embodiments of the present invention, a developer may develop, based on the system architecture of the android operating system shown in fig. 1, a software program implementing the control method of the electronic device provided by the embodiments of the present invention, so that the control method can run on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the control method by running the software program in the android operating system.
The following describes the control method of an electronic device according to an embodiment of the present invention with reference to the flowchart shown in fig. 2. The method includes steps 201 and 202:
step 201: the electronic equipment acquires a face image of a first user and target feature information of the face image.
In an embodiment of the present invention, the target feature information is used to characterize a target local facial feature pose of the first user. Illustratively, the facial features refer to the eyes, ears, nose, eyebrows and mouth; the target local facial feature refers to at least one of these organs of the first user.
Optionally, in an embodiment of the present invention, the target feature information includes N pieces of first feature information, where any piece of first feature information is used to characterize a pose of one local facial feature of the first user, and N is a positive integer.
Optionally, in the embodiments of the present invention, taking the target local facial feature as the eyes as an example, the eye pose may be: eyes open, eyes closed, eyes gazing at the electronic device, eyes not gazing at the electronic device, eyes occluded, or eyes not occluded. Taking the target local facial feature as the mouth as an example, the mouth pose may be: mouth open, mouth closed, mouth occluded, or mouth not occluded. Taking the target local facial feature as the ears as an example, the ear pose may be: ears occluded or ears not occluded. Taking the target local facial feature as the nose as an example, the nose pose may be: nose occluded or nose not occluded. Taking the target local facial feature as the eyebrows as an example, the eyebrow pose may be: eyebrows raised or eyebrows not raised.
optionally, in the embodiment of the present invention, when the electronic device executes step 201, the electronic device may acquire a to-be-recognized face image through the camera, perform face matching on the to-be-recognized face image after the to-be-recognized face image is acquired, and further acquire target feature information of the to-be-recognized face image by the electronic device under the condition that the to-be-recognized face image is matched with a predetermined face image, or directly acquire the target feature information of the face image after the to-be-recognized face image is acquired through the camera.
Step 202: when the face image matches the predetermined face image and the target feature information matches the predetermined feature information, the electronic device sets its scene mode to the target scene mode.
In an embodiment of the present invention, the predetermined feature information is used to characterize a predetermined pose of a predetermined local facial feature of the first user.
In the embodiments of the present invention, the target scene mode is the scene mode corresponding to the predetermined local facial feature pose of the first user. The target scene mode includes at least one scene mode.
Illustratively, the scene mode may include at least one of: a ring mode, a silent mode, a vibration mode, an eye protection mode, a child mode, a game mode, a guest mode, a driving mode, a flight mode, and the like. In one example, the eye protection mode can be implemented in two ways: reducing the screen brightness of the electronic device, or setting the theme of the electronic device to a dark theme. In another example, the scene mode may be one for which a dedicated setting control exists in the electronic device, such as the flight mode, or one corresponding to a dedicated APP on the electronic device, such as the child mode.
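As a hedged illustration of the two eye-protection variants named above, one plausible Android realization could look like the sketch below; the brightness value 0.3f is an arbitrary assumption, and the patent does not prescribe this code.

```kotlin
import android.app.Activity
import androidx.appcompat.app.AppCompatDelegate

// Hedged sketch: lowers window brightness or switches to a dark theme.
// The brightness value 0.3f is an arbitrary illustrative choice.
fun enableEyeProtection(activity: Activity, useDarkTheme: Boolean) {
    if (useDarkTheme) {
        // Variant 2: set the app theme to a dark (night) theme.
        AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES)
    } else {
        // Variant 1: reduce the brightness of the current window (0.0f..1.0f).
        val params = activity.window.attributes
        params.screenBrightness = 0.3f
        activity.window.attributes = params
    }
}
```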
Optionally, in an embodiment of the present invention, the predetermined feature information includes M pieces of second feature information, where any piece of second feature information is used to characterize at least one predetermined pose of one predetermined local facial feature of the first user, and M is a positive integer.
Optionally, in an embodiment of the present invention, when the target feature information includes N pieces of first feature information, the target feature information matching the predetermined feature information includes: the local facial feature poses indicated by all N pieces of first feature information match the local facial feature poses indicated by the predetermined feature information, or the local facial feature poses indicated by part of the N pieces of first feature information match the local facial feature poses indicated by the predetermined feature information.
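For illustration, this matching rule can be sketched as a single check (pose strings are assumptions made for the example; the patent stores feature information rather than strings): whether the acquired set matches in full or only in part, every pose required by the predetermined feature information must be present.

```kotlin
// The rule above reduces to: every pose required by the predetermined
// feature information must appear among the acquired poses.
fun featureInfoMatches(
    acquiredPoses: List<String>,     // poses from the N first feature information
    predeterminedPoses: Set<String>  // poses from the predetermined feature information
): Boolean = predeterminedPoses.all { it in acquiredPoses }
```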
Optionally, in an embodiment of the present invention, the electronic device prestores one or more pieces of feature information, where one piece of feature information is used to characterize at least one pose of one or more local facial features of the first user, and one scene mode corresponds to at least one piece of feature information. For example, the predetermined feature information is at least one of the one or more pieces of feature information prestored in the electronic device.
In one example, one local facial feature pose corresponds to one scene mode. For example, a mouth pose in which the mouth of the first user is occluded corresponds to the silent mode, an eye pose in which the eyes of the first user are closed corresponds to the eye protection mode, and an eye pose in which the eyes of the first user are not gazing at the terminal corresponds to the guest mode.
In another example, a combination of multiple local facial feature poses corresponds to one scene mode. For example, the combined pose in which the first user's eyes are not gazing at the electronic device and the first user's mouth is occluded corresponds to the child mode, and the combined pose in which the first user's eyes are not gazing at the electronic device and the first user's mouth is closed corresponds to the guest mode.
In another example, multiple local facial feature poses may correspond to the same scene mode. For example, both the pose in which the first user's mouth is occluded and the pose in which it is closed correspond to the silent mode.
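The three correspondence patterns above can be pictured as a lookup table keyed by a set of poses. The sketch below is illustrative only; the pose and mode names are assumptions, not identifiers from the patent.

```kotlin
// Illustrative table covering the three patterns described above.
val sceneModeTable: Map<Set<String>, String> = mapOf(
    setOf("mouth_occluded") to "silent",                   // one pose -> one mode
    setOf("mouth_closed") to "silent",                     // several poses -> same mode
    setOf("eyes_closed") to "eye_protection",
    setOf("eyes_not_gazing") to "guest",
    setOf("eyes_not_gazing", "mouth_occluded") to "child", // combined poses -> one mode
    setOf("eyes_not_gazing", "mouth_closed") to "guest"
)

// Exact-set lookup: the detected pose combination selects the scene mode.
fun lookupSceneMode(detectedPoses: Set<String>): String? = sceneModeTable[detectedPoses]
```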
For example, take the target local facial feature as the eyes. When the user wants to set the eye protection mode, the user can first trigger the face recognition function of the electronic device by pressing its power key. As shown in fig. 3, the user then closes both eyes, so that the electronic device captures a face image to be recognized in which the user's eyes are closed. If the face image to be recognized matches the predetermined face image, the electronic device further acquires target feature information of the image, which includes eye feature information characterizing that the user's eyes are closed. Because the electronic device prestores predetermined feature information characterizing that the user's eyes are closed, and the scene mode corresponding to this predetermined feature information is the eye protection mode, the electronic device can directly set the scene mode to the eye protection mode when the target feature information matches the prestored predetermined feature information.
According to the control method of an electronic device provided by the embodiments of the present invention, the electronic device acquires a face image of the first user and target feature information of the face image, the target feature information characterizes the target local facial feature pose of the first user, and at least one predetermined local facial feature pose corresponds to one scene mode. Therefore, when the face image matches the predetermined face image and the target feature information matches the predetermined feature information, the electronic device can set its scene mode to the target scene mode based on the user's facial feature pose. This simplifies the operation steps of switching scene modes and avoids the complicated steps and long time consumption of conventional scene mode switching.
Optionally, in an embodiment of the present invention, when M is greater than 1, before step 202, the method further includes steps A1 and A2:
Step A1: the electronic device displays M identifiers on a first interface.
Each identifier indicates one scene mode, and each scene mode corresponds to one predetermined pose of the predetermined local facial feature corresponding to one piece of second feature information. For example, when the predetermined feature information matched with the target feature information corresponds to multiple scene modes, these scene modes may be displayed on the first interface in the form of scene mode icons.
It should be noted that, in the embodiments of the present invention, the size, shape, style, color, and transparency of the scene mode icon are not limited.
Step A2: the electronic device receives a first input for a target identifier.
Wherein the target identifier is at least one of the M identifiers.
Optionally, in an embodiment of the present invention, step 202 includes step 202a:
Step 202a: in response to the first input, the electronic device sets its scene mode to the target scene mode corresponding to the target identifier.
For example, the user's input for the target identifier may include: a click input on the target identifier, a slide input on the target identifier, or another feasible input, which may be determined according to actual use requirements; the embodiments of the present invention are not limited thereto. The target identifier is used to instruct the electronic device to set the scene mode to the corresponding scene mode; for example, the guest mode icon instructs the electronic device to set the scene mode to the guest mode.
For example, the click input may be a single-click input, a double-click input, or an input with any number of clicks, and may be a long-press input or a short-press input. The slide input may be a slide in any direction, for example upward, downward, leftward or rightward; the slide trajectory may be a straight line or a curve, and may be set according to actual requirements.
For example, when the predetermined feature information matched with the target feature information corresponds to multiple scene modes, the electronic device may display the scene modes on the first interface in the form of scene mode icons, and after receiving the user's click input on the scene mode icon corresponding to the target scene mode, directly set the scene mode to the target scene mode.
For example, when the user wants to set the scene mode to the guest mode, as shown in fig. 4 (a), the user turns on the face recognition function without gazing at the electronic device, so that the electronic device captures a face image to be recognized in which the user is not gazing at the electronic device. The feature information prestored in the electronic device for characterizing that the eyes are not gazing at the electronic device corresponds to three scene modes: the driving mode, the guest mode and the child mode. After completing the matching, the electronic device displays a "driving mode" icon, a "guest mode" icon and a "child mode" icon on the first interface (41 in fig. 4 (b)). The user can then have the electronic device set the scene mode to the guest mode by clicking the "guest mode" icon (i.e., the target identifier, e.g., 42 in fig. 4 (b)).
In this way, when the feature information of the local facial feature poses included in the face image recognized by the electronic device corresponds to multiple pieces of predetermined feature information, the scene modes corresponding to those pieces of predetermined feature information are displayed as icons on the first interface, which is convenient for the user to select a scene mode.
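For illustration, steps A1, A2 and 202a can be sketched together as follows; showIcons and awaitIconClick are hypothetical UI hooks standing in for whatever interface the electronic device actually provides.

```kotlin
// Hedged sketch of the M > 1 case: display M identifiers (step A1),
// receive the first input on a target identifier (step A2), then set
// the corresponding scene mode (step 202a).
fun chooseSceneMode(
    candidateModes: List<String>,      // the M scene modes
    showIcons: (List<String>) -> Unit, // hypothetical: render M icons
    awaitIconClick: () -> String,      // hypothetical: block until a click
    setSceneMode: (String) -> Unit
) {
    showIcons(candidateModes)          // step A1
    val target = awaitIconClick()      // step A2: first input
    if (target in candidateModes) {
        setSceneMode(target)           // step 202a
    }
}
```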
Fig. 5 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention. As shown in fig. 5, the electronic device 500 includes an acquiring module 501 and a setting module 502, wherein:
the acquiring module 501 is configured to acquire a face image of a first user and target feature information of the face image, where the target feature information is used to characterize a target local facial feature pose of the first user;
the setting module 502 is configured to set the scene mode of the electronic device 500 to a target scene mode when the face image acquired by the acquiring module 501 matches a predetermined face image and the target feature information matches predetermined feature information; wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
Optionally, the target feature information includes N pieces of first feature information, where any piece of first feature information is used to characterize a pose of one local facial feature of the first user, and N is a positive integer.
Optionally, the predetermined feature information includes M pieces of second feature information, where any piece of second feature information is used to characterize at least one predetermined pose of one predetermined local facial feature of the first user, and M is a positive integer.
Optionally, as shown in fig. 5, the electronic device 500 further includes a display module 503 and a receiving module 504. The display module 503 is configured to display M identifiers on the first interface, where one identifier indicates one scene mode and each scene mode corresponds to one predetermined pose of the predetermined local facial feature corresponding to one piece of second feature information. The receiving module 504 is configured to receive a first input for a target identifier, where the target identifier is at least one of the M identifiers. The setting module 502 is specifically configured to set the scene mode of the electronic device to the target scene mode corresponding to the target identifier in response to the first input received by the receiving module 504.
Optionally, the target local facial feature pose is that the mouth of the first user is occluded, and the target scene mode is the silent mode or the vibration mode.
According to the electronic device provided by the embodiments of the present invention, the electronic device acquires a face image of the first user and target feature information of the face image, the target feature information characterizes the target local facial feature pose of the first user, and at least one predetermined local facial feature pose corresponds to one scene mode. Therefore, when the face image matches the predetermined face image and the target feature information matches the predetermined feature information, the electronic device can set its scene mode to the target scene mode based on the user's facial feature pose, simplifying the operation steps of switching scene modes and avoiding the complicated steps and long time consumption of conventional scene mode switching.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
Taking an electronic device as an example of a terminal device, as shown in fig. 6, fig. 6 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the configuration of the terminal device 100 shown in fig. 6 does not constitute a limitation of the terminal device; the terminal device 100 may include more or fewer components than shown, combine some components, or arrange components differently. In the embodiments of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The input unit 104 is configured to acquire a face image of a first user and target feature information of the face image, where the target feature information is used to characterize a target local facial feature pose of the first user. The processor 110 is configured to set the scene mode of the terminal device 100 to a target scene mode when the face image received by the input unit 104 matches a predetermined face image and the target feature information matches predetermined feature information; wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
According to the terminal device provided by the embodiments of the present invention, the terminal device acquires a face image of the first user and target feature information of the face image, the target feature information characterizes the target local facial feature pose of the first user, and at least one predetermined local facial feature pose corresponds to one scene mode. Therefore, when the face image matches the predetermined face image and the target feature information matches the predetermined feature information, the terminal device can set its scene mode to the target scene mode based on the user's facial feature pose, simplifying the operation steps of switching scene modes and avoiding the complicated steps and long time consumption of conventional scene mode switching.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call. Specifically, after receiving downlink data from a base station, the radio frequency unit 101 delivers it to the processor 110 for processing, and it also transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device 100 provides the user with wireless broadband internet access via the network module 102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data; in phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 6, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device 100, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is the control center of the terminal device 100; it connects the various parts of the entire terminal device 100 through various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the terminal device 100 as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above control method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the control method embodiment of the terminal device, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of controlling an electronic device, the method comprising:
acquiring a face image of a first user and target feature information of the face image, wherein the target feature information is used to characterize a target local facial feature pose of the first user;
setting a scene mode of the electronic device to a target scene mode when the face image matches a predetermined face image and the target feature information matches predetermined feature information;
wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
2. The method according to claim 1, wherein the target feature information comprises N pieces of first feature information, any piece of first feature information is used to characterize a local facial feature pose of the first user, and N is a positive integer.
3. The method according to claim 1 or 2, wherein the predetermined feature information comprises M pieces of second feature information, any piece of second feature information is used to characterize at least one predetermined pose of one predetermined local facial feature of the first user, and M is a positive integer.
4. The method according to claim 3, wherein in case M is greater than 1, before the setting the scene mode of the electronic device to the target scene mode, the method further comprises:
displaying M identifiers on a first interface, wherein one identifier indicates one scene mode, and each scene mode corresponds to one predetermined pose of a predetermined local facial feature corresponding to one piece of second feature information;
receiving a first input for a target identifier, the target identifier being at least one of the M identifiers;
wherein the setting the scene mode of the electronic device to the target scene mode comprises:
in response to the first input, setting the scene mode of the electronic device to a target scene mode corresponding to the target identifier.
5. The method of claim 1, wherein the target local facial feature pose is that the mouth of the first user is occluded, and the target scene mode is a silent mode or a vibration mode.
6. An electronic device, characterized in that the electronic device comprises:
an acquiring module, configured to acquire a face image of a first user and target feature information of the face image, wherein the target feature information is used to characterize a target local facial feature pose of the first user;
a setting module, configured to set a scene mode of the electronic device to a target scene mode when the face image acquired by the acquiring module matches a predetermined face image and the target feature information matches predetermined feature information;
wherein the predetermined feature information is used to characterize a predetermined local facial feature pose of the first user, and the target scene mode is the scene mode corresponding to the predetermined local facial feature pose.
7. The electronic device of claim 6, wherein the target feature information comprises N pieces of first feature information, any piece of first feature information is used to characterize a local facial feature pose of the first user, and N is a positive integer.
8. The electronic device according to claim 6 or 7, wherein the predetermined feature information comprises M pieces of second feature information, any piece of second feature information is used to characterize at least one predetermined pose of one predetermined local facial feature of the first user, and M is a positive integer.
9. The electronic device of claim 8, further comprising:
a display module, configured to display M identifiers on a first interface, wherein one identifier indicates one scene mode, and each scene mode corresponds to one predetermined pose of a predetermined local facial feature corresponding to one piece of second feature information;
a receiving module, configured to receive a first input for a target identifier, where the target identifier is at least one of the M identifiers;
the setting module is specifically configured to set a scene mode of the electronic device to a target scene mode corresponding to the target identifier in response to the first input received by the receiving module.
10. The electronic device of claim 6, wherein the target local facial feature pose is that the mouth of the first user is occluded, and the target scene mode is a silent mode or a vibration mode.
11. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the control method of the electronic device according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the control method of an electronic device according to any one of claims 1 to 5.
CN201911040338.9A 2019-10-29 2019-10-29 Control method of electronic equipment and electronic equipment Pending CN110866465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911040338.9A CN110866465A (en) 2019-10-29 2019-10-29 Control method of electronic equipment and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911040338.9A CN110866465A (en) 2019-10-29 2019-10-29 Control method of electronic equipment and electronic equipment

Publications (1)

Publication Number Publication Date
CN110866465A true CN110866465A (en) 2020-03-06

Family

ID=69654243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911040338.9A Pending CN110866465A (en) 2019-10-29 2019-10-29 Control method of electronic equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN110866465A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435735A (en) * 2020-12-01 2021-03-02 重庆金山医疗器械有限公司 Switching method, device, equipment, medium and system
CN113473015A (en) * 2021-06-08 2021-10-01 荣耀终端有限公司 Holder control method and electronic equipment
CN115225649A (en) * 2022-07-19 2022-10-21 维沃移动通信有限公司 Data synchronization method and device and electronic equipment
CN115866135A (en) * 2022-11-30 2023-03-28 联想(北京)有限公司 Processing method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557678A (en) * 2016-11-09 2017-04-05 珠海格力电器股份有限公司 Intelligent terminal mode switching method and device
CN107742072A (en) * 2017-09-20 2018-02-27 维沃移动通信有限公司 Face identification method and mobile terminal
CN110164440A (en) * 2019-06-03 2019-08-23 清华大学 Electronic equipment, method and medium are waken up based on the interactive voice for sealing mouth action recognition
CN110225196A (en) * 2019-05-30 2019-09-10 维沃移动通信有限公司 Terminal control method and terminal device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106557678A (en) * 2016-11-09 2017-04-05 珠海格力电器股份有限公司 Intelligent terminal mode switching method and device
CN107742072A (en) * 2017-09-20 2018-02-27 维沃移动通信有限公司 Face identification method and mobile terminal
CN110225196A (en) * 2019-05-30 2019-09-10 维沃移动通信有限公司 Terminal control method and terminal device
CN110164440A (en) * 2019-06-03 2019-08-23 清华大学 Electronic equipment, method and medium are waken up based on the interactive voice for sealing mouth action recognition

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435735A (en) * 2020-12-01 2021-03-02 重庆金山医疗器械有限公司 Switching method, device, equipment, medium and system
CN113473015A (en) * 2021-06-08 2021-10-01 荣耀终端有限公司 Holder control method and electronic equipment
CN113473015B (en) * 2021-06-08 2022-03-08 荣耀终端有限公司 Holder control method and electronic equipment
CN115225649A (en) * 2022-07-19 2022-10-21 维沃移动通信有限公司 Data synchronization method and device and electronic equipment
CN115866135A (en) * 2022-11-30 2023-03-28 联想(北京)有限公司 Processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN108255378B (en) Display control method and mobile terminal
CN109002243B (en) Image parameter adjusting method and terminal equipment
CN110062105B (en) Interface display method and terminal equipment
CN108762634B (en) Control method and terminal
CN109032486B (en) Display control method and terminal equipment
CN111124245B (en) Control method and electronic equipment
CN110058836B (en) Audio signal output method and terminal equipment
CN110769155B (en) Camera control method and electronic equipment
CN111142991A (en) Application function page display method and electronic equipment
CN108763317B (en) Method for assisting in selecting picture and terminal equipment
CN110489045B (en) Object display method and terminal equipment
CN109710349B (en) Screen capturing method and mobile terminal
CN110908750B (en) Screen capturing method and electronic equipment
CN110866465A (en) Control method of electronic equipment and electronic equipment
CN109407948B (en) Interface display method and mobile terminal
CN111010523B (en) Video recording method and electronic equipment
CN109257505B (en) Screen control method and mobile terminal
CN108681427B (en) Access right control method and terminal equipment
CN110944236B (en) Group creation method and electronic device
CN110012151B (en) Information display method and terminal equipment
CN111338525A (en) Control method of electronic equipment and electronic equipment
CN109189514B (en) Terminal device control method and terminal device
CN110750200A (en) Screenshot picture processing method and terminal equipment
CN109117037B (en) Image processing method and terminal equipment
CN108833791B (en) Shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200306