CN108153568B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN108153568B
CN108153568B
Authority
CN
China
Prior art keywords
information
electronic device
state information
adjusting
image
Prior art date
Legal status
Active
Application number
CN201711394687.1A
Other languages
Chinese (zh)
Other versions
CN108153568A (en)
Inventor
张学荣
李斌
陈宏星
陈茂刚
罗应文
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201711394687.1A
Publication of CN108153568A
Application granted
Publication of CN108153568B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units


Abstract

An information processing method and an electronic device are provided. An information processing method is applied to an electronic device having an object recognition function, and includes: acquiring an image of the object; determining first state information of an object in the image when the image satisfies a first condition; and adjusting settings of the electronic device based at least on the first state information.

Description

Information processing method and electronic equipment
Technical Field
Embodiments of the present disclosure relate to an information processing method and an electronic device.
Background
Information push, also known as webcasting, is a technology for reducing information overload by periodically transmitting the information a user needs over the Internet according to an agreed technical standard or protocol. Push technology reduces the time a user spends searching for information on the network by delivering the information to the user automatically.
At present, users passively receive various kinds of push information through products such as handheld terminals. In a network environment, pushed information may take the form of telephone calls, mail, short messages, video, broadcasts, mobile applications, websites, and so on.
Disclosure of Invention
At least one embodiment of the present disclosure provides an information processing method that may be applied to an electronic device having an object recognition function, the information processing method including: acquiring an image of the object; determining first state information of an object in the image when the image satisfies a first condition; and adjusting settings of the electronic device based at least on the first state information.
For example, in at least one embodiment, the adjusting settings of the electronic device based at least on the first state information comprises: adjusting display settings of the electronic device according to the first state information; or adjusting the playing setting of the electronic equipment according to the first state information.
For example, in at least one embodiment, the display settings include display content settings or display attribute settings including a color or color temperature mode of the display screen.
For example, in at least one embodiment, the adjusting the display content settings of the electronic device based at least on the first state information comprises: acquiring push information of a corresponding category according to the first state information; generating a display instruction based on the push information; and executing the display instruction to display the push information.
For example, in at least one embodiment, the playback settings include playback attribute settings or settings for a played audio file.
For example, in at least one embodiment, the adjusting settings of the electronic device based at least on the first state information comprises: adjusting a setting of the electronic device based on second state information and the first state information, wherein the second state information includes one or more of environment information, time information, or age information of the object.
At least one embodiment of the present disclosure also provides an electronic device including: an image acquisition unit configured to acquire an image of a subject; and a processor configured to determine first state information of an object in the image when the image satisfies a first condition, and adjust a setting of the electronic device based at least on the first state information.
For example, in at least one embodiment, the electronic device further comprises a display, wherein the adjusting the settings of the electronic device based at least on the first state information comprises: adjusting settings of the display according to the first state information; wherein the adjusting the settings of the display comprises: adjusting the attribute settings of the display or adjusting the push information displayed by the display.
For example, in at least one embodiment, the electronic device further comprises a player, wherein the adjusting the settings of the electronic device based at least on the first state information comprises: adjusting settings of the player according to the first state information; wherein the adjusting the settings of the player comprises: adjusting the attribute settings of the player or adjusting the audio file played by the player.
For example, in at least one embodiment, the electronic device further comprises a wireless transceiver configured to send a request instruction and receive the audio file or the push information; the request instruction is used for requesting the audio file or the push information from another device; and the processor is further configured to generate the request instruction based on the first state information.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments are briefly introduced below. It is apparent that the drawings described below relate only to some embodiments of the present disclosure and do not limit the present disclosure.
Fig. 1A is a first schematic diagram of an application scenario provided by an embodiment of the present disclosure;
Fig. 1B is a second schematic diagram of an application scenario provided by an embodiment of the present disclosure;
Fig. 2 is a first flowchart of an information processing method provided by an embodiment of the present disclosure;
Fig. 3 is a second flowchart of an information processing method provided by an embodiment of the present disclosure;
Fig. 4A is a schematic diagram of an image containing an object, provided by an embodiment of the present disclosure;
Fig. 4B is a schematic diagram of display content provided based on the image of Fig. 4A, according to an embodiment of the present disclosure;
Fig. 5 is a block diagram of an electronic device provided by an embodiment of the present disclosure;
Fig. 6A is a block diagram of example one of obtaining display content, provided by an embodiment of the present disclosure;
Fig. 6B is a system diagram of example two of obtaining display content, provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings of the embodiments. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the described embodiments without inventive effort fall within the scope of protection of the present disclosure.
Reference throughout this disclosure to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure; multiple occurrences of "one embodiment" or "an embodiment" should not be understood as necessarily all referring to the same embodiment. Moreover, the use of the same reference numeral across examples should not be read as limiting all such products or terminals to the illustrated example. For example, the electronic device 100 in Fig. 1A is a smartphone, but the present disclosure does not require that the electronic device 100 be a smartphone.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs.
Traditional information and content push cannot be targeted and personalized, so a large number of users habitually ignore pushed information, and the pushed information cannot meet users' differentiated needs.
The electronic device according to embodiments of the present disclosure, that is, a smart client (for example, a mobile phone), may acquire authentication information during the power-on or screen-unlocking process, extract first state information from the acquired authentication information (for example, the first state information may represent a user's differentiated needs), and adjust certain settings of the electronic device using the first state information (for example, including but not limited to obtaining differentiated push information based on the state information).
The following describes a process of acquiring authentication information by taking an intelligent terminal as an example in conjunction with fig. 1A and 1B.
The electronic device 100 shown in Fig. 1A is an intelligent terminal. The intelligent terminal 100 may access the base station 110 through a mobile communication network and then complete an authentication process through communication between the base station 110 and a core network. It should be noted that the electronic device 100 may also access the core network through another wireless access network (e.g., Wi-Fi or Bluetooth) to complete the authentication process.
In at least one example, the authentication information may be image information of a human face acquired by an image acquisition device; in this case, the electronic device 100 is required to have face recognition and facial emotion recognition functions.
In at least one example, the authentication information may also be image information of a fingerprint captured by the image capture device; in this case, the electronic device 100 further has fingerprint recognition and finger-tremor recognition functions.
The process of unlocking the display screen of the electronic device 100 is described below with reference to Fig. 1B.
The electronic device 100 of Fig. 1B has a lock screen state 101 and an unlocked state 102. In the lock screen state 101, a user or operator cannot launch the applications in the electronic device 100, but can input authentication information to complete an authentication process and unlock the screen. For example, the authentication information in Fig. 1B is an image 103 captured by an image capture device; that is, the electronic device 100 in Fig. 1B may complete the authentication process through the image 103 captured while in the lock screen state 101, and if the authentication passes, the electronic device 100 enters the unlocked state 102. When the electronic device 100 is in the unlocked state 102, the user or operator may launch applications on the display screen by a touch operation such as a tap, although the present disclosure does not limit the manner in which applications on the display screen are launched.
It is understood that the electronic device 100 of Figs. 1A and 1B may also be a computer terminal, a tablet (PAD), or the like; embodiments of the present disclosure do not limit the type of terminal.
The information processing method 200 is described below in conjunction with Fig. 2.
Fig. 2 provides an information processing method 200, which can be applied to the electronic device 100 having an object recognition function. Referring to Fig. 2, the information processing method 200 may include: S210, acquiring an image of the object; S220, when the image satisfies a first condition, determining first state information of the object in the image; and S230, adjusting the settings of the electronic device 100 based at least on the first state information.
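For illustration only, the following minimal Python sketch strings the three steps of method 200 together. The `Device` class and all of its methods are hypothetical stand-ins introduced by the editor, not an API defined by this disclosure.

```python
class Device:
    """Hypothetical electronic device exposing the three steps of method 200."""

    def acquire_image(self):
        return "face_or_fingerprint_image"      # S210: acquire an image of the object

    def satisfies_first_condition(self, image):
        return True                             # S220 (part 1): e.g., authentication passes

    def extract_first_state(self, image):
        return "sad"                            # S220 (part 2): first state information

    def adjust_settings(self, first_state):
        print("adjusting settings for state:", first_state)  # S230

def information_processing(device):
    image = device.acquire_image()                   # S210
    if device.satisfies_first_condition(image):      # S220
        first_state = device.extract_first_state(image)
        device.adjust_settings(first_state)          # S230

information_processing(Device())
```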
In at least one embodiment, S210 of Fig. 2, acquiring an image of the object, may be acquiring an image containing authentication information; for example, a fingerprint image or a face image is acquired.
In at least one embodiment, the first condition of S220 in Fig. 2 may be an authentication condition for unlocking the display screen (as shown in Fig. 1B), or an authentication condition at power-on. For example, when face authentication is adopted, the first condition may be that the acquired image of the face satisfies the authentication condition for unlocking the screen; when fingerprint authentication is adopted, the first condition may be that the acquired image of the fingerprint satisfies the authentication condition for unlocking the screen.
Taking face recognition authentication as an example, the process of performing S210 and S220 is illustrated below. Completing face-based recognition and authentication requires four steps: face image detection, face image preprocessing, face image feature extraction, and matching and recognition. The following four paragraphs briefly describe each step.
Face image detection serves as preprocessing for face recognition; that is, the position and size of the face are accurately located in the image. A face image contains rich pattern features, such as histogram features, color features, template features, and structural features. Face detection extracts this useful information and uses these features to detect the face.
Face image preprocessing processes the image based on the face detection result to ultimately serve feature extraction. Because the acquired original image is limited by various conditions and subject to random interference, it usually cannot be used directly; image preprocessing such as gray-scale correction and noise filtering must be performed at an early stage of image processing. For a face image, the preprocessing process mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
Face image feature extraction is performed according to certain features of the face. The features usable in face recognition include visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and the like. Face feature extraction, also known as face characterization, is the process of modeling the features of a face. Methods for extracting face features fall into two main categories: knowledge-based characterization methods and characterization methods based on algebraic features or statistics. A knowledge-based characterization method mainly obtains feature data helpful for face classification from the shape descriptions of facial organs and the distances between them; its feature components typically include the Euclidean distances, curvatures, and angles between feature points. The face is composed of parts such as the eyes, nose, mouth, and chin; geometric descriptions of these parts and of their structural relationships can serve as important features for recognizing a face, and these features are called geometric features. Knowledge-based face characterization mainly includes geometric-feature-based methods and template matching methods.
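As a rough numerical illustration of the geometric (knowledge-based) characterization just described, the sketch below computes Euclidean distances and an angle between a few facial feature points. The landmark coordinates and the particular feature components are editorial assumptions chosen for illustration, not values prescribed by this disclosure.

```python
import math

# Hypothetical facial feature points as (x, y) pixel coordinates.
landmarks = {
    "left_eye": (120.0, 150.0),
    "right_eye": (200.0, 150.0),
    "nose_tip": (160.0, 200.0),
    "mouth": (160.0, 250.0),
}

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Feature components: Euclidean distances between feature points...
eye_dist = euclidean(landmarks["left_eye"], landmarks["right_eye"])
eye_mouth = euclidean(landmarks["left_eye"], landmarks["mouth"])

# ...and an angle: the angle at the nose tip subtended by the two eyes.
def angle_at(p, a, b):
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

nose_angle = angle_at(landmarks["nose_tip"], landmarks["left_eye"], landmarks["right_eye"])

# Normalizing by the inter-eye distance makes the features scale-invariant.
feature_vector = [eye_mouth / eye_dist, nose_angle]
print(feature_vector)
```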
In the matching and recognition stage, the extracted feature data of the face image is searched against and matched with the feature templates stored in the database of the electronic device 100; a threshold is set, and when the similarity exceeds the threshold, the matching result is output. Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity information of the face according to the degree of similarity. This process falls into two categories: confirmation, a one-to-one image comparison, and recognition, a one-to-many image matching comparison.
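The following sketch illustrates, under assumed data structures, the two matching modes described above: one-to-one confirmation and one-to-many recognition, both gated by a similarity threshold. The cosine-similarity measure and the threshold value are illustrative assumptions, not choices made by this disclosure.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

THRESHOLD = 0.9  # hypothetical: output a match only when similarity exceeds this

def confirm(probe, template):
    # Confirmation: one-to-one comparison against a single stored template.
    return cosine_similarity(probe, template) >= THRESHOLD

def identify(probe, database):
    # Recognition: one-to-many matching against all templates in the database.
    best_id, best_score = None, THRESHOLD
    for user_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id  # None if no template exceeds the threshold

db = {"alice": [0.9, 0.1, 0.4], "bob": [0.2, 0.8, 0.5]}
print(confirm([0.88, 0.12, 0.42], db["alice"]))  # True: same person
print(identify([0.88, 0.12, 0.42], db))          # "alice"
```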
When S220 is executed, determining that the face image satisfies the first condition means that the authentication process recognizes that the captured image of the object matches at least one image stored in the electronic device 100. The authenticated image is then further analyzed to derive the first state information (e.g., including but not limited to emotional state information) reflected by the image. Subsequently, the settings of the electronic device 100 (e.g., including but not limited to display settings, play settings, or security level settings) may be further adjusted based on the first state information.
The steps shown in Fig. 2 are exemplarily described below in connection with two examples.
Example one uses face recognition for authentication. S210 is first performed to acquire an image of the face; S220 then determines whether the acquired face image satisfies the authentication condition (i.e., the first condition), and if so, S220 further determines the emotional state information reflected by the face in the image. Finally, when S230 is performed, adjusting the settings of the electronic device 100 based at least on the first state information may be adjusting settings (e.g., display settings, play settings) of the electronic device 100 based on the emotional state information. For example, the display settings may include the brightness setting of the display screen, the wallpaper setting, or the display setting of push information.
Example two uses fingerprint recognition for authentication. S210 is first performed to capture a fingerprint image of a fingertip; S220 then determines whether the captured fingerprint image satisfies the authentication condition (i.e., the first condition), and if so, S220 further determines the stability information of the finger corresponding to the fingerprint in the image. Finally, when S230 is performed, adjusting the settings of the electronic device 100 based at least on the first state information may be adjusting the settings of the electronic device 100 based on the stability information. For example, if the identified first state information characterizes a degree of finger tremor exceeding a threshold, the security level setting of the electronic device 100 is raised. Specifically, adjusting the settings of the electronic device 100 may cause certain programs in the electronic device 100 to be automatically deleted or hidden.
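A minimal sketch of example two follows, assuming a normalized tremor score and a hypothetical threshold; the estimator and the hiding of payment applications are illustrative stand-ins for whatever the device actually implements.

```python
TREMOR_THRESHOLD = 0.5  # hypothetical threshold on a normalized tremor score

def estimate_tremor(finger_positions):
    # Stub estimator: mean absolute frame-to-frame displacement of the finger
    # during fingerprint capture, as a stand-in for real stability analysis.
    diffs = [abs(b - a) for a, b in zip(finger_positions, finger_positions[1:])]
    return sum(diffs) / max(len(diffs), 1)

def apply_security_policy(finger_positions, payment_apps):
    # If the first state information characterizes finger tremor exceeding
    # the threshold, raise the security level: hide payment-related programs.
    if estimate_tremor(finger_positions) > TREMOR_THRESHOLD:
        for app in payment_apps:
            print("hiding:", app)

# Successive normalized finger positions sampled during authentication.
apply_security_policy([0.1, 0.9, 0.2, 0.8], ["wallet", "bank"])
```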
Embodiments of the present disclosure may adjust various settings of the electronic device 100 based on further extraction and analysis of the image of the object (i.e., the information to be authenticated), thereby making use of the richer content carried in the image information collected to satisfy the authentication process (e.g., the authentication image captured when unlocking the screen), and on that basis serving the user's various differentiated needs, for example by adjusting the display settings, the play settings, or the security level settings of the entire electronic device 100 or of certain applications in the electronic device 100.
The present disclosure does not limit the specific authentication method employed by the electronic device 100.
S230 of Fig. 2 is explained below with reference to Fig. 3.
Fig. 3 exemplarily provides an implementation of S230. As shown in Fig. 3, S230, adjusting the settings of the electronic device 100 based at least on the first state information, may include: S310, adjusting the display settings of the electronic device 100 according to the first state information; or S320, adjusting the play settings of the electronic device 100 according to the first state information.
For example, when the first state information referred to at S220 is emotional state information, performing step S230 may include performing: s310, adjusting the display setting of the electronic device 100 according to the emotional state information, or S320, adjusting the play setting of the electronic device 100 according to the emotional state information.
It should be noted that the execution order of S310 and S320 is not limited in the embodiments of the present disclosure. For example, in one embodiment, S310 may be performed first and then S320 may be performed, and in another embodiment, S320 may be performed first and then S310 may be performed, or S310 and S320 may be performed simultaneously. In one embodiment, it is also possible to perform only S310 or only S320.
In addition, Fig. 3 is only used to illustrate two example operations when S230 is performed; embodiments of the present disclosure do not limit the type of operation performed at S230. For example, executing S230 may also set the security level of applications in the electronic device 100. Specifically, when the first state information obtained at S220 indicates that the user is in a dangerous state (for example, the number of finger tremors exceeds a threshold, or the emotional state expressed by the face is panic), performing S230 may automatically delete or hide applications related to payment.
The display settings referred to in S310 of Fig. 3 include display content settings or display attribute settings; the display attributes may in turn include personalized attributes or attributes of the display screen. For example, the personalized attributes include, but are not limited to, color temperature, wallpaper, text size, and font style, and the attributes of the display screen include, but are not limited to, brightness, sleep, auto-rotation, and sunlight readability enhancement.
For example, the steps of S310 in Figs. 2 and 3 are illustrated by face authentication and adjustment of the display color temperature. S210 is executed to acquire the image of the face; after S220 judges that the acquired face image satisfies the authentication condition, it further determines that the emotional state reflected by the face in the image is sadness. In the corresponding S230 (or S310), the color or color temperature mode of the display screen of the electronic device 100 may be adjusted based on this emotional state information, so that the corresponding color or color temperature mode may help ease the emotional state of the user or operator.
In at least one embodiment, adjusting the display content settings of the electronic device 100 based at least on the first state information at S310 may further include: acquiring push information of a corresponding category according to the first state information; generating a display instruction based on the push information; and executing the display instruction to display the push information. For example, the push information of the corresponding category may be obtained locally, or from a cloud server via a network. For example, the display instruction may be a wallpaper-switch instruction. Reference may also be made to the description hereinafter.
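The sketch below shows one way those three sub-steps could fit together: look up a push-information category for the first state information, fetch the content locally or from the cloud, and build a display instruction. The category mapping, store interfaces, and instruction format are all editorial assumptions.

```python
# Hypothetical correspondence between first state information and categories.
CATEGORY_BY_STATE = {"sad": "comforting", "panic": "calming", "happy": "neutral"}

def build_display_instruction(first_state, local_store, fetch_from_cloud):
    category = CATEGORY_BY_STATE.get(first_state, "neutral")
    # Push information may be obtained locally, or from a cloud server via the network.
    content = local_store.get(category) or fetch_from_cloud(category)
    return {"op": "display", "content": content}   # e.g., a wallpaper-switch instruction

instruction = build_display_instruction(
    "sad",
    local_store={"comforting": "warm_wallpaper.png"},
    fetch_from_cloud=lambda category: f"cloud/{category}.png",
)
print(instruction)  # {'op': 'display', 'content': 'warm_wallpaper.png'}
```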
The play settings of S320 in Fig. 3 include play attribute settings or settings of the played audio file. Adjusting a play attribute may adjust the volume of the played sound (e.g., including but not limited to ring volume, media volume, alarm volume, or call volume). Adjusting the setting of the played audio file includes replacing the prompt tone, the incoming-call ring tone, or the audio file corresponding to the alarm. For example, replacing the prompt tone includes, but is not limited to, replacing the dial-pad tone, the touch prompt tone, the screen-lock tone, or the screenshot prompt tone. For example, the played audio file includes an incoming-call audio file or an alarm audio file.
For example, the steps of Figs. 2 and 3 related to S320 are illustrated by face authentication and adjustment of the play settings. S210 is executed to acquire the image of the face, and S220 judges whether the acquired face image satisfies the authentication condition; once it is satisfied, it is further determined that the emotional state reflected by the face in the image is sadness. In S320, the play attribute settings of the electronic device 100 or the settings of the played audio file may be adjusted based on this emotional state information, so that the corresponding play volume or played audio file is as gentle as possible, to ease the emotional state of the user or operator.
In at least one embodiment, adjusting the play attributes of the electronic device 100 or the settings of the played audio file based at least on the first state information at S320 may further include: acquiring push information of a corresponding category according to the first state information; generating a play instruction based on the push information; and executing the play instruction to play the push information. For example, the push information of the corresponding category may be obtained locally or from a server through a network. For example, the play instruction may be a setting instruction to change the incoming-call ring tone or the alarm ring tone. For example, the ring tone setting may be adjusted when the electronic device 100 switches to the unlocked state 102 of Fig. 1B.
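Analogously to the display instruction above, a play instruction can be sketched as a small setting object; the tone file names and instruction fields below are hypothetical, chosen only to make the idea concrete.

```python
def build_play_instruction(first_state):
    # Hypothetical: map the emotional state to a ring-tone setting instruction,
    # e.g., change the incoming-call or alarm prompt tone on unlock.
    tone = "soothing_ringtone.ogg" if first_state == "sad" else "default_ringtone.ogg"
    return {"op": "set_ringtone", "file": tone}

print(build_play_instruction("sad"))  # {'op': 'set_ringtone', 'file': 'soothing_ringtone.ogg'}
```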
The following further describes the process of performing S230 in Fig. 2, taking the combined use of first state information and second state information as an example.
In at least one embodiment, S230, adjusting the settings of the electronic device 100 based at least on the first state information, may further comprise: adjusting the settings of the electronic device 100 based on second state information and the first state information, wherein the second state information includes one or more of environment information, time information, or age information of the object.
Further, adjusting the settings of the electronic device 100 using the second state information and the first state information in S230 may include adjusting one or more of the display settings, the play settings, or the security attribute settings of related applications of the electronic device 100 based on the second state information and the first state information. For example, the display settings may in turn include display content settings (e.g., changing push information or wallpaper) or display attribute settings (e.g., changing the color temperature or other display parameters).
For example, the environment information may include information related to the environment in which the operator operates the electronic device, such as noise information, brightness information, or temperature information.
The following exemplarily describes, in connection with example three and example four, how the first state information and the second state information are used to adjust the settings of the electronic device.
Example three (portions that overlap with examples one and two are not repeated): noise information of the surrounding environment is used as the second state information. Assume that the noise information is greater than a set threshold (i.e., the second state information indicates that the device is in a noisy environment), and that the first state information determined by performing S220 is an emotional state of sadness. In this case, when S230 is performed, the display settings may be adjusted without adjusting the play settings (because of the loud noise); specifically, the display content settings may be adjusted (for example, a wallpaper containing uplifting or motivational text is obtained and displayed as the screen of the electronic device 100 is unlocked).
Example four also takes noise information of the surrounding environment as the second state information. The difference from example three is that the noise detected at the time of unlocking is less than the set threshold (i.e., the second state information indicates a quiet environment), and the first state information determined by performing S220 is an emotional state of sadness. In this case, S230 may adjust the display settings, the play settings, or preferentially the play settings. Specifically, the settings for playing audio content may be adjusted (e.g., music or songs that can ease the mood are obtained and played as the screen of the electronic device 100 is unlocked).
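Examples three and four reduce to a small decision rule over the two pieces of state information. The sketch below encodes that rule; the decibel threshold, file names, and device methods are illustrative assumptions only.

```python
NOISE_THRESHOLD_DB = 60.0  # hypothetical boundary between noisy and quiet

class Device:
    def set_wallpaper(self, name): print("wallpaper ->", name)
    def play_audio(self, name):    print("playing   ->", name)

def adjust_on_unlock(first_state, ambient_noise_db, device):
    if first_state != "sad":
        return
    if ambient_noise_db > NOISE_THRESHOLD_DB:
        # Example three: noisy environment -> adjust display content only.
        device.set_wallpaper("motivational_text_wallpaper.png")
    else:
        # Example four: quiet environment -> preferentially adjust playback.
        device.play_audio("mood_easing_song.mp3")

adjust_on_unlock("sad", 75.0, Device())  # noisy: changes the wallpaper
adjust_on_unlock("sad", 40.0, Device())  # quiet: plays soothing music
```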
The following exemplarily describes a process of adjusting display content based on face recognition, in conjunction with Fig. 1B, Fig. 2, Fig. 3, and Figs. 4A and 4B.
The initial state of the electronic device 100 in Fig. 1B is the lock screen state 101. The object image 103 of Fig. 4A is then obtained by executing S210 of Fig. 2; S220 is executed to determine that the image 103 of Fig. 4A satisfies the authentication condition (e.g., the unlocking authentication condition), at which point the first state information of the user or operator may be determined from the image 103 to be sadness; finally, S230 (e.g., S310) is executed to adjust the display content settings. The picture in the unlocked state 102 of Fig. 1B is then changed to the wallpaper 501 of Fig. 4B. Since the content of the wallpaper 501 may be warm and lovely, it can be used to ease the mood of the object in the image 103.
It should be noted that, in order to better adjust the settings of the electronic device 100 after the first state information is determined, all pictures (including wallpapers) stored on the electronic device 100 may be classified in advance according to the first state information, and a correspondence between each category of pictures and the first state information may be established.
In addition, at least one embodiment of the present disclosure may also set the correspondence between the push information and the first state information on a server that generates the push information, and the server then pushes information of the corresponding category based on a request from the electronic device 100. For example, a request instruction for requesting push information from another device (e.g., the server 750 in Fig. 6B) is generated based on the first state information, and the electronic device 100 may send the request instruction to the cloud server through a network by wireless communication. The cloud server then responds to the request instruction by sending the classification information stored on the server to the electronic device 100 over the network as push information, and the electronic device 100 receives the classification information.
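One plausible shape for this request instruction and its round trip is sketched below using only the Python standard library; the server URL, endpoint, and JSON schema are editorial assumptions and are not defined by this disclosure.

```python
import json
import urllib.request

def request_push_info(first_state, server_url="https://example.com/push"):
    # Generate a request instruction from the first state information and
    # send it to the cloud server; the server answers with classified
    # push information for that state category.
    payload = json.dumps({"first_state": first_state}).encode("utf-8")
    req = urllib.request.Request(
        server_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as response:
        return json.loads(response.read())  # e.g., {"category": ..., "items": [...]}
```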
The electronic device 100 of the present disclosure is described below with reference to Figs. 5, 6A, and 6B.
The electronic device 100 of Fig. 5 may comprise an image acquisition unit 601 and a processor 602. The image acquisition unit 601 is configured to acquire an image of an object, and the processor 602 is configured to determine first state information of the object in the image (e.g., the image 103) when the image satisfies a first condition, and to adjust the settings of the electronic device 100 based at least on the first state information.
The image acquisition unit 601 may further comprise a video camera and an image capture card (not shown in the figure). The video camera converts an optical image of the object into a video or image signal, which is then digitized by the image capture card to form a digital image for use by the processor 602. For example, the video camera may be a camera module.
The image acquisition unit 601 may acquire a still image or a moving image of a human face. The processor 602 may process the images acquired by the image acquisition unit 601 to obtain data reflecting different facial expressions. When the user is within the shooting range of the acquisition device, the image acquisition unit 601 automatically searches for and captures the user's face image.
In addition, in at least one example, the authentication information may also be fingerprint image information collected by the image acquisition device; in this case, the electronic device 100 further has fingerprint recognition and finger-tremor recognition functions. Specifically, the image acquisition unit 601 is configured to capture a fingerprint image of a fingertip; the processor 602 is configured to determine whether the captured fingerprint image satisfies the authentication condition and, if the authentication condition (i.e., the first condition) is satisfied, to further determine the stability information of the finger corresponding to the fingerprint in the image. The processor 602 may then adjust the settings of the electronic device 100 based at least on this first state information, that is, based on the stability information. For example, if the identified first state information characterizes a degree of finger tremor exceeding a threshold, the security level setting of the electronic device 100 is raised; specifically, certain programs in the electronic device 100 may be automatically deleted or hidden.
The functions of the processor 602 and the operations it performs are described below, taking an image of a human face as an example; that is, the process by which the processor 602 determines whether the image (e.g., the image 103) satisfies the first condition, and the process of adjusting the electronic device 100, are described in detail.
First, when completing face-based recognition and authentication, the processor 602 performs face image detection, face image preprocessing, face image feature extraction, and matching and recognition.
The face image detection, face image preprocessing, face image feature extraction, and matching and recognition performed by the processor 602 proceed as described above for S210 and S220: the original image is supplied by the image acquisition unit 601, and the extracted feature data is searched against and matched with the feature templates stored in the database of the electronic device 100, with a match output when the similarity exceeds the set threshold. The details are not repeated here.
Next, the processor 602 determines that the face image satisfies the first condition; that is, the processor 602 recognizes through the authentication process that the image of the object captured by the image acquisition unit 601 matches at least one image stored in the electronic device 100. Thereafter, the processor 602 may further analyze the authenticated image to obtain the first state information (e.g., including but not limited to emotional state information) reflected by the image. Subsequently, the processor 602 may further adjust the settings of the electronic device 100 (e.g., including but not limited to display settings, play settings, or security level settings) according to the first state information.
It should be noted that the present disclosure does not limit the face authentication and recognition process executed by the processor 602; the above description of that process merely serves to explain the technical solution of the present disclosure.
As shown in Fig. 6A, in at least one embodiment, the electronic device 100 further comprises a display 604, and adjusting the settings of the electronic device 100 based at least on the first state information may comprise: adjusting the settings of the display 604 according to the first state information, where adjusting the settings of the display 604 comprises adjusting the attribute settings of the display 604.
Adjusting the attributes of the display 604 means adjusting the personalized attribute settings or the attribute settings of the display screen. For example, the personalized attributes include, but are not limited to, color temperature, wallpaper, text size, and font style, and the attributes of the display screen include, but are not limited to, brightness, sleep, auto-rotation, and sunlight readability enhancement.
In at least one embodiment, as described above, after the processor 602 determines that the image satisfies the first condition and analyzes the authenticated image to obtain the first state information (e.g., including but not limited to emotional state information), the processor 602 may further adjust the settings of the display 604 of the electronic device 100 according to the first state information.
The processing of the processor 602 is briefly described below with reference to example five.
In example five, if the processor 602 determines that the object is in a sad state, the processor 602 generates an instruction for adjusting the personalized attributes and then adjusts the personalized settings of the display 604 according to the generated instruction. For example, the instruction may adjust the color temperature to a softer value, or change the unlock wallpaper to one that makes people feel happy.
In at least one embodiment, the processor 602 may adjust the settings of the display 604 of the electronic device 100 based on the first state information and the second state information (e.g., including but not limited to environment state information, age state information, or time state information), for example by adjusting the attribute settings of the display screen of the display 604 (e.g., including but not limited to brightness, sleep, auto-rotation, or sunlight readability enhancement).
The process of adjusting the settings of the display 604 of the electronic device 100 using the first state information and the second state information is briefly described below in connection with examples six and seven. Example six takes time information as the second state information, and example seven takes environment information as the second state information.
In example six, if the processor 602 determines that the object is in a sad state and the time information indicates that it is evening, the processor 602 generates an instruction for adjusting the attributes of the display screen according to the evening time information and the sad-state information, and then adjusts the brightness of the display screen according to the generated instruction so that the brightness better suits evening viewing habits, while also adjusting the color temperature to make the screen display softer.
In example seven, if the processor 602 determines that the object is in a sad state and the environment information indicates that the operator is in bright sunlight, the processor 602 generates, according to the environment information and the sad-state information, an instruction for adjusting the attributes of the display screen and the display content; it then displays pushed content that can ease the sadness and, according to the instruction, improves the readability of the display screen in sunlight so that the displayed content is easier for the operator to read.
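Examples six and seven can be condensed into one rule over emotion, time, and ambient light. The hour boundary, lux threshold, and display methods below are hypothetical values chosen only to make the logic concrete.

```python
class Display:
    def set_brightness(self, level):         print("brightness ->", level)
    def set_color_temperature(self, kelvin): print("color temp ->", kelvin, "K")
    def enable_sunlight_readability(self):   print("sunlight readability on")
    def show_content(self, name):            print("displaying ->", name)

def adjust_display(emotion, hour, ambient_lux, display):
    if emotion != "sad":
        return
    if hour >= 18:                           # example six: sad + evening
        display.set_brightness(0.3)          # dimmer, to suit evening viewing
        display.set_color_temperature(3000)  # warmer, softer display
    if ambient_lux > 10000:                  # example seven: sad + bright sunlight
        display.enable_sunlight_readability()
        display.show_content("comforting_push_article")

adjust_display("sad", hour=20, ambient_lux=50, display=Display())     # evening case
adjust_display("sad", hour=12, ambient_lux=30000, display=Display())  # sunlight case
```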
It should be noted that when the display content of the display 604 is adjusted, the display content may be stored in the memory 603 classified in advance (as shown in Fig. 6A, classification information is stored in the memory 603 in advance, and the classification information may include classified pictures).
However, in at least one embodiment, the memory 603 does not store the classification information in advance; instead, the classification information is stored in the cloud (as shown in Fig. 6B). Accordingly, the processor 602 is further configured to generate a request instruction 711 based on the first state information, the request instruction 711 being used to request push information from another device (e.g., the server 750), and the electronic device 100 may further include a wireless transceiver 606 (as shown in Fig. 6B). The wireless transceiver 606 is configured to send the request instruction 711 to the cloud server 750 over the network 760. The cloud server 750 then responds to the request instruction 711 by sending the classification information stored on the server 750 to the electronic device 100 over the network 760 as push information, and the electronic device 100 receives the classification information via the wireless transceiver 606 (for example, as shown in Fig. 6B, when adjusting the display settings the classification information may include, but is not limited to, the picture 712).
As shown in Fig. 6A, in at least one embodiment, the electronic device 100 further includes a player 605, and adjusting the settings of the electronic device 100 based at least on the first state information includes: adjusting the settings of the player 605 according to the first state information, where adjusting the settings of the player 605 includes adjusting the attribute settings of the player 605 or adjusting the audio file played by the player 605.
Adjusting an attribute of the player 605 may adjust the volume of the played sound (e.g., including but not limited to ring volume, media volume, alarm volume, or call volume). Adjusting the setting of the audio file played by the player 605 includes replacing the prompt tone, the incoming-call ring tone, or the audio file corresponding to the alarm. For example, replacing the prompt tone includes, but is not limited to, replacing the dial-pad tone, the touch prompt tone, the screen-lock tone, or the screenshot prompt tone.
In at least one embodiment, as described above, after the processor 602 determines that the image satisfies the first condition and obtains the first state information (e.g., including but not limited to emotional state information) from the authenticated image, the processor 602 further adjusts the settings of the player 605 according to the first state information.
For example, in one example, if the processor 602 determines that the object is in a sad state, the processor 602 generates an instruction to adjust the volume of the played sound and then adjusts the volume at which the player 605 plays one or more audio files according to the generated instruction (e.g., the volume can be turned down by the instruction to prevent excessive volume from worsening the sad mood).
For example, in one example, if the processor 602 determines that the object is in a sad state, the processor 602 may generate a setting instruction for adjusting the played audio files and then adjust one or more audio files of the player 605 according to the generated instruction (e.g., the unlock ring tone may be changed to more soothing music by the instruction).
For example, in one example, if the processor 602 determines that the object is in a sad state and the time information indicates that it is evening, the processor 602 generates an instruction for adjusting the player 605 according to the evening time information and the sad-state information, and then adjusts the attributes of the player 605 or the played audio file according to the generated instruction so as to better suit the habit of easing sadness at night.
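The three player examples above share one pattern; the sketch below folds them into a single function, with the volume level, file names, and player methods as illustrative assumptions.

```python
class Player:
    def set_volume(self, level):     print("volume      ->", level)
    def set_unlock_tone(self, name): print("unlock tone ->", name)

def adjust_player(emotion, hour, player):
    if emotion != "sad":
        return
    player.set_volume(0.3)                        # turn the volume down
    player.set_unlock_tone("soothing_music.ogg")  # gentler audio file on unlock
    if hour >= 18:
        # Evening: follow the night-time habit of easing sadness.
        player.set_unlock_tone("calm_evening_music.ogg")

adjust_player("sad", hour=21, player=Player())
```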
It should be noted that the audio files played by the player 605 may be stored in the memory 603 classified in advance (as shown in Fig. 6A, classification information is stored in the memory 603 in advance, and the classification information may include classified audio files).
However, in at least one embodiment, the memory 603 does not store the classification information in advance; instead, the classification information is stored in the cloud. Accordingly, the processor 602 is further configured to generate a request instruction 711 based on the first state information, the request instruction 711 being used to request push information from another device (e.g., the server 750), and the electronic device 100 may further include the wireless transceiver 606 (as shown in Fig. 6B). The wireless transceiver 606 is configured to send the request instruction 711 to the cloud server 750 over the network 760. The cloud server 750 then responds to the request instruction 711 by sending the classification information stored on the server 750 to the electronic device 100 over the network 760 as push information, and the electronic device 100 receives the classification information via the wireless transceiver 606 (for example, as shown in Fig. 6B, when adjusting the player settings the classification information may include, but is not limited to, the audio file 713).
The drawings of the embodiments of the present disclosure relate only to the structures involved in the embodiments; other structures may follow conventional designs. The embodiments of the present disclosure and the features of the embodiments may be combined with each other in the absence of conflict.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. An information processing method applied to an electronic device having an object recognition function, the information processing method comprising:
acquiring an image of the object;
determining first state information of an object in the image when the image satisfies a first condition; and
adjusting a security level setting of the electronic device based at least on the first state information,
wherein the first condition is an authentication condition of the electronic device, and
wherein authentication is performed through fingerprint identification to determine that the authentication condition of the electronic device is met, the first state information being stability information of a finger corresponding to the identified fingerprint image.
2. The information processing method of claim 1, wherein the adjusting a security level setting of the electronic device based at least on the first state information comprises:
adjusting display settings of the electronic device according to the first state information; or
adjusting the play settings of the electronic device according to the first state information.
3. The information processing method of claim 2, wherein the display setting comprises a display content setting or a display attribute setting, the display attribute comprising a color or color temperature mode of the display screen.
4. The information processing method of claim 3, wherein adjusting the display content settings of the electronic device based at least on the first state information comprises:
acquiring push information of a corresponding category according to the first state information;
generating a display instruction based on the push information; and
executing the display instruction to display the push information.
5. The information processing method according to claim 2, wherein the playback setting includes a playback attribute setting or a setting of a played audio file.
6. The information processing method of any of claims 1-5, wherein the adjusting a security level setting of the electronic device based at least on the first state information comprises:
adjusting a setting of the electronic device based on second state information and the first state information, wherein the second state information includes one or more of environmental information, time information, or age information of the subject.
7. An electronic device, comprising:
an image acquisition unit configured to acquire an image of a subject; and
a processor configured to determine first state information of an object in the image when the image satisfies a first condition, and adjust a security level setting of the electronic device based at least on the first state information,
wherein the first condition is an authentication condition of the electronic device, and
wherein authentication is performed through fingerprint identification to determine that the authentication condition of the electronic device is met, the first state information being stability information of a finger corresponding to the identified fingerprint image.
8. The electronic device of claim 7, further comprising a display, wherein the adjusting a security level setting of the electronic device based at least on the first state information comprises:
adjusting settings of the display according to the first state information;
wherein the adjusting the settings of the display comprises: adjusting the attribute settings of the display or adjusting the push information displayed by the display.
9. The electronic device of claim 7, further comprising a player, wherein
the adjusting a security level setting of the electronic device based at least on the first state information comprises: adjusting settings of the player according to the first state information;
wherein the adjusting the settings of the player comprises: adjusting the attribute settings of the player or adjusting the audio file played by the player.
10. The electronic device of claim 8, further comprising a wireless transceiver configured to send a request instruction and receive the push information;
the request instruction is used for requesting the push information from another device; and
the processor is further configured to generate the request instruction based on the first state information.
11. The electronic device of claim 9, further comprising a wireless transceiver configured to send a request instruction and receive the audio file;
the request instruction is used for requesting the audio file from another device; and
the processor is further configured to generate the request instruction based on the first state information.
CN201711394687.1A 2017-12-21 2017-12-21 Information processing method and electronic equipment Active CN108153568B (en)

Priority Applications (1)

Application Number: CN201711394687.1A
Priority Date: 2017-12-21
Filing Date: 2017-12-21
Title: Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number: CN201711394687.1A
Priority Date: 2017-12-21
Filing Date: 2017-12-21
Title: Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108153568A (en) 2018-06-12
CN108153568B (en) 2021-04-13

Family

ID=62463989

Family Applications (1)

Application Number: CN201711394687.1A (Active)
Title: Information processing method and electronic equipment
Priority Date: 2017-12-21
Filing Date: 2017-12-21

Country Status (1)

Country Link
CN (1) CN108153568B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359572A (en) * 2018-09-30 2019-02-19 联想(北京)有限公司 Information processing method, device and electronic equipment
CN109857510A (en) * 2019-03-14 2019-06-07 苏州华盖信息科技有限公司 A kind of inter-vehicle information system theme selection method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574386A (en) * 2015-06-16 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Terminal mode management method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102874259B (en) * 2012-06-15 2015-12-09 浙江吉利汽车研究院有限公司杭州分公司 A kind of automobile driver mood monitors and vehicle control system
CN103377293B (en) * 2013-07-05 2016-04-27 河海大学常州校区 The holographic touch interactive exhibition system of multi-source input, information intelligent optimization process
CN105559804A (en) * 2015-12-23 2016-05-11 上海矽昌通信技术有限公司 Mood manager system based on multiple monitoring
CN106445349B (en) * 2016-10-18 2019-06-25 珠海格力电器股份有限公司 A kind of method, apparatus and electronic equipment adjusting mobile terminal system parameter
CN106875885A (en) * 2017-02-14 2017-06-20 广东欧珀移动通信有限公司 Color temperature adjusting method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574386A (en) * 2015-06-16 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Terminal mode management method and apparatus

Also Published As

Publication number Publication date
CN108153568A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
US11100208B2 (en) Electronic device and method for controlling the same
CN102779509B (en) Voice processing equipment and voice processing method
US9547760B2 (en) Method and system for authenticating user of a mobile device via hybrid biometics information
CN109992237B (en) Intelligent voice equipment control method and device, computer equipment and storage medium
CN107832784B (en) Image beautifying method and mobile terminal
WO2021135685A1 (en) Identity authentication method and device
CN104933344A (en) Mobile terminal user identity authentication device and method based on multiple biological feature modals
KR101884291B1 (en) Display apparatus and control method thereof
WO2020207413A1 (en) Content pushing method, apparatus, and device
KR20160147515A (en) Method for authenticating user and electronic device supporting the same
WO2020211387A1 (en) Electronic contract displaying method and apparatus, electronic device, and computer readable storage medium
WO2020192222A1 (en) Method and device for intelligent analysis of user context and storage medium
CN112148922A (en) Conference recording method, conference recording device, data processing device and readable storage medium
CN204791017U (en) Mobile terminal users authentication device based on many biological characteristics mode
CN111626371A (en) Image classification method, device and equipment and readable storage medium
CN112312215B (en) Startup content recommendation method based on user identification, smart television and storage medium
CN108733429A (en) Method of adjustment, device, storage medium and the mobile terminal of system resource configuration
CN110647732B (en) Voice interaction method, system, medium and device based on biological recognition characteristics
CN113033245A (en) Function adjusting method and device, storage medium and electronic equipment
CN108153568B (en) Information processing method and electronic equipment
CN103905837B (en) Image processing method and device and terminal
CN110633677A (en) Face recognition method and device
CN114626036B (en) Information processing method and device based on face recognition, storage medium and terminal
CN103984415B (en) A kind of information processing method and electronic equipment
CN112235602A (en) Personalized screen protection system and method of smart television and smart television

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant