CN111258414A - Method and device for adjusting screen - Google Patents

Method and device for adjusting screen


Publication number
CN111258414A
CN111258414A (application CN201811459959.6A; granted as CN111258414B)
Authority
CN
China
Prior art keywords
screen
expression
face image
image
determining
Prior art date
Legal status
Granted
Application number
CN201811459959.6A
Other languages
Chinese (zh)
Other versions
CN111258414B (en)
Inventor
朱祥祥
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811459959.6A
Publication of CN111258414A
Application granted
Publication of CN111258414B
Legal status: Active


Classifications

    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (under G06F3/01, input arrangements or combined input and output arrangements for interaction between user and computer)
    • G06V40/168: Feature extraction; Face representation (under G06V40/16, human faces, e.g. facial parts, sketches or expressions)
    • G06V40/174: Facial expression recognition (under G06V40/16, human faces, e.g. facial parts, sketches or expressions)

Abstract

The embodiments of the application disclose a method and a device for adjusting a screen. One embodiment of the method comprises: acquiring at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extracting feature information of the face image; and adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition. This implementation can adjust the screen according to the state of the user's face, increasing the interactivity between the user and the terminal.

Description

Method and device for adjusting screen
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a method and a device for adjusting a screen.
Background
With the development of communication technology, people use terminal devices more and more frequently. Through terminal devices, users can go online to retrieve data, browse pictures or videos, and so on, which brings great convenience to their work and daily life.
However, existing terminal devices cannot adjust the display screen according to the current user's state.
Disclosure of Invention
The embodiment of the application provides a method and a device for adjusting a screen.
In a first aspect, an embodiment of the present application provides a method for adjusting a screen, including: acquiring at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extracting feature information of the face image; and adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition.
In some embodiments, the above method further comprises: and locking the screen in response to determining that the at least one image does not include a face image.
In some embodiments, the extracting the feature information of the face image includes: extracting the expression characteristics of the face image and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
In some embodiments, adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition includes: in response to determining that a preset expression set includes the expression indicated by the expression recognition result, selecting an expression picture corresponding to that expression from a preset expression picture set, and controlling the screen to display the selected expression picture.
In some embodiments, extracting the feature information of the face image includes: extracting eye features of the face image and identifying, according to the extracted eye features, the eye state of the face object indicated by the face image; and determining the degree of eye closure according to the eye state.
In some embodiments, adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition includes: in response to determining that the number of images in which the degree of eye closure is greater than a first preset threshold exceeds a second preset threshold, reducing the display brightness of the screen.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting a screen, including: an image acquisition unit configured to acquire at least one image of a preset space in front of a screen; a feature extraction unit configured to extract feature information of a face image in response to determining that the at least one image includes the face image; and a screen adjusting unit configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition.
In some embodiments, the above apparatus further comprises: a screen locking unit configured to lock the screen in response to determining that the at least one image does not include a face image.
In some embodiments, the above-mentioned feature extraction unit is further configured to: extracting the expression characteristics of the face image and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
In some embodiments, the screen adjustment unit is further configured to: in response to determining that a preset expression set includes the expression indicated by the expression recognition result, select an expression picture corresponding to that expression from a preset expression picture set; and control the screen to display the selected expression picture.
In some embodiments, the feature extraction unit is further configured to: extract eye features of the face image and identify, according to the extracted eye features, the eye state of the face object indicated by the face image; and determine the degree of eye closure according to the eye state.
In some embodiments, the screen adjustment unit is further configured to: in response to determining that the number of images in which the degree of eye closure is greater than a first preset threshold exceeds a second preset threshold, reduce the display brightness of the screen.
In a third aspect, an embodiment of the present application provides an apparatus, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as described in any one of the embodiments of the first aspect.
The method and the device for adjusting the screen according to the above embodiments of the present application may first acquire at least one image of a preset space in front of the screen. Then, when it is determined that the at least one image includes a face image, the feature information of the face image is extracted. Finally, when the feature information is determined to satisfy a preset condition, the display information of the screen is adjusted. The screen can thus be adjusted according to the state of the user's face, which increases the interactivity between the user and the terminal.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for adjusting a screen according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for adjusting a screen according to the present application;
FIG. 4 is a flow diagram of another embodiment of a method for adjusting a screen according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method for adjusting a screen according to the present application;
FIG. 6 is a schematic diagram of an embodiment of an apparatus for adjusting a screen according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the apparatus of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for adjusting a screen or the apparatus for adjusting a screen of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like. An image capturing device (not shown in the figure) may be further disposed near the display screens of the terminal devices 101, 102, and 103, and is configured to capture images in front of the display screens of the terminal devices 101, 102, and 103. The image capturing device may be a camera or a front camera mounted on the terminal device 101, 102, 103, or may be a monitoring camera mounted in a space where the terminal device 101, 102, 103 is located.
The server 105 may be a server that provides various services, such as a feature extraction server that performs feature extraction on face images of users in front of the terminal apparatuses 101, 102, 103. The feature extraction server may perform processing such as analysis on the received data such as an image, and feed back the processing result (e.g., screen adjustment information) to the terminal apparatuses 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the steps included in the method for adjusting a screen provided in the embodiment of the present application may all be executed by the terminal devices 101, 102, and 103, or may all be executed by the server 105. Alternatively, a part of the steps may be performed by the terminal devices 101, 102, 103, and another part of the steps may be performed by the server 105. Accordingly, the units or modules included in the device for adjusting the screen may be all disposed in the terminal apparatuses 101, 102, and 103, or all disposed in the server 105, or a part of the units or modules may be disposed in the terminal apparatuses 101, 102, and 103, and another part of the units or modules may be disposed in the server 105. The above-described system architecture 100 may not include the network 104 and the server 105 when the method for adjusting a screen is performed by the terminal devices 101, 102, 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for adjusting a screen in accordance with the present application is shown. The method for adjusting the screen of the embodiment comprises the following steps:
step 201, at least one image of a preset space in front of a screen is acquired.
In the present embodiment, the execution subject of the method for adjusting a screen (e.g., the terminal devices 101, 102, 103 or the server 105 shown in fig. 1) may acquire at least one image of a preset space in front of the screen through a wired or wireless connection. The screen here refers to the screen of a terminal device. The terminal device may be provided with an image acquisition device, such as a camera, for capturing images of the preset space in front of the screen. Alternatively, a monitoring camera may be installed in the space where the terminal device is located to capture such images. The preset space may be the space within a preset distance in front of the screen, for example, within 1 meter. The execution subject may acquire images of the preset space in real time through the image acquisition device.
It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra-Wideband) connection, and other wireless connection types now known or developed in the future.
Step 202, in response to determining that the at least one image includes a face image, extracting feature information of the face image.
After the execution subject acquires the at least one image, it may determine whether each of the images includes a face image. If the images include a face image, the feature information of the face image is extracted. The feature information may describe the shapes of, and distances between, facial features, and can therefore reflect the user's expression, eye state, degree of smiling, and so on. It should be understood that facial feature extraction is a widely applied technology and is not described in detail here.
In some optional implementations of the embodiment, the screen may be locked if the execution subject determines that none of the at least one image includes a face image.
In this implementation, when none of the at least one image includes a face image, the execution subject may conclude that the user is not in front of the screen or is not watching it, and may lock the screen of the terminal device. This saves power on the one hand and protects the user's privacy on the other.
And step 203, responding to the fact that the characteristic information meets the preset condition, and adjusting display information of the screen.
In this embodiment, after extracting the feature information of the face image, the execution subject may determine whether the feature information satisfies a preset condition. If so, the display information of the screen may be adjusted. The preset condition is a condition imposed on the feature information. For example, when the feature information includes a facial expression, the preset condition may be that the expression is sadness. The display information of the screen may be the content displayed on the screen, the display brightness of the screen, the display contrast of the screen, and the like.
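As an illustration only, the control flow of steps 201 to 203 (together with the optional screen-lock branch described above) might be sketched as follows. The patent does not prescribe any concrete API; the `detect_face`, `extract_features`, and screen-action callables below are hypothetical stand-ins injected by the caller.

```python
def handle_frames(frames, detect_face, extract_features,
                  condition_met, adjust_display, lock_screen):
    """One pass of the method: steps 201-203 plus the lock branch.

    frames           -- images captured from the preset space (step 201)
    detect_face      -- returns a face image (or None) for one frame
    extract_features -- returns feature information for a face image (step 202)
    condition_met    -- tests the preset condition on the feature information
    adjust_display   -- adjusts the screen's display information (step 203)
    lock_screen      -- invoked when no frame contains a face
    """
    faces = [f for f in (detect_face(fr) for fr in frames) if f is not None]
    if not faces:  # no face in any image: optional screen-lock branch
        lock_screen()
        return "locked"
    for face in faces:
        features = extract_features(face)
        if condition_met(features):
            adjust_display(features)
            return "adjusted"
    return "unchanged"
```

Because the detector, extractor, and screen controls are injected, the same skeleton covers both deployment options in fig. 1: all callables local on the terminal device, or the extraction delegated to the server.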
With continued reference to fig. 3, fig. 3 is a schematic diagram of one application scenario of the method for adjusting a screen according to the present embodiment. In the application scenario of fig. 3, the user reads news on a website in front of the screen and shows a smiling expression upon seeing an interesting item. The camera mounted on the screen captures images that include the user's face image, and the expression of the face image is extracted. Then, if the expression satisfies the preset condition, a laughing picture is displayed on the screen.
The method for adjusting the screen according to the above embodiment of the present application may first obtain at least one image of a preset space in front of the screen. Then, when the at least one image is determined to comprise the face image, the feature information of the face image is extracted. And finally, when the characteristic information is determined to meet the preset condition, the display information of the screen is adjusted, so that the screen can be adjusted according to the state of the face, and the interactivity between the user and the terminal is increased.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for adjusting a screen according to the present application is shown. As shown in fig. 4, the method for adjusting a screen of the present embodiment includes the steps of:
step 401, at least one image of a preset space in front of a screen is acquired.
This step is similar to the principle of step 201 shown in fig. 2, and is not described here again.
Step 402, in response to determining that at least one image includes a face image, extracting expression features of the face image and performing expression recognition on the face image according to the extracted expression features to obtain an expression recognition result.
After the execution subject determines that the images include a face image, it may extract the expression features of the face image in order to perform expression recognition and obtain an expression recognition result. The execution subject may implement expression recognition in various ways, for example using a template-matching method, a neural-network-based method, a probabilistic-model-based method, or a support-vector-machine-based method.
In some optional implementations of the embodiment, the executing subject may implement expression recognition on the face image by: and importing the face image into a pre-established expression recognition model to obtain an expression recognition result of the face image. The expression recognition model can be used for representing the corresponding relation between the face image and the expression recognition result.
As an example, the expression recognition model may include a feature extraction part and a correspondence table. The feature extraction part may be used to extract features from the face image to obtain its feature vector. The correspondence table may store correspondences between a number of feature vectors and expression recognition results; it may be prepared in advance by a technician based on statistics over a large number of feature vectors and expression recognition results. In this way, the expression recognition model may first perform feature extraction on the imported face image to obtain a target feature vector. The target feature vector is then compared in turn with the feature vectors in the correspondence table, and if a feature vector in the table is the same as or similar to the target feature vector, the expression recognition result corresponding to that feature vector is taken as the expression recognition result for the target feature vector.
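A minimal sketch of the correspondence-table lookup might look like the following. The description does not specify how "same or similar" is measured, so the cosine-similarity measure and the 0.9 threshold here are assumptions for illustration.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def lookup_expression(target_vector, correspondence_table, threshold=0.9):
    """Return the expression label of the most similar stored vector,
    or None if nothing in the table is 'same or similar' enough.

    correspondence_table -- list of (feature_vector, expression_label) pairs,
                            the hypothetical in-memory form of the table.
    """
    best_label, best_sim = None, threshold
    for vector, label in correspondence_table:
        sim = cosine_similarity(target_vector, vector)
        if sim >= best_sim:
            best_label, best_sim = label, sim
    return best_label
```

Returning None when no entry clears the threshold mirrors the "same as or similar to" wording: an unfamiliar face vector simply produces no recognition result.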
In some alternative implementations, the expression recognition model may be a neural network, the neural network may include an input network, an intermediate network, and an output network, and the input network, the intermediate network, and the output network may include a separable convolution layer and an activation function layer. Here, the neural network may be obtained by training the execution subject or other execution subjects for training the neural network by:
first, a sample set is obtained, where the samples in the sample set may include a sample face image and an expression of a face corresponding to the sample face image. The sample face image may refer to a face image directly captured by an image capture device (e.g., a camera).
Then, the sample face images in the sample set may be used as input, the expression of the face corresponding to each input sample face image may be used as the expected output, and the neural network may be obtained by training. As an example, when training the neural network, the sample face image may first be fed into the initial neural network to obtain a predicted expression for it. Here, the initial neural network refers to an untrained or incompletely trained neural network. Second, the predicted expression for the sample face image is compared with the expected expression, and whether the initial neural network has reached a preset condition is determined from the comparison result. The preset condition may be that the difference between the predicted expression and the expected expression is smaller than a preset difference threshold. Then, in response to determining that the preset condition is reached, the initial neural network may be taken as the trained neural network. Finally, in response to determining that the preset condition is not reached, the network parameters of the initial neural network may be adjusted, and the training process continued with unused samples. As an example, the network parameters may be adjusted using the back-propagation algorithm (BP algorithm) and gradient descent. It should be noted that back-propagation and gradient descent are well-known and widely applied techniques and are not described in detail here.
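The compare-and-adjust training loop described above can be illustrated with a deliberately tiny stand-in model. A full neural network with separable convolution layers is out of scope here, so this sketch trains a one-parameter logistic classifier by gradient descent (the simplest case of back-propagation) and stops once the loss, the "difference" in the text, falls below a preset threshold. The function names and toy data are illustrative, not from the patent.

```python
import math

def train_logistic(samples, labels, lr=0.5, loss_threshold=0.1, max_epochs=5000):
    """Toy stand-in for the training procedure: predict, compare with the
    expected output, and adjust parameters by gradient descent until the
    mean cross-entropy loss drops below a preset threshold."""
    w = [0.0] * len(samples[0])
    b = 0.0
    loss = float("inf")
    for _ in range(max_epochs):
        loss, grad_w, grad_b = 0.0, [0.0] * len(w), 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))          # predicted "expression"
            loss += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
            for i, xi in enumerate(x):              # accumulate gradients
                grad_w[i] += (p - y) * xi
            grad_b += p - y
        n = len(samples)
        loss /= n
        if loss < loss_threshold:                   # preset condition reached
            break
        w = [wi - lr * gi / n for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b, loss
```

The same predict / compare / adjust / stop-at-threshold structure carries over unchanged when the model is a multi-layer network and the gradients come from full back-propagation.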
Step 403, in response to determining that the preset expression set includes the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set.
After expression recognition is performed on the face image, it may be determined whether the expression indicated by the recognition result is included in the preset expression set. If so, an expression picture corresponding to that expression is selected from a preset expression picture set. In this embodiment, the expression set may include expressions such as smiling, laughing, face covering, cheering, and crying. The expression picture set may include pictures for various expressions, for example a laughing picture, a smiling picture, and so on. It should be understood that each expression picture in the set is annotated with an expression label, through which the execution subject can determine which expression the picture represents.
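Step 403 amounts to a membership test followed by a label lookup. A minimal sketch, with a hypothetical pair-list representation for the labeled expression picture set:

```python
def pick_expression_picture(recognized, expression_set, picture_set):
    """Step 403: if the recognized expression is in the preset expression
    set, return a picture whose expression label matches it; otherwise
    return None.

    picture_set -- list of (label, picture) pairs, standing in for the
                   expression pictures annotated with expression labels.
    """
    if recognized not in expression_set:
        return None
    for label, picture in picture_set:
        if label == recognized:
            return picture
    return None
```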
Step 404, controlling the screen to display the selected expression picture.
The execution subject may display the selected expression picture on the screen to enhance interactivity with the user.
In some optional implementations of this embodiment, the execution subject may also set the manner in which the expression picture is displayed. For example, the execution subject may make the expression picture fall from the top of the screen to the bottom, or make it shake several times in the middle of the screen and then gradually fade out, and so on.
The method for adjusting the screen provided by the embodiment of the application can adjust the display picture of the screen according to the expression of the user, and enhances the interaction between the terminal and the user.
With continued reference to FIG. 5, a flow 500 of yet another embodiment of a method for adjusting a screen according to the present application is shown. As shown in fig. 5, the method for adjusting a screen of the present embodiment includes the steps of:
step 501, at least one image of a preset space in front of a screen is acquired.
In this embodiment, the execution subject may control the image acquisition device to capture images of the preset space in front of the screen at a certain acquisition frequency, such that the shooting interval between successive images is less than a preset duration.
Step 502, in response to determining that the at least one image includes a face image, extracting eye features of the face image and identifying a state of eyes of a face object indicated by the face image according to the extracted eye features.
After the execution subject determines that the images include a face image, it may extract the eye features of the face image in order to identify the eye state of the face object indicated by the image. The eye state may be either closed or open. The execution subject may first locate the feature points of the upper and lower eyelids of the face object in each face image, and then determine the distance between the upper- and lower-eyelid feature points. A distance greater than 0 indicates the open state; a distance equal to 0 indicates the closed state.
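Under this criterion, the eye state reduces to comparing the eyelid-point distance with zero. A minimal sketch, where representing the two feature points by their vertical coordinates is an assumed simplification:

```python
def eye_state(upper_eyelid_y, lower_eyelid_y):
    """Step 502: the eye counts as open when the distance between the
    located upper- and lower-eyelid feature points is greater than 0,
    and closed when that distance equals 0."""
    distance = abs(upper_eyelid_y - lower_eyelid_y)
    return "open" if distance > 0 else "closed"
```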
Step 503, determining the degree of eye closure according to the state of the human eyes.
In this embodiment, after determining the eye state of the face object indicated by the face image, the execution subject may determine the degree of eye closure. Specifically, the execution subject may find, across the face images, the maximum distance between the upper- and lower-eyelid feature points, and take this maximum as the eyes being fully open. For each face image, the execution subject may then compute the ratio of the eyelid distance in that image to this maximum distance; the degree of eye closure is 1 minus this ratio.
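The closure-degree computation, 1 minus the ratio of the per-frame eyelid distance to the maximum observed distance, can be sketched as follows; the guard for an all-zero sequence is an added assumption not discussed in the text.

```python
def eye_closure_degrees(eyelid_distances):
    """Step 503: closure degree for each frame, computed as 1 - d / d_max,
    where d is the eyelid distance in that frame and d_max (the largest
    distance observed across the frames) is taken as fully open."""
    d_max = max(eyelid_distances)
    if d_max == 0:
        return [1.0] * len(eyelid_distances)  # eyes closed in every frame
    return [1.0 - d / d_max for d in eyelid_distances]
```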
Step 504, in response to determining that the number of images in which the degree of eye closure is greater than a first preset threshold exceeds a second preset threshold, the display brightness of the screen is reduced.
When the execution subject determines that the degree of eye closure is greater than the first preset threshold, it judges that the user is relatively fatigued. It may then check whether the number of images in which the closure degree exceeds the first preset threshold is greater than the second preset threshold; if so, the user has been fatigued for a relatively long time. In that case, the execution subject may reduce the display brightness of the screen, which avoids irritating the user's eyes with an over-bright screen and also saves power.
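Step 504's two-threshold fatigue test might be sketched as:

```python
def should_dim_screen(closure_degrees, closure_threshold, count_threshold):
    """Step 504: count the frames whose eye-closure degree exceeds the
    first preset threshold; if that count exceeds the second preset
    threshold, the user is judged fatigued and the screen should be dimmed."""
    tired_frames = sum(1 for c in closure_degrees if c > closure_threshold)
    return tired_frames > count_threshold
```

Using a count of frames rather than a single frame makes the decision robust to blinks: one closed-eye frame never triggers dimming on its own.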
The method for adjusting the screen provided by the embodiment of the application can adjust the display brightness of the screen according to the eye state of the user, is beneficial to protecting eyes and saves electric energy.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for adjusting a screen, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for adjusting a screen of the present embodiment includes: an image acquisition unit 601, a feature extraction unit 602, and a screen adjustment unit 603.
An image acquisition unit 601 configured to acquire at least one image of a preset space in front of a screen.
A feature extraction unit 602 configured to, in response to determining that the at least one image includes a face image, extract feature information of the face image.
A screen adjusting unit 603 configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition.
In some optional implementations of the embodiment, the apparatus 600 may further include a screen locking unit, not shown in fig. 6, configured to lock the screen in response to determining that the at least one image does not include the face image.
In some optional implementations of the present embodiment, the feature extraction unit 602 is further configured to: extract expression features of the face image, and perform expression recognition on the face image according to the extracted expression features to obtain an expression recognition result.
In some optional implementations of the present embodiment, the screen adjusting unit 603 is further configured to: in response to determining that a preset expression set includes the expression indicated by the expression recognition result, select an expression picture corresponding to that expression from a preset expression picture set; and control the screen to display the selected expression picture.
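The expression-matching behavior above can be sketched as a lookup from the recognized expression into the preset picture set; the expression labels and file names below are hypothetical stand-ins:

```python
# Hypothetical preset expression set, keyed to a preset picture set.
EXPRESSION_PICTURES = {
    "happy": "happy_emoji.png",
    "sad": "sad_emoji.png",
    "surprised": "surprised_emoji.png",
}

def pick_expression_picture(recognition_result):
    """Return the picture to display when the preset expression set
    contains the recognized expression, or None to leave the screen
    unchanged."""
    return EXPRESSION_PICTURES.get(recognition_result)

picture = pick_expression_picture("happy")  # "happy_emoji.png"
```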
In some optional implementations of the present embodiment, the feature extraction unit 602 is further configured to: extract eye features of the face image and identify, according to the extracted eye features, the eye state of the face object indicated by the face image; and determine the degree of eye closure according to the eye state.
In some optional implementations of the present embodiment, the screen adjusting unit 603 is further configured to: reduce the display brightness of the screen in response to determining that the number of images in which the degree of eye closure is greater than a first preset threshold is greater than a second preset threshold.
The apparatus for adjusting the screen according to the above embodiment of the present application may first acquire at least one image of a preset space in front of the screen. Then, upon determining that the at least one image includes a face image, it extracts the feature information of the face image. Finally, upon determining that the feature information satisfies a preset condition, it adjusts the display information of the screen. The screen can thus be adjusted according to the state of the face, which increases the interactivity between the user and the terminal.
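The acquire → detect → extract → adjust flow of units 601–603 (plus the optional screen-locking unit) can be sketched as below; the detection, extraction, and condition callables stand in for real models and are purely illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ScreenAdjuster:
    """Mirrors units 601-603: acquire images of the space in front of the
    screen, extract feature information from any face image found, and
    adjust the screen when the features meet a preset condition."""
    acquire_images: Callable[[], list]          # image acquisition unit 601
    detect_face: Callable[[object], Optional[object]]
    extract_features: Callable[[object], dict]  # feature extraction unit 602
    condition: Callable[[dict], bool]

    def run(self, adjust, lock):
        images = self.acquire_images()
        faces = [f for f in map(self.detect_face, images) if f is not None]
        if not faces:
            lock()  # optional screen-locking unit: no face in any image
            return "locked"
        for face in faces:
            features = self.extract_features(face)
            if self.condition(features):
                adjust(features)  # screen adjustment unit 603
                return "adjusted"
        return "unchanged"

# Stub wiring: one frame, one face, eye closure above the condition threshold.
adjuster = ScreenAdjuster(
    acquire_images=lambda: ["frame"],
    detect_face=lambda img: "face",
    extract_features=lambda face: {"closure": 0.9},
    condition=lambda feats: feats["closure"] > 0.7,
)
state = adjuster.run(adjust=lambda feats: None, lock=lambda: None)  # "adjusted"
```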
It should be understood that units 601 to 603 recited in the apparatus 600 for adjusting a screen correspond to the respective steps of the method described with reference to fig. 2. Thus, the operations and features described above for the method for adjusting a screen are equally applicable to the apparatus 600 and the units included therein, and are not repeated here.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing the apparatus of an embodiment of the present application. The apparatus shown in fig. 7 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an image acquisition unit, a feature extraction unit, and a screen adjustment unit. Where the names of the units do not in some cases constitute a limitation on the units themselves, for example, the image capturing unit may also be described as a "unit that captures at least one image of a preset space in front of the screen".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extract feature information of the face image; and in response to determining that the feature information satisfies a preset condition, adjust the display information of the screen.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method for adjusting a screen, comprising:
acquiring at least one image of a preset space in front of a screen;
in response to determining that the at least one image includes a face image, extracting feature information of the face image;
and adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition.
2. The method of claim 1, wherein the method further comprises:
in response to determining that the at least one image does not include a facial image, locking the screen.
3. The method of claim 1, wherein the extracting feature information of the face image comprises:
and extracting the expression characteristics of the face image and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
4. The method of claim 3, wherein the adjusting the display information of the screen in response to determining that the characteristic information satisfies a preset condition comprises:
and in response to the fact that the preset expression set comprises the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set and controlling the screen to display the selected expression picture.
5. The method of claim 1, wherein the extracting feature information of the face image comprises:
extracting eye features of the face image and identifying, according to the extracted eye features, the eye state of a face object indicated by the face image;
and determining the degree of eye closure according to the eye state.
6. The method of claim 5, wherein the adjusting the display information of the screen in response to determining that the characteristic information satisfies a preset condition comprises:
and in response to determining that the number of images with the human eye closure degree larger than a first preset threshold is larger than a second preset threshold, reducing the display brightness of the screen.
7. An apparatus for adjusting a screen, comprising:
an image acquisition unit configured to acquire at least one image of a preset space in front of a screen;
a feature extraction unit configured to extract feature information of the face image in response to determining that the at least one image includes a face image;
a screen adjusting unit configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition.
8. The apparatus of claim 7, wherein the apparatus further comprises:
a screen lock unit configured to lock the screen in response to determining that the at least one image does not include a face image.
9. The apparatus of claim 7, wherein the feature extraction unit is further configured to:
and extracting the expression characteristics of the face image and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
10. The apparatus of claim 9, wherein the screen adjustment unit is further configured to:
in response to determining that a preset expression set comprises the expression indicated by the expression recognition result, select an expression picture corresponding to the expression indicated by the expression recognition result from a preset expression picture set;
and control the screen to display the selected expression picture.
11. The apparatus of claim 7, wherein the feature extraction unit is further configured to:
extract eye features of the face image and identify, according to the extracted eye features, the eye state of a face object indicated by the face image;
and determine the degree of eye closure according to the eye state.
12. The apparatus of claim 11, wherein the screen adjustment unit is further configured to:
and in response to determining that the number of images with the human eye closure degree larger than a first preset threshold is larger than a second preset threshold, reducing the display brightness of the screen.
13. An apparatus, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201811459959.6A 2018-11-30 2018-11-30 Method and device for adjusting screen Active CN111258414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811459959.6A CN111258414B (en) 2018-11-30 2018-11-30 Method and device for adjusting screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811459959.6A CN111258414B (en) 2018-11-30 2018-11-30 Method and device for adjusting screen

Publications (2)

Publication Number Publication Date
CN111258414A true CN111258414A (en) 2020-06-09
CN111258414B CN111258414B (en) 2023-08-04

Family

ID=70944774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811459959.6A Active CN111258414B (en) 2018-11-30 2018-11-30 Method and device for adjusting screen

Country Status (1)

Country Link
CN (1) CN111258414B (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013175847A1 (en) * 2012-05-22 2013-11-28 Sony Corporation Image processing device, image processing method, and program
CN103777760A (en) * 2014-02-26 2014-05-07 北京百纳威尔科技有限公司 Method and device for switching screen display direction
CN104460995A (en) * 2014-11-28 2015-03-25 广东欧珀移动通信有限公司 Display processing method, display processing device and terminal
CN104866082A (en) * 2014-02-25 2015-08-26 北京三星通信技术研究有限公司 User behavior based reading method and device
CN105353875A (en) * 2015-11-05 2016-02-24 小米科技有限责任公司 Method and apparatus for adjusting visible area of screen
CN105630143A (en) * 2014-11-18 2016-06-01 中兴通讯股份有限公司 Screen display adjusting method and device
CN105653041A (en) * 2016-01-29 2016-06-08 北京小米移动软件有限公司 Display state adjusting method and device
CN106057171A (en) * 2016-07-21 2016-10-26 广东欧珀移动通信有限公司 Control method and device
EP3154270A1 (en) * 2015-10-08 2017-04-12 Xiaomi Inc. Method and device for adjusting and displaying an image
CN106569611A (en) * 2016-11-11 2017-04-19 努比亚技术有限公司 Apparatus and method for adjusting display interface, and terminal
CN106855744A (en) * 2016-12-30 2017-06-16 维沃移动通信有限公司 A kind of screen display method and mobile terminal
CN107077593A (en) * 2014-07-14 2017-08-18 华为技术有限公司 For the enhanced system and method for display screen
CN107092352A (en) * 2017-03-27 2017-08-25 深圳市金立通信设备有限公司 A kind of screen control method answered based on distance perspective and terminal
CN108037824A (en) * 2017-12-06 2018-05-15 广东欧珀移动通信有限公司 Screen parameter adjusting method, device and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUN Yi: "Research on Android Security Protection Mechanism and Decryption Method", Netinfo Security (信息网络安全), no. 01, pages 71 - 74 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022088254A1 (en) * 2020-10-26 2022-05-05 武汉华星光电技术有限公司 Vehicle-mounted display screen adjustment device and vehicle
CN112416284A (en) * 2020-12-10 2021-02-26 三星电子(中国)研发中心 Method, apparatus, device and storage medium for sharing screen
CN112416284B (en) * 2020-12-10 2022-09-23 三星电子(中国)研发中心 Method, apparatus, device and storage medium for sharing screen

Also Published As

Publication number Publication date
CN111258414B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
CN108509941B (en) Emotion information generation method and device
WO2020000879A1 (en) Image recognition method and apparatus
CN110827378B (en) Virtual image generation method, device, terminal and storage medium
CN107622240B (en) Face detection method and device
US20210042504A1 (en) Method and apparatus for outputting data
CN111476871B (en) Method and device for generating video
CN109214343A (en) Method and apparatus for generating face critical point detection model
US11087140B2 (en) Information generating method and apparatus applied to terminal device
US11461995B2 (en) Method and apparatus for inspecting burrs of electrode slice
CN109308490A (en) Method and apparatus for generating information
CN108388889B (en) Method and device for analyzing face image
CN108133197B (en) Method and apparatus for generating information
CN109271929B (en) Detection method and device
CN109145813B (en) Image matching algorithm testing method and device
CN108399401B (en) Method and device for detecting face image
CN110110666A (en) Object detection method and device
CN112351327A (en) Face image processing method and device, terminal and storage medium
CN108470131B (en) Method and device for generating prompt message
CN111258414B (en) Method and device for adjusting screen
CN113033677A (en) Video classification method and device, electronic equipment and storage medium
CN110570383B (en) Image processing method and device, electronic equipment and storage medium
CN108038473B (en) Method and apparatus for outputting information
CN109949213B (en) Method and apparatus for generating image
CN108256451B (en) Method and device for detecting human face

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant