CN112560785A - Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence - Google Patents


Info

Publication number: CN112560785A
Application number: CN202011575165.3A
Authority: CN (China)
Prior art keywords: neural network, face, learning, brightness, training
Inventor: 宋彦震
Current Assignee: Individual (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Individual (application filed by Individual)
Priority date: 2020-12-28 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2020-12-28
Publication date: 2021-03-26
Other languages: Chinese (zh)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/10 Intensity circuits
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00 Aspects of the constitution of display devices
    • G09G2300/02 Composition of display devices

Abstract

The invention divides the face direction into nine classes, takes image information of each face direction as the input parameters before neural network training, and takes the nine face directions as the output parameters before training. The neural network trains on the known input and output parameters, and the learning result serves as the hidden-layer data of the network in the use stage. Picture information of the user, collected by the camera device in real time, is input to the trained neural network; the network automatically matches the new input-layer information against the learning result held in the hidden layer and outputs the current user's face direction at the output layer. After obtaining the face direction, the processor first reads the face direction-display screen ID comparison table, then raises the brightness of the display screen corresponding to the current face direction and lowers the brightness of the remaining display screens. The brightness of each display screen is thus adjusted in real time according to the user's needs, reducing the harm that high-brightness display screens and electromagnetic radiation cause to human eyes.

Description

Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence
Technical Field
The invention relates to the technical field of computer display screen expansion, in particular to a control method for adjusting multi-screen brightness through face tracking based on artificial intelligence.
Background
Computer screen display has four modes: computer-screen-only mode, copy mode, extended mode and second-screen-only mode.
Extended mode: the computer desktop is extended onto the external screen. The extended desktop has no desktop icons or taskbar shortcuts, only a blank desktop, so content shown on the computer screen can be dragged onto the external screen for display. This enlarges the desktop area and makes multi-screen work and study convenient for the user.
In extended mode, the computer sends the information to be displayed to the external display screen through the data line.
A user can add multiple display screens as office and study needs require. When the same computer or host drives several display screens, the limits of the human visual field and of the brain's attention mechanism mean that a person obtains high-density information only in the visual focus area; the farther from the visual focus, the lower the information density the brain receives. Even with several display screens, attention is therefore usually concentrated on one area of one screen.
In the prior art, when several display screens are driven by the same computer or host, all of them are usually kept lit and adjusted to a suitable brightness so that the user can browse information on any screen at any time. High-brightness computer display screens increase electromagnetic radiation and make the eyes more fatigued. Prolonged viewing tires and slackens the eye muscles and the muscles of the intraocular lens, dries the cornea, readily causes eye disease, and dries the facial skin, promoting spots and wrinkles.
Disclosure of Invention
The invention divides the face direction into nine classes, takes image information of each face direction as the input parameters before neural network training, and takes the nine face directions as the output parameters before training. The neural network trains on the known input and output parameters, and the learning result serves as the hidden-layer data of the network in the use stage. Picture information of the user, collected by the camera device in real time, is input to the trained neural network; the network automatically matches the new input-layer information against the learning result held in the hidden layer and outputs the current user's face direction at the output layer. After obtaining the face direction, the processor first reads the face direction-display screen ID comparison table, then raises the brightness of the display screen corresponding to the current face direction and displays the remaining screens at low brightness. The brightness of each display screen is thus adjusted in real time according to the user's needs, reducing the harm that high-brightness display screens and electromagnetic radiation cause to the eyes and facial skin.
The invention is realized by the following technical scheme. The control method for adjusting multi-screen brightness through face tracking based on artificial intelligence comprises the following steps.
Step S11: enter the parameter acquisition stage before neural network training: divide the face direction into nine directions, namely up, down, left, right, upper left, upper right, lower left, lower right and middle, and collect images of the nine face directions as training set samples. The image information for each face direction serves as the input parameters before training, and the nine face directions serve as the output parameters before training.
Step S12: establish a face direction-display screen ID comparison table (this table and the brightness control of step S15 are sketched in code at the end of this section).
Step S13: enter the neural network training stage: the neural network trains on the known input and output parameters, and the learning result serves as the hidden-layer data of the network in the use stage.
Step S14: enter the use stage after training is complete: photo information of the user, collected by the camera device in real time, is preprocessed and input to the trained neural network; the network matches the new input-layer information against the learning result in the hidden layer and outputs the current user's face direction at the output layer.
Step S15: after obtaining the current user's face direction, the processor first reads the stored face direction-display screen ID comparison table, then, through the driving unit, turns on or raises the brightness of the display screen corresponding to the current face direction and turns off or dims the display screens in the remaining directions, so that the brightness of each screen is adjusted in real time according to the user's needs.
In step S11, face pictures rotated leftward by more than 15 degrees are used as training set samples for the left direction, and face pictures rotated rightward by more than 15 degrees are used as training set samples for the right direction, as the labelling sketch below illustrates.
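By way of illustration, the labelling rule could be sketched in Python as below; the pitch threshold, the function name and the label names are assumptions, since the patent states the 15-degree threshold only for the left and right directions.

```python
def direction_label(yaw_deg: float, pitch_deg: float) -> str:
    """Hypothetical labelling rule for training samples. Only the 15-degree
    yaw threshold for left/right is stated in the patent; the pitch
    threshold and the label names are illustrative assumptions."""
    if yaw_deg > 15:
        horiz = "right"           # rotated rightward by more than 15 degrees
    elif yaw_deg < -15:
        horiz = "left"            # rotated leftward by more than 15 degrees
    else:
        horiz = ""
    if pitch_deg > 15:
        vert = "upper" if horiz else "up"
    elif pitch_deg < -15:
        vert = "lower" if horiz else "down"
    else:
        vert = ""
    if horiz and vert:
        return f"{vert}_{horiz}"  # e.g. "upper_left"
    return horiz or vert or "middle"

print(direction_label(20.0, 18.0))  # upper_right
```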
The learning result of the neural network in step S13 is the set of connection weights between the layers of the neural network.
The camera device in step S14 is installed in the middle of the display screen that lies in the user's middle direction and shoots toward the user's face, so that the brightness of each display screen can be adjusted in real time according to the user's needs.
The preprocessing in step S14 mainly includes light compensation, gray-level transformation, histogram equalization, normalization, filtering and sharpening of the face image.
In step S14, the user's picture collected by the camera device is input to the trained neural network; features are extracted by convolution, down-sampled and passed to the next layer; after several convolution, activation and pooling layers, the result is passed to the fully connected layer, which maps it to the final classification result: one of the nine face directions.
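For illustration, steps S12 and S15 could be sketched in Python as below; the screen IDs, the brightness levels and the set_brightness() driver call are assumed placeholders, since the patent does not specify the driving unit's interface.

```python
# Sketch of the face direction-display screen ID comparison table (step S12)
# and of the brightness adjustment of step S15. Screen IDs, brightness
# levels and set_brightness() are hypothetical placeholders.

DIRECTION_TO_SCREEN_ID = {
    "upper_left": 0, "up": 1, "upper_right": 2,
    "left": 3, "middle": 4, "right": 5,
    "lower_left": 6, "down": 7, "lower_right": 8,
}

HIGH_BRIGHTNESS = 100  # percent; assumed value
LOW_BRIGHTNESS = 10    # percent; per step S15 this could also be 0 (screen off)

def set_brightness(screen_id: int, level: int) -> None:
    """Placeholder for the driving unit's brightness interface."""
    print(f"screen {screen_id}: brightness -> {level}%")

def adjust_screens(face_direction: str) -> None:
    """Step S15: raise the watched screen's brightness and dim the rest."""
    target = DIRECTION_TO_SCREEN_ID[face_direction]
    for screen_id in DIRECTION_TO_SCREEN_ID.values():
        level = HIGH_BRIGHTNESS if screen_id == target else LOW_BRIGHTNESS
        set_brightness(screen_id, level)

adjust_screens("middle")  # example: the user faces the middle screen
```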
Drawings
FIG. 1 is a flowchart of the process of the present invention.
Fig. 2 is a schematic diagram of the invention with the same computer extended to 9 display screens for interface display.
In fig. 2, numeral 1 denotes a display screen, and numeral 2 denotes an imaging device.
FIG. 3 is a data structure diagram of the neural network learning phase and the use phase of the present invention.
Detailed Description
The invention is further illustrated with reference to the accompanying drawings and specific examples.
A first embodiment, as shown in fig. 1 and 3.
The training process of the neural network can be divided into the following steps.
The first step: define a neural network containing learnable parameters, i.e. weights.
The second step: input the data into the network for training and compute the loss value.
The third step: back-propagate the gradient to the network's parameters, update the weights accordingly, and train again.
The fourth step: save the finally trained model. A sketch of these four steps follows.
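These four steps could be sketched in PyTorch as below; the stand-in network, the random stand-in data, the hyperparameters and the file name are illustrative assumptions rather than the patent's specified implementation (a fuller nine-direction CNN is sketched after the next paragraph).

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Step 1: define a network with learnable parameters (a tiny stand-in here).
model = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5),  # 64x64 input -> 60x60
    nn.ReLU(),
    nn.MaxPool2d(2),                 # -> 30x30
    nn.Flatten(),
    nn.Linear(6 * 30 * 30, 9),       # nine face directions
)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Stand-ins for a DataLoader of preprocessed face images and direction labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 9, (8,))

for step in range(100):
    # Step 2: input the data into the network and compute the loss value.
    loss = criterion(model(images), labels)
    # Step 3: back-propagate the gradient and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Step 4: save the finally trained model.
torch.save(model.state_dict(), "face_direction.pt")
```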
After the face image is input to the neural network, features are extracted by convolution, down-sampled and passed to the next layer. After several convolution, activation and pooling layers, the result is passed to the fully connected layer, which maps it to the final classification result: one of the nine face directions.
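A network of the shape just described could look like the following sketch; apart from the nine output classes and the six first-layer kernels mentioned in the second embodiment, the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FaceDirectionNet(nn.Module):
    """Convolution -> activation -> pooling, repeated, then fully connected
    layers mapping to the nine face directions. Sizes assume a 64x64 input."""

    def __init__(self, num_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # 6 kernels, 3 channels each -> 60x60
            nn.ReLU(),
            nn.MaxPool2d(2),                  # down-sampling -> 30x30
            nn.Conv2d(6, 16, kernel_size=5),  # second layer: 6-channel kernels -> 26x26
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 13x13
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120),     # fully connected layers
            nn.ReLU(),
            nn.Linear(120, num_classes),      # logits for the nine directions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax is applied afterwards (or implicitly by CrossEntropyLoss).
        return self.classifier(self.features(x))

print(FaceDirectionNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 9])
```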
The saved model is then loaded to perform face direction recognition, as follows.
Picture information of the user, collected by the camera device in real time, is input to the trained neural network; the network automatically matches the new input-layer information against the learning result held in the hidden layer and outputs the current user's face direction at the output layer.
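Loading the saved model and recognizing the face direction could then be sketched as below, reusing the FaceDirectionNet class and the assumed weight file from the sketches above (and assuming the weights were trained and saved with that same class).

```python
import torch

DIRECTIONS = ["up", "down", "left", "right", "upper_left",
              "upper_right", "lower_left", "lower_right", "middle"]

model = FaceDirectionNet()                              # class sketched above
model.load_state_dict(torch.load("face_direction.pt"))  # assumed file name
model.eval()

def recognize_direction(frame: torch.Tensor) -> str:
    """frame: a preprocessed 1 x 3 x 64 x 64 tensor from the camera device."""
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1)  # softmax gives the direction
    return DIRECTIONS[int(probs.argmax(dim=1))]

print(recognize_direction(torch.randn(1, 3, 64, 64)))
```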
A second embodiment.
Because the original image acquired by the camera device is limited by acquisition conditions and subject to random interference, it often cannot be used directly and must first undergo image preprocessing such as gray-scale correction and noise filtering. The preprocessing mainly comprises light compensation, gray-level transformation, histogram equalization, normalization, filtering and sharpening of the face image. In short, the captured image is refined and the detected face is cut out as a picture of fixed size, which facilitates face recognition and processing.
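The preprocessing chain could be sketched with OpenCV as below; the crop size and the concrete filter and sharpening kernels are assumptions, since the patent names only the operation types, and light compensation is approximated here by histogram equalization.

```python
import cv2
import numpy as np

def preprocess_face(image_bgr: np.ndarray, size: int = 64) -> np.ndarray:
    """Sketch of the named steps: gray-level transform, histogram
    equalization (standing in for light compensation), noise filtering,
    sharpening, fixed-size crop and normalization. Parameters are assumed."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # gray-level transform
    equalized = cv2.equalizeHist(gray)                  # histogram equalization
    smoothed = cv2.GaussianBlur(equalized, (3, 3), 0)   # noise filtering
    # Sharpen with a simple Laplacian-style kernel (assumed choice).
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(smoothed, -1, kernel)
    resized = cv2.resize(sharpened, (size, size))       # fixed-size face picture
    return resized.astype(np.float32) / 255.0           # normalization to [0, 1]
```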
In the neural network, an input image of size W × H × 3 is first convolved with convolution kernels and then activated and pooled. The first convolution uses 6 kernels; each kernel has R, G and B channels, so the kernel channel count matches that of the input image, and the output has size W′ × H′ × 6. The output of the first layer has 6 channels, so the input of the second layer has 6 channels and each second-layer kernel has 6 channels. Convolution, activation and pooling are then repeated; the last one or two layers of the model are fully connected and produce the final output, which a softmax function converts into the face direction. The first layer is the input layer, the last layer is the output layer, and the layers in between are hidden layers.
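The channel bookkeeping can be checked with a short sketch; the 32 × 32 input and the 5 × 5 kernel size are assumed values.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)            # a W x H x 3 input image (sizes assumed)
conv1 = nn.Conv2d(3, 6, kernel_size=5)   # 6 kernels, each with R, G, B channels
conv2 = nn.Conv2d(6, 16, kernel_size=5)  # second layer: kernels with 6 channels

y1 = conv1(x)
print(y1.shape)         # torch.Size([1, 6, 28, 28]) -> W' x H' x 6
print(conv2(y1).shape)  # torch.Size([1, 16, 24, 24])
```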
Network parameters are optimized by back propagation to minimize the loss function, so that the model achieves the expected result.
Each layer consists of neurons and the weights between them; the weights are the connections from layer to layer. A weight expresses, as a specific decimal fraction, how strongly one neuron affects another; in actual use, every neuron and every connection carries a specific number.
The convolution operation in a convolutional neural network repeatedly multiplies and accumulates a convolution kernel over the original image; the output is called a feature map. The initial values inside the kernels are generated by random initialization and are then optimized by back propagation to obtain better recognition results.
When the input has multiple channels, the convolution kernel must have the same number of channels. When the input image is a color image, it contains R, G and B channels, so the kernel needs three corresponding channels. During computation, each color channel of the kernel slides over the matching color channel of the input picture: the front kernel channel over the red channel, the middle over the green, and the rear over the blue. The results at corresponding positions of the three channels are summed to give the output, which has a single channel.
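This per-channel slide-and-sum can be written out directly in NumPy; the sketch assumes a stride of 1 and no padding.

```python
import numpy as np

def conv_multichannel(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) color image; kernel: (k, k, 3). Each kernel channel
    slides over the matching color channel; the three results are summed,
    so the output has a single channel (stride 1, no padding)."""
    h, w, _ = image.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k, :]  # k x k x 3 window
            out[i, j] = np.sum(patch * kernel)  # multiply-accumulate over channels
    return out

feature_map = conv_multichannel(np.random.rand(8, 8, 3), np.random.rand(3, 3, 3))
print(feature_map.shape)  # (6, 6) -- one output channel
```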
Pooling is a down-sampling process. Because the input picture is large, down-sampling is needed to reduce picture size and computation, shrink the model, speed up operation, and make the extracted features more robust. A pooling layer samples by either maximum pooling or average pooling.
Activation uses the relu or tanh function. The activation function introduces nonlinearity into the model so that linearly inseparable problems can be solved; without it, a neural network of any depth reduces to a linear mapping, and a purely linear mapping cannot solve linearly inseparable problems.
The hidden layers of the neural network use the relu activation function: relu is simple to compute and speeds up model convergence, and its partial derivative, which is needed during back propagation, avoids the vanishing of gradients during back propagation.
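Maximum pooling, the relu function and the partial derivative used in back propagation can be sketched in NumPy as below; the array sizes are illustrative.

```python
import numpy as np

def max_pool_2x2(x: np.ndarray) -> np.ndarray:
    """2x2 maximum pooling: down-samples by keeping each block's maximum."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x)  # simple to compute

def relu_grad(x: np.ndarray) -> np.ndarray:
    # Partial derivative used in back propagation: 1 where x > 0, else 0.
    # Unlike tanh, it does not shrink toward 0 for large inputs, which
    # helps avoid vanishing gradients.
    return (x > 0).astype(x.dtype)

x = np.random.randn(4, 4)
print(max_pool_2x2(x).shape)  # (2, 2)
print(relu(x), relu_grad(x), sep="\n")
```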
As shown in fig. 3, the neural network has two stages. In the training stage, the input-layer and output-layer parameters are known and the hidden-layer data are unknown; after tens of thousands of training iterations, the learning result is finally stored in the hidden layer. The learning result is the set of weights between the layers of the network.
A third embodiment, as shown in fig. 1-3.
The present invention includes the following steps.
Step S11: enter the parameter acquisition stage before neural network training: divide the face direction into nine directions, namely up, down, left, right, upper left, upper right, lower left, lower right and middle, and collect images of the nine face directions as training set samples. The image information for each face direction serves as the input parameters before training, and the nine face directions serve as the output parameters before training.
Step S12: establish a face direction-display screen ID comparison table.
Step S13: enter the neural network training stage: the neural network trains on the known input and output parameters, and the learning result serves as the hidden-layer data of the network in the use stage.
Supplementary note 1: during training, 50,000 pictures of each face direction are used to train the corresponding face direction model. Face pictures rotated leftward by more than 15 degrees serve as training set samples for the left direction, and face pictures rotated rightward by more than 15 degrees serve as training set samples for the right direction, so that the display screen in each direction is triggered accurately according to the face direction and random triggering of the screens is avoided.
Step S14: enter the use stage after training is complete: photo information of the user, collected by the camera device in real time, is preprocessed and input to the trained neural network; the network matches the new input-layer information against the learning result in the hidden layer and outputs the current user's face direction at the output layer.
Step S15: after obtaining the current user's face direction, the processor first reads the stored face direction-display screen ID comparison table, then, through the driving unit, turns on or raises the brightness of the display screen corresponding to the current face direction and turns off or dims the display screens in the remaining directions, so that the brightness of each screen is adjusted in real time according to the user's needs.
In step S11, face pictures rotated leftward by more than 15 degrees are used as training set samples for the left direction, and face pictures rotated rightward by more than 15 degrees are used as training set samples for the right direction.
The learning result of the neural network in step S13 is the set of connection weights between the layers of the neural network.
The camera device in step S14 is installed in the middle of the display screen that lies in the user's middle direction and shoots toward the user's face, so that the brightness of each display screen can be adjusted in real time according to the user's needs.
The preprocessing in step S14 mainly includes light compensation, gray-level transformation, histogram equalization, normalization, filtering and sharpening of the face image.
In step S14, the user's picture collected by the camera device is input to the trained neural network; features are extracted by convolution, down-sampled and passed to the next layer; after several convolution, activation and pooling layers, the result is passed to the fully connected layer, which maps it to the final classification result: one of the nine face directions.
The use effect is as follows.
When the same computer or host is extended to 9 display screens for interface display, the screens lie in the user's up, down, left, right, upper-left, upper-right, lower-left, lower-right and middle directions, and all screens initially display at low brightness. The camera device is installed in the middle of the upper edge of the middle display screen and collects the current user's facial feature information in real time. When the user's face turns toward the middle screen, the user's photo information is first preprocessed and then input to the trained neural network; the network matches the input-layer facial feature information against the learning result in the hidden layer and outputs the current face direction at the output layer: the middle direction. The processor then reads the stored face direction-display screen ID comparison table, obtains the display screen ID corresponding to the middle direction, raises that screen's brightness through the driving unit, and displays the remaining screens at low brightness. When the user faces the left screen, the left screen is displayed at high brightness and the rest at low brightness; when the user faces the right screen, the right screen is displayed at high brightness and the rest at low brightness. Low brightness is one brightness state of a display screen; the low-brightness state can also be configured as fully off.
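The whole use stage could be wired together as in the sketch below; the camera index, the polling interval and the helpers preprocess_face(), recognize_direction() and adjust_screens() from the earlier sketches are assumptions.

```python
import time
import cv2
import numpy as np
import torch

# Runtime loop for the use stage, reusing preprocess_face(),
# recognize_direction() and adjust_screens() from the sketches above.
cap = cv2.VideoCapture(0)  # camera device above the middle screen (index assumed)

last_direction = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = preprocess_face(frame)  # grayscale 64x64 in [0, 1]
    # Replicate the single gray channel into the 3 channels the model expects.
    tensor = torch.from_numpy(np.repeat(face[None, None], 3, axis=1))
    direction = recognize_direction(tensor)  # one of the nine directions
    if direction != last_direction:          # only act when the direction changes
        adjust_screens(direction)            # high brightness for one screen, low for the rest
        last_direction = direction
    time.sleep(0.2)                          # polling interval (assumed)

cap.release()
```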
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope; all equivalent changes and modifications made according to the present invention fall within the scope of the claims of the present invention.

Claims (6)

1. A control method for adjusting multi-screen brightness based on artificial intelligence face tracking, characterized by comprising the following steps:
step S11: entering the parameter acquisition stage before neural network training: dividing the face direction into nine directions, namely up, down, left, right, upper left, upper right, lower left, lower right and middle, and collecting images of the nine face directions as training set samples; the image information for each face direction serves as the input parameters before training, and the nine face directions serve as the output parameters before training;
step S12: establishing a face direction-display screen ID comparison table;
step S13: entering the neural network training stage: the neural network trains on the known input and output parameters, and the learning result serves as the hidden-layer data of the network in the use stage;
step S14: entering the use stage after training is complete: photo information of the user, collected by the camera device in real time, is preprocessed and input to the trained neural network; the network matches the new input-layer information against the learning result in the hidden layer and outputs the current user's face direction at the output layer;
step S15: after obtaining the current user's face direction, the processor first reads the stored face direction-display screen ID comparison table, then, through the driving unit, turns on or raises the brightness of the display screen corresponding to the current face direction and turns off or dims the display screens in the remaining directions, so that the brightness of each screen is adjusted in real time according to the user's needs.
2. The control method for adjusting multi-screen brightness based on artificial intelligence face tracking according to claim 1, wherein in step S11 face pictures rotated leftward by more than 15 degrees are used as training set samples for the left direction, and face pictures rotated rightward by more than 15 degrees are used as training set samples for the right direction.
3. The control method for adjusting multi-screen brightness based on artificial intelligence face tracking according to claim 1, wherein the learning result of the neural network in step S13 is the set of connection weights between the layers of the neural network.
4. The control method for adjusting multi-screen brightness based on artificial intelligence face tracking according to claim 1, wherein the camera device in step S14 is installed in the middle of the display screen in the user's middle direction and shoots toward the user's face.
5. The control method for adjusting multi-screen brightness based on artificial intelligence face tracking according to claim 1, wherein the preprocessing in step S14 mainly includes light compensation, gray-level transformation, histogram equalization, normalization, filtering and sharpening of the face image.
6. The control method for adjusting multi-screen brightness based on artificial intelligence face tracking according to claim 1, wherein in step S14 the user's picture collected by the camera device is input to the trained neural network; features are extracted by convolution, down-sampled and passed to the next layer; after several convolution, activation and pooling layers, the result is passed to the fully connected layer, which maps it to the final classification result: one of the nine face directions.
CN202011575165.3A 2020-12-28 2020-12-28 Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence Pending CN112560785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011575165.3A CN112560785A (en) 2020-12-28 2020-12-28 Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence


Publications (1)

Publication Number Publication Date
CN112560785A (en) 2021-03-26

Family

ID=75033645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011575165.3A Pending CN112560785A (en) 2020-12-28 2020-12-28 Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112560785A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104076898A (en) * 2013-03-27 2014-10-01 腾讯科技(深圳)有限公司 Method and device for controlling luminance of mobile terminal screen
CN206563943U (en) * 2016-12-30 2017-10-17 河南星云慧通信技术有限公司 A kind of display screen of laptop automatic brightness-regulating system
CN109257484A (en) * 2017-07-14 2019-01-22 西安中兴新软件有限责任公司 A kind of brightness of display screen processing method and terminal device
CN110210555A (en) * 2019-05-29 2019-09-06 西南交通大学 Rail fish scale hurt detection method based on deep learning
CN110647865A (en) * 2019-09-30 2020-01-03 腾讯科技(深圳)有限公司 Face gesture recognition method, device, equipment and storage medium
CN111522619A (en) * 2020-05-03 2020-08-11 宋彦震 Method for automatically reducing refresh frequency of extended screen based on software type and mouse pointer position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
机器之心PRO: "资源 | CPU实时人脸检测，各种朝向都逃不过" ("Resources | Real-time face detection on CPU: no face orientation escapes"), pages 1-4, retrieved from the Internet: <URL:https://baijiahao.baidu.com/s?id=1616176555249899986&wfr=spider&for=pc> *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077048A (en) * 2021-04-09 2021-07-06 上海西井信息科技有限公司 Seal matching method, system, equipment and storage medium based on neural network
CN113170966A (en) * 2021-04-16 2021-07-27 宋彦震 Artificial intelligence-based self-adaptive electric nail polisher control method
WO2022247123A1 (en) * 2021-05-26 2022-12-01 京东方科技集团股份有限公司 Display device, container system, and method for controlling display device
CN114895829A (en) * 2022-07-15 2022-08-12 广东信聚丰科技股份有限公司 Display state optimization method and system based on artificial intelligence
CN114895829B (en) * 2022-07-15 2022-09-27 广东信聚丰科技股份有限公司 Display state optimization method and system based on artificial intelligence

Similar Documents

Publication Publication Date Title
US11295178B2 (en) Image classification method, server, and computer-readable storage medium
US11361192B2 (en) Image classification method, computer device, and computer-readable storage medium
CN112560785A (en) Control method for adjusting multi-screen brightness through face tracking based on artificial intelligence
US20210052135A1 (en) Endoscopic image processing method and system, and computer device
US11644898B2 (en) Eye tracking method and system
WO2020192483A1 (en) Image display method and device
WO2021043273A1 (en) Image enhancement method and apparatus
US8280179B2 (en) Image processing apparatus using the difference among scaled images as a layered image and method thereof
CN106683080B (en) A kind of retinal fundus images preprocess method
WO2021063341A1 (en) Image enhancement method and apparatus
WO2022088665A1 (en) Lesion segmentation method and apparatus, and storage medium
Deligiannidis et al. Emerging trends in image processing, computer vision and pattern recognition
CN111147751B (en) Photographing mode generation method and device and computer readable storage medium
CN111488912B (en) Laryngeal disease diagnosis system based on deep learning neural network
CN110728627A (en) Image noise reduction method, device, system and storage medium
Yeganeh Cross dynamic range and cross resolution objective image quality assessment with applications
WO2024062839A1 (en) Identification device, identification method, and program
Goldbaum et al. Digital image processing for ophthalmology
US20240087190A1 (en) System and method for synthetic data generation using dead leaves images
WO2024080506A1 (en) Method for restoring udc image, and electrode device
KR20240050257A (en) Method and electronic device performing under-display camera (udc) image restoration
CN114546112A (en) Method, device and storage medium for estimating fixation point
WO2023274519A1 (en) Device and method for 3d eyeball modeling
CN116385944A (en) Image frame selection method, device, electronic equipment and storage medium
CN112818782A (en) Generalized silence living body detection method based on medium sensing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination