CN110059666A - Attention detection method and device
- Publication number: CN110059666A
- Application number: CN201910353333.5A
- Authority
- CN
- China
- Prior art keywords
- face image
- information
- eyes
- whole face
- gaze region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0252—Targeted advertisements based on events or environment, e.g. weather or festivals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Finance (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Strategic Management (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Entrepreneurship & Innovation (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Environmental & Geological Engineering (AREA)
- Ophthalmology & Optometry (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
This application discloses an attention detection method and apparatus. The method comprises: obtaining a face image captured while a user is gazing, and/or relative position information of the face within the face image; obtaining whole-face and/or local feature information of the face image; and obtaining the probability of each gaze region according to one or more of the following: the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image. A corresponding apparatus is also disclosed. With the scheme of this application, the face image captured during gazing and/or the relative position information of the face within the face image are fed in end to end, and the probability of each gaze region is obtained, so that accurate and reliable attention detection can be realized.
Description
Technical field
This application relates to the technical field of image processing, and in particular to an attention detection method and apparatus.
Background technique
Research on attention detection is of great significance. Attention detection techniques can be applied in many industries, for example public-transport monitoring systems, advertising-machine monitoring systems, and so on. Taking an advertising-machine monitoring system as an example, the gaze monitoring system of the advertising machine detects pedestrians through a camera, collects the face images of a pedestrian, and analyzes and counts in real time whether the pedestrian is gazing at the advertising machine, which is significant for improving the machine's advertisement delivery strategy. However, attention detection involves many technical difficulties, and no feasible attention detection scheme has been available so far.
Summary of the invention
This application provides an attention detection method and apparatus, so as to realize accurate and reliable attention detection.
In a first aspect, an attention detection method is provided, the method comprising:
obtaining a face image captured while a user is gazing, and/or relative position information of the face within the face image;
obtaining whole-face and/or local feature information of the face image;
obtaining the probability of each gaze region according to one or more of the following: the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image.
In one implementation, obtaining the whole-face and/or local feature information of the face image comprises:
obtaining a whole-face feature map of the face image;
obtaining, from the whole-face feature map of the face image, a whole-face feature vector of the feature map.
In another implementation, obtaining the probability of each gaze region according to the whole-face feature information of the face image comprises:
obtaining the probability of the gaze region according to the whole-face feature vector.
In another implementation, obtaining the whole-face and/or local feature information of the face image comprises:
detecting the position of the eyes in the face image;
determining, according to the position of the eyes in the face image, the position of the eyes in the whole-face feature map.
In another implementation, obtaining the whole-face and/or local feature information of the face image comprises:
obtaining an eye local feature map according to the position of the eyes in the whole-face feature map;
obtaining, from the eye local feature map, an eye local feature vector of that map.
In another implementation, determining the position of the eyes in the whole-face feature map according to their position in the face image comprises:
aligning the position of the eyes in the whole-face feature map with their position in the face image, thereby obtaining the position of the eyes in the whole-face feature map.
In another implementation, obtaining the eye local feature map according to the position of the eyes in the whole-face feature map comprises:
obtaining the whole-face feature map of the face image based on a convolutional neural network;
extracting the eye local feature map from the whole-face feature map according to the position of the eyes in that map.
In another implementation, obtaining the probability of each gaze region according to the local feature information of the face image comprises:
obtaining the probability of the gaze region according to the eye local feature vector.
In another implementation, the method further comprises:
fusing the whole-face feature vector and the eye local feature vector to obtain a fused feature vector.
In another implementation, obtaining the probability of each gaze region according to one or more of the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image comprises:
obtaining the probability of the gaze region according to the fused feature vector and/or the relative position information of the face within the face image.
In another implementation, after obtaining the face image captured while the user is gazing, the method further comprises:
preprocessing the face image.
In another implementation, preprocessing the face image comprises at least one of the following operations:
applying a nonlinear stretch to the face image, redistributing image pixel values so that the probability density of the gray levels of the transformed face image is evenly distributed;
applying distortion correction to the face image.
In another implementation, the relative position information of the face within the face image includes position information of the user standing region relative to the gazed region, and/or head pose information of the user.
In another implementation, the method further comprises:
setting a plurality of evenly distributed points in the user standing region;
collecting, in order, face images of the user at each of the plurality of points;
obtaining the position information of the user at the plurality of points relative to the gazed region, as the position information of the user standing region relative to the gazed region.
In another implementation, the method further comprises:
setting a plurality of evenly distributed points in the user standing region;
randomly selecting a set number of points among the plurality of points;
collecting, in sequence, face images of the user at the selected points;
obtaining the position information of the user at the selected points relative to the gazed region, as the position information of the user standing region relative to the gazed region.
In another implementation, when the gazed region is located in front of the user standing region, the head pose information of the user includes any of: up, down, forward, left, right;
when the gazed region is located to the left of the user standing region, the head pose information of the user includes any of: up, down, forward, left;
when the gazed region is located to the right of the user standing region, the head pose information of the user includes any of: up, down, forward, right.
In a second aspect, an attention detection apparatus is provided, the apparatus comprising:
a first obtaining unit, configured to obtain a face image captured while a user is gazing and/or relative position information of the face within the face image;
a second obtaining unit, configured to obtain whole-face and/or local feature information of the face image;
a third obtaining unit, configured to obtain the probability of each gaze region according to one or more of the following: the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image.
In one implementation, the second obtaining unit is configured to:
obtain a whole-face feature map of the face image;
obtain, from the whole-face feature map of the face image, a whole-face feature vector of the feature map.
In another implementation, the third obtaining unit is configured to:
obtain the probability of the gaze region according to the whole-face feature vector.
In another implementation, the second obtaining unit comprises:
a detection unit, configured to detect the position of the eyes in the face image;
a determination unit, configured to determine the position of the eyes in the whole-face feature map according to their position in the face image.
In another implementation, the second obtaining unit comprises:
a fourth obtaining unit, configured to obtain an eye local feature map according to the position of the eyes in the whole-face feature map;
a fifth obtaining unit, configured to obtain, from the eye local feature map, an eye local feature vector of that map.
In another implementation, the determination unit is configured to:
align the position of the eyes in the whole-face feature map with their position in the face image, thereby obtaining the position of the eyes in the whole-face feature map.
In another implementation, the fourth obtaining unit is configured to:
obtain the whole-face feature map of the face image based on a convolutional neural network;
extract the eye local feature map from the whole-face feature map according to the position of the eyes in that map.
In another implementation, the third obtaining unit is configured to:
obtain the probability of the gaze region according to the eye local feature vector.
In another implementation, the apparatus further comprises:
a fusion unit, configured to fuse the whole-face feature vector and the eye local feature vector to obtain a fused feature vector.
In another implementation, the third obtaining unit is configured to:
obtain the probability of the gaze region according to the fused feature vector and/or the relative position information of the face within the face image.
In another implementation, the apparatus further comprises:
a preprocessing unit, configured to preprocess the face image.
In another implementation, the preprocessing unit is configured to perform at least one of the following operations:
applying a nonlinear stretch to the face image, redistributing image pixel values so that the probability density of the gray levels of the transformed face image is evenly distributed;
applying distortion correction to the face image.
In another implementation, the relative position information of the face within the face image includes position information of the user standing region relative to the gazed region, and/or head pose information of the user.
In another implementation, the apparatus further comprises:
a first setting unit, configured to set a plurality of evenly distributed points in the user standing region;
a first collecting unit, configured to collect, in order, face images of the user at each of the plurality of points;
a sixth obtaining unit, configured to obtain the position information of the user at the plurality of points relative to the gazed region, as the position information of the user standing region relative to the gazed region.
In another implementation, the apparatus further comprises:
a second setting unit, configured to set a plurality of evenly distributed points in the user standing region;
a selecting unit, configured to randomly select a set number of points among the plurality of points;
a second collecting unit, configured to collect, in sequence, face images of the user at the selected points;
a seventh obtaining unit, configured to obtain the position information of the user at the selected points relative to the gazed region, as the position information of the user standing region relative to the gazed region.
In another implementation, when the gazed region is located in front of the user standing region, the head pose information of the user includes any of: up, down, forward, left, right;
when the gazed region is located to the left of the user standing region, the head pose information of the user includes any of: up, down, forward, left;
when the gazed region is located to the right of the user standing region, the head pose information of the user includes any of: up, down, forward, right.
In a third aspect, an attention detection apparatus is provided, the apparatus comprising: an input device, an output device, a memory, and a processor, wherein the memory stores a set of program code, and the processor is configured to call the program code stored in the memory to execute the method described in the first aspect or any of its possible implementations.
In a fourth aspect, an attention detection apparatus is provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute, when running the executable instructions of the memory, the method described in the first aspect or any of its possible implementations.
In a fifth aspect, a computer-readable storage medium is provided, the storage medium storing instructions which, when run on a computer, cause the computer to execute the method described in the first aspect or any of its possible implementations.
In a sixth aspect, a computer program product comprising instructions is provided which, when run on a computer, causes the computer to execute the method described in the first aspect or any of its possible implementations.
The scheme of this application has the following beneficial effects:
in an end-to-end manner, the face image captured during gazing and/or the relative position information of the face within the face image are fed in, and the probability of each gaze region is obtained, so that accurate and reliable attention detection can be realized.
Description of the drawings
To describe the technical solutions in the embodiments of this application or in the background more clearly, the drawings needed in the embodiments or in the background are described below.
Fig. 1 is a schematic flowchart of an attention detection method provided by an embodiment of this application;
Fig. 2 is a schematic diagram of an image acquisition scenario;
Fig. 3 is a schematic diagram of the division of gaze regions;
Fig. 4 is a schematic flowchart of another attention detection method provided by an embodiment of this application;
Fig. 5 is a schematic before-and-after comparison of an exemplary nonlinear image stretch;
Fig. 6 is a schematic before-and-after comparison of exemplary image distortion correction;
Fig. 7 is a detailed structural diagram of an exemplary neural network for attention detection based on a convolutional neural network;
Fig. 8 is a schematic flowchart of another attention detection method provided by an embodiment of this application;
Fig. 9 is a detailed structural diagram of another exemplary neural network for attention detection based on a convolutional neural network;
Fig. 10 is a schematic flowchart of another attention detection method provided by an embodiment of this application;
Fig. 11 is a detailed structural diagram of another exemplary neural network for attention detection based on a convolutional neural network;
Fig. 12 is a schematic flowchart of another attention detection method provided by an embodiment of this application;
Fig. 13 is a detailed structural diagram of another exemplary neural network for attention detection based on a convolutional neural network;
Fig. 14 is a schematic structural diagram of an attention detection apparatus provided by an embodiment of this application;
Fig. 15 is another schematic structural diagram of an attention detection apparatus provided by an embodiment of this application.
Detailed description of the embodiments
The embodiments of this application are described below with reference to the drawings.
Attention detection based on deep convolutional neural networks uses a deep convolutional neural network to estimate a pedestrian's line of sight, and then applies a gaze decision strategy to judge which region the pedestrian is gazing at. A pedestrian attention detection system based on deep convolutional neural networks faces many technical difficulties, such as data acquisition, model training, and the choice of the gaze decision strategy.
According to the attention detection method and apparatus provided by the embodiments of this application, in an end-to-end manner, the face image captured during gazing and/or the relative position information of the face within the face image are fed in, and the probability of each gaze region is obtained, so that accurate and reliable attention detection can be realized.
It should be understood that, when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification are merely for the purpose of describing particular embodiments and are not intended to limit this application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be construed, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be construed, depending on the context, to mean "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
As shown in Fig. 1, which is a schematic flowchart of an attention detection method provided by an embodiment of this application, the method illustratively includes the following steps:
S101. Obtain a face image captured while the user is gazing, and/or the relative position information of the face within the face image.
First, the face image and the relative position information of the face within the face image are acquired. The relative position information of the face within the face image includes the position information of the user standing region relative to the gazed region and/or the head pose information of the user. In the image acquisition scenario shown in Fig. 2, the person stands in front of the gaze region. It should be noted that "in front of" here does not necessarily mean directly in front; any position facing the gaze region will do.
As shown in Fig. 2, a region can be delimited around the positions where people stand, i.e. the standing region, which refers to the positions where a user stands in front of the gaze region. A person may be located at any position within the standing region. To make the acquired data more comprehensive, the selection of the user's standing position is divided into two strategies: a fixed-point strategy and a random-point strategy. The fixed-point strategy sets several evenly distributed points in the region where the user stands, collects the user's face images at these points one by one in order, and obtains the position information of the user at these points relative to the gazed region, as the position information of the user standing region relative to the gazed region. The random-point strategy subdivides the region where the user stands into several smaller regions, randomly selects a set number of points among these points, collects the user's face images at the selected points in sequence, and obtains the position information of the user at the selected points relative to the gazed region, as the position information of the user standing region relative to the gazed region.
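The two acquisition strategies can be sketched as follows. This is a minimal illustration only: the standing-region dimensions and grid density are not specified by the application, and `grid_points`, `fixed_point_strategy` and `random_point_strategy` are hypothetical names.

```python
import random

def grid_points(width, height, nx, ny):
    """Evenly distributed points covering a standing region of the given size."""
    xs = [width * (i + 0.5) / nx for i in range(nx)]
    ys = [height * (j + 0.5) / ny for j in range(ny)]
    return [(x, y) for y in ys for x in xs]

def fixed_point_strategy(width, height, nx, ny):
    """Fixed-point strategy: visit every grid point, in order."""
    return grid_points(width, height, nx, ny)

def random_point_strategy(width, height, nx, ny, k, seed=0):
    """Random-point strategy: randomly select k of the grid points."""
    rng = random.Random(seed)
    return rng.sample(grid_points(width, height, nx, ny), k)

# An assumed 2 m x 1 m standing region sampled on a 4 x 2 grid:
fixed = fixed_point_strategy(2.0, 1.0, 4, 2)
chosen = random_point_strategy(2.0, 1.0, 4, 2, k=3)
```

At each returned point, a face image would be collected and the point's coordinates recorded as the position of the standing region relative to the gazed region.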
Fig. 3 is a schematic diagram of the division of gaze regions. A gaze region refers to a region toward which the user's line of sight is directed, and is divided into several regions. For data comprehensiveness, several evenly distributed points are set within each gaze region, and a buffer zone is set up between adjacent gaze regions. For a different application scenario, only the setting of the gaze regions needs to be modified. Taking the gaze monitoring system of an advertising machine as an example, as shown in Fig. 3, the gaze regions can be divided into a gazed region and a non-gazed region, with a buffer zone set up between them; other application scenarios can be handled analogously.
Besides the gaze region, the standing region affects the acquisition of face images and the detection result, and so does the user's head pose. The head pose refers to the orientation of the head while the user is gazing at an observation point. According to the relationship between the user's standing position and the observation position: when the observation point is in front of the standing point, the head poses to be collected are up, down, forward, left and right; when the observation point is to the left of the standing point, the head poses to be collected are up, down, forward and left; when the observation point is to the right of the standing point, the head poses to be collected are up, down, forward and right.
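The pose sets above reduce to a direct lookup; the sketch below is illustrative only, and `candidate_head_poses` and its string labels are assumed names, not part of the application.

```python
def candidate_head_poses(region_side):
    """Head poses to collect, given where the gazed region lies
    relative to the standing point: 'front', 'left' or 'right'."""
    poses = {
        "front": ["up", "down", "forward", "left", "right"],
        "left":  ["up", "down", "forward", "left"],
        "right": ["up", "down", "forward", "right"],
    }
    return poses[region_side]
```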
Cameras can be installed at one or more positions in the gaze region to collect the face images of the user while gazing at the gaze region. When collecting a face image, the position of the face bounding box can be regarded as the position information of the user standing region relative to the gazed region, together with the user's head pose, etc.
S102. Obtain the whole-face and/or local feature information of the face image.
The entire face image captured while the user is gazing carries certain feature information; by extracting this feature information, it can be known that the user is gazing at a certain region. In addition, the face image contains important local feature information, such as eye local feature information. Since the eyes are the most important organ in gazing, whether the user is gazing at a certain region can be known more accurately from the eye local feature information.
S103. Obtain the probability of each gaze region according to one or more of the following: the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image.
After one or more of the whole-face feature information of the face image, the local feature information of the face image, and the relative position information of the face within the face image are obtained, the one or more pieces of information can be fed into a classifier, which yields the probabilities of the gazed and non-gazed regions, and the number of the region with the highest probability is output. For the monitoring system of an advertising machine, the gaze regions are only on the advertisement screen and off the advertisement screen, i.e. gazing and not gazing. Instead of directly outputting the region with the highest probability, a threshold of 0.5 can be used: when the probability of gazing is greater than 0.5, the result is judged as gazing; otherwise it is judged as not gazing. It should be noted that different thresholds can be set for different application scenarios; the threshold can be estimated from experience, or obtained accurately by collecting some test data.
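The classifier output and threshold decision described above can be sketched as follows. The scores are made up for illustration, and the softmax mapping is a common choice for turning classifier scores into probabilities, not one the application mandates.

```python
import math

def softmax(logits):
    """Convert classifier scores into region probabilities."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decide_gaze(p_gaze, threshold=0.5):
    """Two-region case (advertising machine): gazing vs. not gazing."""
    return "gazing" if p_gaze > threshold else "not gazing"

def best_region(probs):
    """Multi-region case: output the number of the most probable region."""
    return max(range(len(probs)), key=lambda i: probs[i])

probs = softmax([2.0, 0.5])    # assumed [gaze, non-gaze] scores
label = decide_gaze(probs[0])  # compare against the 0.5 threshold
```

Per the text, the `threshold` parameter would be tuned per scenario, from experience or from held-out test data.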
Further, as shown in Fig. 2, the gaze region can be further divided into multiple small regions; for a trained convolutional neural network, the probabilities of all region numbers corresponding to a face image can also be output, and the region with the highest probability is taken as the final output.
It can be seen that, by inputting a face image and the relative position information of the face within the face image, the probability of the gaze region can be obtained, thus realizing end-to-end attention detection.
According to the attention detection method provided by this embodiment of the application, in an end-to-end manner, the face image captured during gazing and/or the relative position information of the face within the face image are fed in, and the probability of each gaze region is obtained, so that accurate and reliable attention detection can be realized.
As shown in Fig. 4, which is a schematic flowchart of another attention detection method provided by an embodiment of this application, the method illustratively includes the following steps:
S201. Obtain a face image captured while the user is gazing.
For the specific implementation of this step, refer to step S101 of the embodiment shown in Fig. 1.
S202. Preprocess the face image.
The captured face image may have problems such as the most important parts being too dark or severe distortion at the image edges; therefore, the face image can be preprocessed.
In the first case, the cameras commonly used for pedestrian monitoring are wide-angle cameras, so the face occupies a low proportion of the entire image, i.e. the resolution is low, and an image cropped down to the eye region would be very blurry. In addition, since lighting conditions vary greatly, shadows sometimes form on the face, making the most important eye region darker and affecting the accuracy of the final result. In one implementation, this step specifically comprises: applying a nonlinear stretch to the face image and redistributing the image pixel values so that the probability density of the gray levels of the transformed face image is evenly distributed. Fig. 5 is a schematic before-and-after comparison of an exemplary nonlinear image stretch. Histogram equalization can be used to stretch the image nonlinearly, redistributing image pixel values so that the probability density of the transformed image's gray levels is evenly distributed. This increases the dynamic range of the image gray levels and improves the contrast of the image.
In the second case, another problem brought by wide-angle cameras is severe distortion at the image edges, which causes large changes in the face and eye images; the deformation and distortion of the image also seriously interfere with the judgment of the gaze region. In another implementation, this step specifically comprises: applying distortion correction to the face image. Fig. 6 is a schematic before-and-after comparison of exemplary image distortion correction; by applying distortion correction to the captured face image, the above problems of deformation and distortion are overcome.
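The histogram equalization described above can be sketched in a few lines. This is a generic textbook implementation, not the applicant's code, and the sample image is fabricated for illustration.

```python
import numpy as np

def equalize_histogram(img):
    """Nonlinear stretch via histogram equalization on an 8-bit grayscale
    image: pixel values are remapped through the normalized cumulative
    histogram so the gray-level distribution becomes roughly uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # classic equalization mapping onto the full 0..255 range
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A dark image occupying only gray levels 40..79 is stretched to use 0..255:
dark = np.tile(np.arange(40, 80, dtype=np.uint8), (16, 1))
out = equalize_histogram(dark)
```

After the remapping, the gray levels span the full dynamic range, which is the contrast improvement the text refers to.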
S203: obtaining a whole-face feature map of the facial image.

The entire facial image captured while the user is gazing carries characteristic information; by extracting this information, it can be determined which region the user is gazing at.

In one implementation, the whole-face feature map of the facial image may be obtained based on a convolutional neural network. A convolutional neural network is a feedforward neural network with a deep structure that involves convolution operations, and is one of the representative algorithms of deep learning. Convolutional neural networks imitate the visual perception mechanism of biological systems; the sharing of convolution-kernel parameters within a hidden layer and the sparsity of inter-layer connections allow a convolutional neural network to learn grid-like features, such as pixels and audio, with a small amount of computation and a stable effect.
One or more facial images may be input to train the convolutional neural network used for attention detection, so that feature vectors of facial images can be extracted by the trained network. The feature vector of a captured facial image covers multiple feature dimensions, i.e. multiple features of the facial image, such as the angle between the head and the gaze region and other facial features.

Fig. 7 is a detailed structural diagram of a neural network for attention detection based on a convolutional neural network. The captured facial image is input into a deep convolutional neural network, and several whole-face feature maps containing multiple kinds of characteristic information can be obtained.
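The patent relies on a trained deep network; purely to illustrate what "several whole-face feature maps" means, a single convolution layer applying a bank of kernels to one image can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def conv2d_bank(image: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D convolution (cross-correlation) of a single-channel
    image with a bank of K kernels, yielding K feature maps.
    image: (H, W); kernels: (K, kh, kw); returns (K, H-kh+1, W-kw+1)."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], h, w))
    for k, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                # Each kernel is slid over the whole image: the parameter
                # sharing mentioned in the text.
                out[k, i, j] = (image[i:i + kh, j:j + kw] * kern).sum()
    return out
```

Because every kernel reuses the same weights at every spatial position, the layer needs far fewer parameters than a fully connected one, which is the source of the "smaller calculation amount" noted above.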
S204: obtaining, according to the whole-face feature map of the facial image, a whole-face feature vector of the whole-face feature map.

Still referring to Fig. 7, convolution operations are performed on the several feature maps to obtain the whole-face feature vector of the face.
S205: obtaining the probability of the gaze region according to the whole-face feature vector.

After the whole-face feature vector of the facial image is obtained, it can be input into a classifier to obtain the probabilities of the gaze region and the non-gaze regions, and the number of the region with the highest probability is taken as the output. In the specific case of the monitoring system of an advertising machine, there are only two regions, on the advertisement screen and off it, i.e. gazing and not gazing. Taking the region with the highest probability as the output then amounts to a threshold of 0.5: when the gazing probability exceeds 0.5, the user is judged to be gazing; otherwise, not gazing. It should be noted that different thresholds can be set for different application scenarios; a threshold may be set by empirical estimation, or determined accurately by collecting some test data.
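For the two-region advertising-machine case described above, the classifier step reduces to a softmax over two scores followed by a threshold; a minimal sketch (assuming logits ordered as (gazing, not gazing); names are ours):

```python
import numpy as np

def gaze_decision(logits: np.ndarray, threshold: float = 0.5) -> bool:
    """Turn classifier logits for (gazing, not-gazing) into probabilities
    via softmax and report gazing when its probability exceeds the
    threshold. The default 0.5 reproduces 'take the highest-probability
    region'; other scenarios may raise or lower it, as the text notes."""
    z = logits - logits.max()               # subtract max for stability
    probs = np.exp(z) / np.exp(z).sum()
    return bool(probs[0] > threshold)
```

Raising the threshold above 0.5 makes the detector more conservative, which may suit deployments where false "gazing" reports are costly.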
According to the attention detection method provided by the embodiments of the present application, an end-to-end approach is adopted: the facial image captured while the user gazes is input, and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
Fig. 8 is a schematic flowchart of another attention detection method provided by the embodiments of the present application, in which:

S301: obtaining a facial image captured while the user gazes.

For the specific implementation of this step, refer to step S101 of the embodiment shown in Fig. 1 or step S201 of the embodiment shown in Fig. 4.

S302: preprocessing the facial image.

For the specific implementation of this step, refer to step S202 of the embodiment shown in Fig. 4.

S303: detecting the position of the eyes in the facial image.

Fig. 9 is a detailed structural diagram of another neural network for attention detection based on a convolutional neural network. Since the eyes are the most critical part for attention detection, the position of the eyes in the facial image can be marked.

S304: determining the position of the eyes in the whole-face feature map according to the position of the eyes in the facial image.

The whole-face feature map of the facial image is obtained based on the convolutional neural network, and the position of the eyes in the whole-face feature map is determined from the detected position of the eyes in the facial image. Specifically, as shown in Fig. 9, the position of the eyes in the facial image is aligned with the position of the eyes in the whole-face feature map to obtain the position of the eyes in the whole-face feature map. The marked position does not change when the convolution operations are performed, so the position of the eyes is available in every whole-face feature map.
S305: obtaining eye-local feature maps according to the position of the eyes in the whole-face feature map.

Specifically, the eye-local feature maps within the whole-face feature maps are obtained from the position of the eyes in the whole-face feature maps.

S306: obtaining, according to the eye-local feature maps, the eye-local feature vectors of the eye-local feature maps.

Convolution operations are performed on the several eye-local feature maps to obtain the eye-local feature vectors.
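Steps S304 to S306 amount to projecting the detected eye box onto the feature maps and cropping; a sketch assuming the network's cumulative stride is known (the box format and names are ours):

```python
import numpy as np

def crop_eye_features(feature_maps: np.ndarray, eye_box: tuple,
                      stride: int) -> np.ndarray:
    """Project an eye bounding box (x0, y0, x1, y1), given in input-image
    pixels, onto feature maps produced with cumulative stride `stride`,
    and crop the eye-local feature maps (C, h, w)."""
    x0, y0, x1, y1 = (v // stride for v in eye_box)
    return feature_maps[:, y0:y1, x0:x1]
```

Because convolution preserves spatial layout, the same scaled coordinates pick out the eye region in every one of the C feature maps at once.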
S307: obtaining the probability of the gaze region according to the eye-local feature vectors.

As noted above, the eyes are the key part for attention detection; therefore the probability of the gaze region can be obtained accurately from the eye-local feature vectors. For the specific implementation of obtaining the probability of the gaze region from the eye-local feature vectors, refer to step S103 shown in Fig. 1 or step S205 of the embodiment shown in Fig. 4.

According to the attention detection method provided by the embodiments of the present application, the eye-local features in the facial image captured while the user gazes are analyzed based on a convolutional neural network; an end-to-end approach is adopted, so that the facial image is input and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
Fig. 10 is a schematic flowchart of another attention detection method provided by the embodiments of the present application, in which:

S401: obtaining a facial image captured while the user gazes and relative position information of the face in the facial image.

For the specific implementation of this step, refer to step S101 shown in Fig. 1.

S402: preprocessing the facial image.

For the specific implementation of this step, refer to step S202 shown in Fig. 4.

S403: obtaining, based on a convolutional neural network, the whole-face feature map of the facial image, and obtaining, according to the whole-face feature map of the facial image, the whole-face feature vector of the whole-face feature map.

For the specific implementation of this step, refer to steps S203 and S204 shown in Fig. 4.

S404: detecting the position of the eyes in the facial image.

For the specific implementation of this step, refer to step S303 of the embodiment shown in Fig. 8.

S405: determining the position of the eyes in the whole-face feature map according to the position of the eyes in the facial image.

For the specific implementation of this step, refer to step S304 shown in Fig. 8.

S406: obtaining the eye-local feature maps according to the position of the eyes in the whole-face feature map.

For the specific implementation of this step, refer to step S305 shown in Fig. 8.

S407: obtaining, according to the eye-local feature maps, the eye-local feature vectors of the eye-local feature maps.

For the specific implementation of this step, refer to step S306 shown in Fig. 8.
S408: fusing the whole-face feature vector and the eye-local feature vectors to obtain a fused feature vector.

Fig. 11 is a detailed structural diagram of another neural network for attention detection based on a convolutional neural network; the whole-face feature vector and the eye-local feature vectors can be fused to obtain a fused feature vector. The fused feature vector includes not only the feature information of the eye region but also the feature information of the whole face. The whole-face feature information assists the attention detection to some extent and can make the detection result more accurate.
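The patent does not name the fusion operator; the simplest common choice, and the one sketched here as an assumption, is concatenation, which keeps both sources of information intact:

```python
import numpy as np

def fuse_features(face_vec: np.ndarray, eye_vec: np.ndarray) -> np.ndarray:
    """Fuse the whole-face feature vector with the eye-local feature
    vector by concatenation; a downstream classifier sees both at once."""
    return np.concatenate([face_vec, eye_vec])
```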
S409: obtaining the probability of the gaze region according to the fused feature vector.

According to the fused feature vector, the probability of the gaze region can be obtained accurately. For how the probability of the gaze region is obtained, refer to step S103 of the embodiment shown in Fig. 1, step S205 of the embodiment shown in Fig. 4 or step S307 of the embodiment shown in Fig. 8.

According to the attention detection method provided by the embodiments of the present application, an end-to-end approach is adopted: the facial image captured while the user gazes is input, and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
Fig. 12 is a schematic flowchart of another attention detection method provided by the embodiments of the present application, in which:

S501: obtaining a facial image captured while the user gazes and relative position information of the face in the facial image.

For the specific implementation of this step, refer to step S101 shown in Fig. 1.

S502: preprocessing the facial image.

For the specific implementation of this step, refer to step S202 shown in Fig. 4.

S503: obtaining the whole-face feature map of the facial image, and obtaining, according to the whole-face feature map of the facial image, the whole-face feature vector of the whole-face feature map.

For the specific implementation of this step, refer to steps S203 and S204 shown in Fig. 4.

S504: detecting the position of the eyes in the facial image.

For the specific implementation of this step, refer to step S303 of the embodiment shown in Fig. 8.

S505: determining the position of the eyes in the whole-face feature map according to the position of the eyes in the facial image.

For the specific implementation of this step, refer to step S304 shown in Fig. 8.

S506: obtaining the eye-local feature maps according to the position of the eyes in the whole-face feature map.

For the specific implementation of this step, refer to step S305 shown in Fig. 8.

S507: obtaining, according to the eye-local feature maps, the eye-local feature vectors of the eye-local feature maps.

For the specific implementation of this step, refer to step S306 shown in Fig. 8.
S508: fusing the whole-face feature vector and the eye-local feature vectors to obtain a fused feature vector.

Fig. 13 is a detailed structural diagram of another neural network for attention detection based on a convolutional neural network; the whole-face feature vector and the eye-local feature vectors can be fused to obtain a fused feature vector. The fused feature vector includes not only the feature information of the eye region but also the feature information of the whole face. The whole-face feature information assists the attention detection to some extent and can make the detection result more accurate.

In addition, besides obtaining the feature vector of the facial image based on the convolutional neural network, the relative position information of the face in the facial image can also be acquired separately.
S509: obtaining the probability of the gaze region according to the fused feature vector and the relative position information of the face in the facial image.

According to the fused feature vector and the relative position information of the face in the facial image, the probability of the gaze region can be obtained accurately. For how the probability of the gaze region is obtained, refer to step S103 of the embodiment shown in Fig. 1, step S205 of the embodiment shown in Fig. 4, step S307 of the embodiment shown in Fig. 8 or step S409 of the embodiment shown in Fig. 10.

According to the attention detection method provided by the embodiments of the present application, an end-to-end approach is adopted: the facial image captured while the user gazes and the relative position information of the face in the facial image are input, and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
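How the relative position information is combined with the fused feature vector is not specified in the text; one plausible sketch appends a standing-area offset and a one-hot head-pose code to the vector before classification (the encoding and names are our assumption):

```python
import numpy as np

# Head poses enumerated by the embodiments: up, down, forward, left, right.
POSES = ("up", "down", "forward", "left", "right")

def build_classifier_input(fused_vec: np.ndarray, region_offset,
                           head_pose: str) -> np.ndarray:
    """Append the user's standing-area offset relative to the watched
    region and a one-hot head-pose code to the fused feature vector,
    forming the input for the gaze-region classifier."""
    pose = np.zeros(len(POSES))
    pose[POSES.index(head_pose)] = 1.0
    return np.concatenate([fused_vec,
                           np.asarray(region_offset, float),
                           pose])
```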
Based on the same concept as the attention detection methods in the above embodiments, as shown in Fig. 13, an embodiment of the present application further provides an attention detection device 1100, which can be applied in the methods shown in Figs. 1, 4, 8, 10 and 12 above. The device 1100 includes a first obtaining unit 111, a second obtaining unit 112 and a third obtaining unit 113, and may further include a fusion unit 114, a preprocessing unit 115, a first setting unit 116, a first collecting unit 117 and a sixth obtaining unit 118. Illustratively:

the first obtaining unit 111 is configured to obtain a facial image captured while a user gazes and/or relative position information of the face in the facial image;

the second obtaining unit 112 is configured to obtain whole-face and/or local feature information of the facial image;

the third obtaining unit 113 is configured to obtain the probability of the gaze region according to one or more of the following: the whole-face feature information of the facial image, the local feature information of the facial image, and the relative position information of the face in the facial image.
In one implementation, the second obtaining unit 112 is configured to:

obtain the whole-face feature map of the facial image; and

obtain, according to the whole-face feature map of the facial image, the whole-face feature vector of the whole-face feature map.

In another implementation, the third obtaining unit 113 is configured to:

obtain the probability of the gaze region according to the whole-face feature vector.

In another implementation, the second obtaining unit 112 includes (not shown):

a detection unit, configured to detect the position of the eyes in the facial image; and

a determination unit, configured to determine the position of the eyes in the whole-face feature map according to the position of the eyes in the facial image.

In another implementation, the second obtaining unit 112 includes:

a fourth obtaining unit, configured to obtain the eye-local feature maps according to the position of the eyes in the whole-face feature map; and

a fifth obtaining unit, configured to obtain, according to the eye-local feature maps, the eye-local feature vectors of the eye-local feature maps.

In another implementation, the determination unit 1122 is configured to:

align the position of the eyes in the facial image with the position of the eyes in the whole-face feature map to obtain the position of the eyes in the whole-face feature map.

In another implementation, the fourth obtaining unit 1123 is configured to:

obtain the whole-face feature map of the facial image; and

obtain, according to the position of the eyes in the whole-face feature map, the eye-local feature maps in the whole-face feature map.

In another implementation, the third obtaining unit 113 is configured to:

obtain the probability of the gaze region according to the eye-local feature vectors.
In another implementation, the device further includes:

a fusion unit 114, configured to fuse the whole-face feature vector and the eye-local feature vectors to obtain a fused feature vector.

In another implementation, the third obtaining unit 113 is configured to:

obtain the probability of the gaze region according to the fused feature vector and/or the relative position information of the face in the facial image.

In another implementation, the device further includes:

a preprocessing unit 115, configured to preprocess the facial image.

In another implementation, the preprocessing unit 115 is configured to perform at least one of the following operations:

performing nonlinear stretching on the facial image and redistributing the image pixel values so that the probability density of the gray levels of the transformed facial image is uniformly distributed;

performing distortion correction on the facial image.

In another implementation, the relative position information of the face in the facial image includes position information of the user standing area relative to the watched region and/or head pose information of the user.
In another implementation, the device further includes:

a first setting unit 116, configured to set multiple uniformly distributed points in the user standing area;

a first collecting unit 117, configured to sequentially collect facial images of the user at the multiple points; and

a sixth obtaining unit 118, configured to obtain position information of the user at the multiple points relative to the watched region, as the position information of the user standing area relative to the watched region.

In another implementation, the device further includes (not shown):

a second setting unit, configured to set multiple uniformly distributed points in the user standing area;

a selection unit, configured to randomly select a set number of points from the multiple points;

a second collecting unit, configured to sequentially collect facial images of the user at the selected points; and

a seventh obtaining unit, configured to obtain position information of the user at the selected points relative to the watched region, as the position information of the user standing area relative to the watched region.

In another implementation, when the watched region is located in front of the user standing area, the head pose information of the user includes any of the following: up, down, forward, left, right;

when the watched region is located to the left of the user standing area, the head pose information of the user includes any of the following: up, down, forward, left;

when the watched region is located to the right of the user standing area, the head pose information of the user includes any of the following: up, down, forward, right.
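The three cases above can be captured in a small lookup table; an illustrative sketch (the dictionary keys and function name are ours):

```python
def plausible_head_poses(screen_side: str) -> set:
    """Head poses consistent with gazing, given where the watched region
    sits relative to the user's standing area, as enumerated above."""
    poses = {
        "front": {"up", "down", "forward", "left", "right"},
        "left":  {"up", "down", "forward", "left"},
        "right": {"up", "down", "forward", "right"},
    }
    return poses[screen_side]
```

A detected head pose outside the plausible set for the current geometry (e.g. a rightward pose when the screen is to the user's left) is evidence against gazing.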
More detailed descriptions of the above units can be obtained with reference to the related descriptions in the method embodiments shown in Figs. 1, 4, 8, 10 and 12, and are not repeated here.

According to the attention detection device provided by the embodiments of the present application, an end-to-end approach is adopted: the facial image captured while the user gazes and/or the relative position information of the face in the facial image is input, and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
An embodiment of the present application further provides an attention detection device for executing the above attention detection methods. Some or all of the above methods may be implemented by hardware, or by software or firmware.

Optionally, in a specific implementation, the device may be a chip or an integrated circuit.

Optionally, when some or all of the attention detection methods of the above embodiments are implemented by software or firmware, they can be realized by an attention detection device 1200 provided in Fig. 13. As shown in Fig. 13, the device 1200 may include:

an input device 121, an output device 122, a memory 123 and a processor 124 (the device may contain one or more processors 124; a single processor is taken as an example in Fig. 13). In this embodiment, the input device 121, the output device 122, the memory 123 and the processor 124 may be connected by a bus or by other means; connection by a bus is taken as an example in Fig. 12.

The processor 124 is configured to execute the method steps performed by the device in Figs. 1, 4, 8, 10 and 12.

Optionally, the program of the above attention detection method may be stored in the memory 123. The memory 123 may be a physically separate unit or may be integrated with the processor 124, and may also be used to store data.
Optionally, when some or all of the attention detection methods of the above embodiments are implemented by software, the device may include only the processor, with the memory storing the program located outside the device; the processor is connected to the memory through a circuit or wire and is configured to read and execute the program stored in the memory.

The processor may be a central processing unit (CPU), a network processor (NP) or a WLAN device.

The processor may further include a hardware chip. The above hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof. The above PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
The memory may include volatile memory such as random-access memory (RAM); the memory may also include non-volatile memory such as flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory may also include a combination of the above kinds of memory.

According to the attention detection device provided by the embodiments of the present application, an end-to-end approach is adopted: the facial image captured while the user gazes and/or the relative position information of the face in the facial image is input, and the probability of the gaze region is obtained, thereby achieving accurate and reliable attention detection.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.

In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the division into units is only a division by logical function; in actual implementation there may be other divisions, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. The mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments may be implemented wholly or partly by software, hardware, firmware or any combination thereof. When implemented in software, they may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g. infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a read-only memory (ROM), a random-access memory (RAM), a magnetic medium such as a floppy disk, hard disk, magnetic tape or magnetic disk, an optical medium such as a digital versatile disc (DVD), or a semiconductor medium such as a solid-state drive (SSD).
Claims (10)
1. An attention detection method, characterized in that the method comprises:
obtaining a facial image captured while a user gazes and/or relative position information of the face in the facial image;
obtaining whole-face and/or local feature information of the facial image;
obtaining a probability of a gaze region according to one or more of the following: the whole-face feature information of the facial image, the local feature information of the facial image, and the relative position information of the face in the facial image.
2. The method according to claim 1, characterized in that obtaining the probability of the gaze region according to the whole-face feature information of the facial image comprises:
obtaining the probability of the gaze region according to a whole-face feature vector.
3. The method according to claim 2, characterized in that obtaining the whole-face and/or local feature information of the facial image comprises:
detecting a position of the eyes in the facial image;
determining a position of the eyes in the whole-face feature map according to the position of the eyes in the facial image.
4. The method according to claim 3, characterized in that determining the position of the eyes in the whole-face feature map according to the position of the eyes in the facial image comprises:
aligning the position of the eyes in the facial image with the position of the eyes in the whole-face feature map to obtain the position of the eyes in the whole-face feature map.
5. The method according to claim 3 or 4, characterized in that obtaining the probability of the gaze region according to the local feature information of the facial image comprises:
obtaining the probability of the gaze region according to the eye-local feature vectors.
6. The method according to claim 3, characterized in that the method further comprises:
fusing the whole-face feature vector and the eye-local feature vectors to obtain a fused feature vector.
7. The method according to claim 6, characterized in that obtaining the probability of the gaze region according to one or more of the following: the whole-face feature information of the facial image, the local feature information of the facial image, and the relative position information of the face in the facial image, comprises:
obtaining the probability of the gaze region according to the fused feature vector and/or the relative position information of the face in the facial image.
8. An attention detection device, characterized in that the device comprises:
a first obtaining unit, configured to obtain a facial image captured while a user gazes and/or relative position information of the face in the facial image;
a second obtaining unit, configured to obtain whole-face and/or local feature information of the facial image;
a third obtaining unit, configured to obtain a probability of a gaze region according to one or more of the following: the whole-face feature information of the facial image, the local feature information of the facial image, and the relative position information of the face in the facial image.
9. An attention detection device, characterized in that the device comprises an input device, an output device, a memory and a processor, wherein a set of program code is stored in the memory, and the processor is configured to call the program code stored in the memory to execute the method according to any one of claims 1 to 7.
10. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910353333.5A CN110059666B (en) | 2019-04-29 | 2019-04-29 | Attention detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910353333.5A CN110059666B (en) | 2019-04-29 | 2019-04-29 | Attention detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110059666A true CN110059666A (en) | 2019-07-26 |
CN110059666B CN110059666B (en) | 2022-04-01 |
Family
ID=67321419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910353333.5A Active CN110059666B (en) | 2019-04-29 | 2019-04-29 | Attention detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110059666B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028014A (en) * | 2019-12-11 | 2020-04-17 | 秒针信息技术有限公司 | Method and device for evaluating resource delivery effect |
CN111680546A (en) * | 2020-04-26 | 2020-09-18 | 北京三快在线科技有限公司 | Attention detection method, attention detection device, electronic equipment and storage medium |
CN111695516A (en) * | 2020-06-12 | 2020-09-22 | 百度在线网络技术(北京)有限公司 | Thermodynamic diagram generation method, device and equipment |
CN112560775A (en) * | 2020-12-25 | 2021-03-26 | 深圳市商汤科技有限公司 | Switch control method and device, computer equipment and storage medium |
CN112560768A (en) * | 2020-12-25 | 2021-03-26 | 深圳市商汤科技有限公司 | Gate channel control method and device, computer equipment and storage medium |
CN112580553A (en) * | 2020-12-25 | 2021-03-30 | 深圳市商汤科技有限公司 | Switch control method, device, computer equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138947A (en) * | 2014-05-30 | 2015-12-09 | 由田新技股份有限公司 | Guard reminding method, reminding device and reminding system |
US20160187998A1 (en) * | 2005-08-18 | 2016-06-30 | Scenera Technologies, Llc | Systems And Methods For Processing Data Entered Using An Eye-Tracking System |
CN107193383A (en) * | 2017-06-13 | 2017-09-22 | 华南师范大学 | A kind of two grades of Eye-controlling focus methods constrained based on facial orientation |
CN107403166A (en) * | 2017-08-02 | 2017-11-28 | 广东工业大学 | A kind of method and apparatus for extracting facial image pore feature |
US9866916B1 (en) * | 2016-08-17 | 2018-01-09 | International Business Machines Corporation | Audio content delivery from multi-display device ecosystem |
CN108545019A (en) * | 2018-04-08 | 2018-09-18 | 多伦科技股份有限公司 | A kind of safety driving assist system and method based on image recognition technology |
CN108615159A (en) * | 2018-05-03 | 2018-10-02 | 百度在线网络技术(北京)有限公司 | Access control method and device based on blinkpunkt detection |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160187998A1 (en) * | 2005-08-18 | 2016-06-30 | Scenera Technologies, Llc | Systems And Methods For Processing Data Entered Using An Eye-Tracking System |
CN105138947A (en) * | 2014-05-30 | 2015-12-09 | 由田新技股份有限公司 | Guard reminding method, reminding device and reminding system |
US9866916B1 (en) * | 2016-08-17 | 2018-01-09 | International Business Machines Corporation | Audio content delivery from multi-display device ecosystem |
CN107193383A (en) * | 2017-06-13 | 2017-09-22 | 华南师范大学 | A two-stage gaze tracking method based on facial-orientation constraints |
CN107403166A (en) * | 2017-08-02 | 2017-11-28 | 广东工业大学 | A method and apparatus for extracting pore features from facial images |
CN108545019A (en) * | 2018-04-08 | 2018-09-18 | 多伦科技股份有限公司 | A safe-driving assistance system and method based on image recognition technology |
CN108615159A (en) * | 2018-05-03 | 2018-10-02 | 百度在线网络技术(北京)有限公司 | Access control method and device based on gaze point detection |
Non-Patent Citations (3)
Title |
---|
Rizwan Ali Naqvi et al.: "Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor", Sensors * |
Sourabh Vora et al.: "On Generalizing Driver Gaze Zone Estimation using Convolutional Neural Networks", 2017 IEEE Intelligent Vehicles Symposium * |
Yusuke Sugano et al.: "AggreGaze: Collective Estimation of Audience Attention on Public Displays", UIST '16: Proceedings of the 29th Annual Symposium on User Interface Software and Technology * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028014A (en) * | 2019-12-11 | 2020-04-17 | 秒针信息技术有限公司 | Method and device for evaluating resource delivery effect |
CN111680546A (en) * | 2020-04-26 | 2020-09-18 | 北京三快在线科技有限公司 | Attention detection method, attention detection device, electronic equipment and storage medium |
CN111695516A (en) * | 2020-06-12 | 2020-09-22 | 百度在线网络技术(北京)有限公司 | Heat map generation method, device and equipment |
CN111695516B (en) * | 2020-06-12 | 2023-11-07 | 百度在线网络技术(北京)有限公司 | Heat map generation method, device and equipment |
CN112560775A (en) * | 2020-12-25 | 2021-03-26 | 深圳市商汤科技有限公司 | Switch control method and device, computer equipment and storage medium |
CN112560768A (en) * | 2020-12-25 | 2021-03-26 | 深圳市商汤科技有限公司 | Gate channel control method and device, computer equipment and storage medium |
CN112580553A (en) * | 2020-12-25 | 2021-03-30 | 深圳市商汤科技有限公司 | Switch control method, device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110059666B (en) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059666A (en) | An attention detection method and device | |
JP7415251B2 (en) | Apparatus and method for image processing and system for training neural networks | |
US20180157899A1 (en) | Method and apparatus detecting a target | |
CN110765860B (en) | Fall determination method and device, computer equipment and storage medium |
CN107766786B (en) | Liveness test method and liveness test computing device |
CN107077602B (en) | System and method for liveness analysis |
US10963676B2 (en) | Image processing method and apparatus | |
US9679212B2 (en) | Liveness testing methods and apparatuses and image processing methods and apparatuses | |
KR20180109665A (en) | A method and apparatus of image processing for object detection | |
CN106339673A (en) | ATM identity authentication method based on face recognition | |
KR20180065889A (en) | Method and apparatus for detecting target | |
Jia et al. | A two-step approach to see-through bad weather for surveillance video quality enhancement | |
CN108229418B (en) | Human body key point detection method and apparatus, electronic device, storage medium, and program | |
US11915430B2 (en) | Image analysis apparatus, image analysis method, and storage medium to display information representing flow quantity | |
EP3992904A1 (en) | Image restoration method and apparatus | |
US11762454B2 (en) | Method and apparatus with image augmentation | |
CN111444555B (en) | Temperature measurement information display method and device and terminal equipment | |
CN111986202B (en) | Glaucoma auxiliary diagnosis device, method and storage medium | |
WO2020048359A1 (en) | Method, system, and computer-readable medium for improving quality of low-light images | |
CA3026968A1 (en) | Method and device for identifying pupil in an image | |
CN112464803A (en) | Image comparison method and device | |
CN110287930B (en) | Wrinkle classification model training method and device | |
CN115880765A (en) | Method and device for detecting abnormal behavior of regional intrusion and computer equipment | |
KR101961462B1 (en) | Object recognition method and the device thereof | |
CN108805883B (en) | Image segmentation method, image segmentation device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||