CN109874054A - Advertisement recommendation method and device - Google Patents
- Publication number: CN109874054A (application CN201910114138.7A)
- Authority: CN (China)
- Prior art keywords
- viewer
- face
- facial image
- video frame
- frame images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses an advertisement recommendation method and device, to solve the problem that existing advertisement recommendation approaches are inefficient. The advertisement recommendation method comprises: collecting video frame images of viewers watching an advertisement within a set advertisement time period; for each video frame image, obtaining the facial images in that video frame image; determining, from each facial image, the state information of the corresponding viewer in that video frame image; determining, from the state information of each viewer in each video frame image and the viewing duration of each viewer, the type of viewer interested in the advertisement; and, after the advertisement time period, automatically recommending the advertisement to viewers of that type.
Description
Technical field
The present invention relates to the technical field of information recommendation, and in particular to an advertisement recommendation method and device.
Background art
With the development of Internet television, and compared with traditional advertising, Internet TV advertising offers broad coverage, a wide audience, high cost-effectiveness, and good interactivity; Internet television has therefore become a major channel through which more and more merchants promote their products. However, because of the sheer volume of data and the limits of manpower and physical resources, it is very difficult for an advertiser to obtain audience information about the advertisements it has placed and thereby verify whether those advertisements are effective.
Existing advertisement recommendation systems use features such as a user's needs and interests as filtering conditions to recommend product information the user may find interesting. The information source of such a system largely determines the quality of its recommendations, and traditional information sources, such as product sales records and questionnaire surveys of users' preferences for advertisements, make the recommendation process inefficient.
Summary of the invention
To solve the problem that existing advertisement recommendation approaches are inefficient, embodiments of the present invention provide an advertisement recommendation method and device.
In a first aspect, an embodiment of the present invention provides an advertisement recommendation method, comprising:
collecting video frame images of viewers watching an advertisement within a set advertisement time period;
for each video frame image, obtaining the facial images in that video frame image;
determining, from each facial image, the state information of the corresponding viewer in that video frame image;
determining, from the state information of each viewer in each video frame image and the viewing duration of each viewer, the type of viewer interested in the advertisement;
after the advertisement time period, automatically recommending the advertisement to viewers of that type.
In the advertisement recommendation method provided by the embodiment of the present invention, a server collects video frame images of viewers watching an advertisement within a set advertisement time period; for each video frame image it obtains the facial images in that frame and determines, from each facial image, the state information of the corresponding viewer; it then determines, from the state information of each viewer in each frame within the set time period and from each viewer's viewing duration, the type of viewer interested in the advertisement, and after the advertisement time period automatically recommends the advertisement to viewers of that type. Compared with the prior art, the present invention determines viewer state information from the facial images extracted from video frames of the viewers watching the advertisement, and derives from that state information the group of people interested in the advertisement. The advertising back end can thus automatically and precisely push advertisements that interest users in real time, saving advertising costs, improving recommendation efficiency, and increasing targeting accuracy; it can also automatically generate advertisement placement suggestions, saving the manpower and time cost of manual audience statistics.
Preferably, obtaining the facial images in a video frame image specifically includes:
extracting the face location coordinates in the video frame image with a preset face detection model, the face location coordinates including facial key point position coordinates;
cropping each facial image according to its face location coordinates.
In the above preferred embodiment, the face location coordinates in a video frame image can be extracted by a preset face detection model, where the face location coordinates include facial key point position coordinates, and each facial image can be cropped according to its face location coordinates.
Preferably, the state information includes gender, age, expression, eye open/closed state, and face rotation angle.
Determining, from a facial image, the state information of the corresponding viewer specifically includes:
inputting the facial image into a preset gender classification model to obtain the gender of the corresponding viewer; and
inputting the facial image into a preset age classification model to obtain the age of the corresponding viewer; and
inputting the facial image into a preset expression classification model to obtain the expression of the corresponding viewer; and
extracting the eye position coordinates from the facial key point position coordinates, extracting an eye image according to the eye position coordinates, and inputting the eye image into a preset eye classification model to obtain the eye open/closed state of the corresponding viewer; and
determining the face rotation angle from the facial key point position coordinates and the facial key point position coordinates of a preset standard frontal face.
In the above preferred embodiment, the state information of the viewer corresponding to a facial image may include gender, age, expression, eye open/closed state, and face rotation angle. The viewer's gender, age, expression, and eye open/closed state can each be obtained from a corresponding preset neural network classification model, while the face rotation angle can be determined from the facial key point position coordinates and those of the preset standard frontal face. Obtaining viewer state information with separately pre-trained neural network classification models improves both the accuracy and the efficiency of classification.
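The per-attribute fan-out above can be sketched as assembling one state record from the separate preset models. The model names and the dict layout are illustrative assumptions, not the patent's API; the rotation angle is passed in precomputed because it comes from key point geometry rather than a network.

```python
# Sketch: build a viewer's state record from the separate per-attribute
# classifiers. Each entry in `models` is a hypothetical callable standing
# in for one of the pre-trained networks described above.
def viewer_state(face_img, eye_img, rotation_deg, models):
    return {
        "gender": models["gender"](face_img),          # e.g. "male"/"female"
        "age": models["age"](face_img),                # predicted age value
        "expression": models["expression"](face_img),  # "smile"/"neutral"
        "eyes": models["eyes"](eye_img),               # "open"/"closed"
        "rotation": rotation_deg,                      # from key point geometry
    }
```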
Optionally, after the face rotation angle has been determined from the facial key point position coordinates and those of the preset standard frontal face, the method further includes:
judging, from the eye open/closed state and the face rotation angle, whether the attention of the viewer corresponding to the facial image is focused.
Preferably, judging from the eye open/closed state and the face rotation angle whether the attention of the viewer corresponding to the facial image is focused specifically includes:
if the eye open/closed state is open and the face rotation angle is within a preset angle range, determining that the viewer's attention is focused;
if the eye open/closed state is closed, or the face rotation angle is outside the preset angle range, determining that the viewer's attention is not focused.
In the above preferred embodiment, whether a viewer's attention is focused is judged from the eye open/closed state and the face rotation angle; whether attention is focused can serve as one of the indicators of whether the viewer is interested in the advertisement.
Preferably, determining the type of viewer interested in the advertisement from the state information of each viewer in each video frame image and the viewing duration of each viewer specifically includes:
counting the viewers whose viewing duration exceeds a preset duration and for whom the number of video frames in which attention is focused and the expression is a smile exceeds a preset quantity;
determining the ages and genders of the counted viewers as the type of viewer interested in the advertisement.
In the above preferred embodiment, the type of viewer interested in the advertisement is determined jointly by three indicators: the viewer's viewing duration, focused attention, and a smiling expression, which makes the targeting of interested viewers more precise.
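The three-indicator test above can be sketched as a per-viewer filter. The input layout (an `observations` dict keyed by viewer, with per-frame attention/smile flags) and the frame-rate-based duration estimate are illustrative assumptions.

```python
# Sketch of S14/S141: a viewer counts as interested when their viewing
# duration exceeds min_seconds AND the number of frames in which they are
# both attentive and smiling exceeds min_frames.
def interested_types(observations, min_seconds, min_frames, fps=25):
    """observations: {viewer_id: {"age": int, "gender": str,
                                  "frames": [(attentive, smiling), ...]}}
    Returns the set of (age, gender) types of the interested viewers."""
    types = set()
    for v in observations.values():
        duration = len(v["frames"]) / fps          # viewing time estimate
        good = sum(1 for att, smile in v["frames"] if att and smile)
        if duration > min_seconds and good > min_frames:
            types.add((v["age"], v["gender"]))
    return types
```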
In a second aspect, an embodiment of the present invention provides an advertisement recommendation device, comprising:
a collection unit, configured to collect video frame images of viewers watching an advertisement within a set advertisement time period;
an acquisition unit, configured to obtain, for each video frame image, the facial images in that video frame image;
a first determination unit, configured to determine, from each facial image, the state information of the corresponding viewer in the video frame image;
a second determination unit, configured to determine, from the state information of each viewer in each video frame image and the viewing duration of each viewer, the type of viewer interested in the advertisement;
a recommendation unit, configured to automatically recommend the advertisement to viewers of that type after the advertisement time period.
Preferably, the acquisition unit is specifically configured to extract the face location coordinates in a video frame image with a preset face detection model, the face location coordinates including facial key point position coordinates, and to crop each facial image according to its face location coordinates.
Preferably, the state information includes gender, age, expression, eye open/closed state, and face rotation angle.
The first determination unit is specifically configured to: input the facial image into a preset gender classification model to obtain the gender of the corresponding viewer; input the facial image into a preset age classification model to obtain the age of the corresponding viewer; input the facial image into a preset expression classification model to obtain the expression of the corresponding viewer; extract the eye position coordinates from the facial key point position coordinates, extract an eye image accordingly, and input the eye image into a preset eye classification model to obtain the eye open/closed state of the corresponding viewer; and determine the face rotation angle from the facial key point position coordinates and those of a preset standard frontal face.
Optionally, the device further includes:
a judging unit, configured to judge, after the face rotation angle has been determined from the facial key point position coordinates and those of the preset standard frontal face, whether the attention of the viewer corresponding to the facial image is focused, based on the eye open/closed state and the face rotation angle.
Preferably, the judging unit is specifically configured to: determine that the viewer's attention is focused if the eye open/closed state is open and the face rotation angle is within the preset angle range; and determine that the viewer's attention is not focused if the eye open/closed state is closed or the face rotation angle is outside the preset angle range.
Preferably, the second determination unit is specifically configured to count the viewers whose viewing duration exceeds a preset duration and for whom the number of video frames in which attention is focused and the expression is a smile exceeds a preset quantity, and to determine the ages and genders of the counted viewers as the type of viewer interested in the advertisement.
For the technical effects of the advertisement recommendation device provided by the present invention, reference may be made to the first aspect and to each of its implementations; they are not repeated here.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the advertisement recommendation method of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the advertisement recommendation method of the present invention.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and constitute a part of it; the illustrative embodiments of the present invention and their descriptions serve to explain the invention and do not improperly limit it. In the drawings:
Fig. 1 is a flow diagram of the advertisement recommendation method provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of obtaining the facial images in a video frame image in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the five key points of a standard frontal face in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the distribution of the actual facial key points and the standard frontal-face key points when a face rotates left or right, in an embodiment of the present invention;
Fig. 5 is a flow diagram of determining the type of viewer interested in an advertisement, in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the advertisement recommendation device provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
To solve the problem that existing advertisement recommendation approaches are inefficient, embodiments of the present invention provide an advertisement recommendation method and device.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein serve only to illustrate and explain the present invention, not to limit it, and that, in the absence of conflict, the embodiments of the present invention and the features in those embodiments may be combined with one another.
For clarity, the following technical terms used in the present invention are explained:
MTCNN (Multi-task Convolutional Neural Network): MTCNN is composed of three CNNs (convolutional neural networks) arranged in a cascade (P-Net, R-Net, O-Net).
Proposal Network (P-Net): this network produces candidate windows for face regions together with bounding-box regression vectors. The candidate windows are calibrated with the bounding-box regression, and non-maximum suppression (NMS) is then used to merge highly overlapping candidates.
Refine Network (R-Net): this network again uses bounding-box regression and NMS to remove false-positive regions. Because its structure differs from P-Net, with additional fully connected layers, it suppresses false positives more effectively.
Output Network (O-Net): this layer has one more convolutional layer than R-Net, so its results are more refined. Its function is similar to R-Net's, but it supervises the face region more closely and also outputs five facial landmarks.
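The NMS step used by each stage of the cascade can be illustrated directly; the merging criterion is intersection-over-union (IoU) between candidate boxes. A minimal greedy implementation, independent of any particular MTCNN library:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    dropping any candidate whose IoU with an already-kept box exceeds
    `thresh`. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

With two heavily overlapping face candidates and one distant one, only the best of the overlapping pair and the distant box survive.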
As shown in Fig. 1, the implementation flow of the advertisement recommendation method provided by an embodiment of the present invention may include the following steps:
S11: collect video frame images of viewers watching an advertisement within a set advertisement time period.
In specific implementation, the server collects video frame images of viewers watching the advertisement within the set advertisement time period, where the set advertisement time period may be the play period of a particular advertisement.
Specifically, a camera mounted on the Internet TV set may be used to capture video frame images of the viewers watching the advertisement in front of the set.
S12: for each video frame image, obtain the facial images in that video frame image.
In specific implementation, for each collected video frame image, the server obtains the facial images in that frame.
Specifically, the facial images in a video frame image can be obtained through the steps shown in Fig. 2, comprising:
S121: extract the face location coordinates in the video frame image with a preset face detection model.
In this step, the face location coordinates include facial key point position coordinates. The preset face detection model is a neural network model pre-trained for face detection; during training, the neural network may be, but is not limited to, an MTCNN, and the embodiments of the present invention impose no restriction on this.
Taking MTCNN as an example, the classification training process is as follows: the network is trained with a large number of face and non-face sample images to obtain a binary classifier, i.e. one that judges whether an image region is a face. The goal of face detection is to find the positions of all faces in an image; the output of the algorithm is the coordinates of each face's bounding rectangle in the image, and may also include facial key point information, i.e. the facial key point position coordinates. The facial key points include at least five points: the two eyes, the nose, and the two mouth corners, as shown in Fig. 3, which depicts the five key points of a standard frontal face.
The training process is as follows:
Training dataset: facial images and their annotations, where each annotation contains the coordinates of the upper-left and lower-right corners of the detection box and the facial key point position coordinates.
Training step: the annotated face images and their annotations are input into the neural network to obtain the face detection model.
S122: crop each facial image according to its face location coordinates.
Each facial image is cropped according to its face location coordinates, i.e. the coordinates of the face bounding rectangle in the video frame image.
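The crop in S122 is a plain rectangular slice of the frame by the bounding-rectangle corners the detector returns. A minimal sketch, representing the frame as a 2-D array of pixel rows:

```python
def crop_face(frame, box):
    """Crop a facial image from a frame.
    frame: 2-D list of pixel rows (row-major, frame[y][x]).
    box: (x1, y1, x2, y2) -- upper-left and lower-right corners of the
    face bounding rectangle, as produced by the detection model."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]
```

With an image library such as NumPy or OpenCV the same operation is the slice `frame[y1:y2, x1:x2]`.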
S13: determine, from each facial image, the state information of the corresponding viewer in the video frame image.
In specific implementation, the state information includes gender, age, expression, eye open/closed state, and face rotation angle. Specifically, the gender, age, expression, eye open/closed state, and face rotation angle of the viewer corresponding to each facial image are determined from that facial image.
Specifically, the gender of the viewer can be obtained as follows: the facial image is input into a preset gender classification model, which outputs the gender of the corresponding viewer. The preset gender classification model is a neural network model pre-trained for gender classification; during training, the network may be, but is not limited to, GoogLeNet, VGGNet (Visual Geometry Group Network), or AlexNet, and the embodiments of the present invention impose no restriction on this.
Taking GoogLeNet as an example, the gender classification model is trained as follows: the dataset consists of a large number of facial images and their annotations, where each annotation is male (0) or female (1). The annotated face images are input into the GoogLeNet network, which outputs a predicted value; the difference between this prediction and the annotated ground truth gives an error value, which is back-propagated through the network by gradient descent to update the network parameters, until the predictions approach the ground truth, yielding the gender classification model.
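The training loop just described (forward pass, error against a 0/1 annotation, gradient-descent update until predictions approach the ground truth) can be shown in miniature. This sketch deliberately replaces GoogLeNet with a single logistic unit so the mechanics stay visible; it is an illustration of the loop, not of the patent's network.

```python
import math

def train_binary(samples, labels, lr=0.5, epochs=200):
    """Minimal gradient-descent training loop for a binary classifier.
    A single logistic unit stands in for the deep network: forward pass,
    error = prediction - annotation, parameter update, repeat."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # predicted value
            err = p - y                            # prediction minus truth
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On separable toy data the returned predictor recovers the 0/1 annotation scheme, mirroring how the gender model maps a face to male (0) or female (1).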
In specific implementation, inputting a facial image into the gender classification model yields the predicted gender.
Similarly, the viewer's age can be obtained in the same way: the facial image is input into a preset age classification model, which outputs the age of the corresponding viewer. The preset age classification model is a neural network model pre-trained for age estimation; during training, the network may be, but is not limited to, GoogLeNet, VGGNet, or AlexNet, and the embodiments of the present invention impose no restriction on this.
The dataset used to train the age classification model consists of facial images and their annotations, where each annotation is an age value; the training process is similar to that of the gender classification model and is not repeated here.
In specific implementation, inputting a facial image into the age classification model yields the predicted age value.
Similarly, the viewer's expression can be obtained in the same way: the facial image is input into a preset expression classification model, which outputs the expression of the corresponding viewer. The preset expression classification model is a neural network model pre-trained for expression detection; during training, the network may be, but is not limited to, GoogLeNet, VGGNet, or AlexNet, and the embodiments of the present invention impose no restriction on this.
The dataset used to train the expression classification model consists of facial images and their annotations, where each annotation is expressionless (0) or smiling (1); the training process is similar to that of the gender classification model and is not repeated here.
In specific implementation, inputting a facial image into the expression classification model yields the predicted expression, i.e. smiling or expressionless.
Similarly, the viewer's eye open/closed state can be obtained in the same way: the eye position coordinates are extracted from the facial key point position coordinates, an eye image is extracted according to those coordinates, and the eye image is input into a preset eye classification model, which outputs the eye open/closed state of the corresponding viewer. The preset eye classification model is a neural network model pre-trained for eye state detection; during training, the network may be, but is not limited to, GoogLeNet, VGGNet, or AlexNet, and the embodiments of the present invention impose no restriction on this.
The dataset used to train the eye classification model consists of eye images and their annotations, where each annotation is eyes open (0) or eyes closed (1); the training process is similar to that of the gender classification model and is not repeated here.
In specific implementation, inputting an eye image into the eye classification model yields the predicted eye open/closed state, i.e. open or closed.
In addition, the face rotation angle of the viewer can be obtained as follows: the face rotation angle is determined from the facial key point position coordinates and the facial key point position coordinates of the preset standard frontal face.
In specific implementation, taking the left-right rotation angle (i.e. the rotation angle about the X axis) as an example, Fig. 4 shows the distribution of the actual facial key points and the standard frontal-face key points when the face rotates left or right. Points 131, 132, 133, 134, and 135 are the five key points of the standard frontal face, representing the left eye, right eye, nose, left mouth corner, and right mouth corner respectively; points 137 and 138 mark the positions of maximum left and right rotation, i.e. a rotation of 90 degrees; and point 136 is a key point of the actual face, i.e. the actual nose position. L denotes the vertical distance between the standard frontal face's left eye 131 and the actual nose 136; R denotes the vertical distance between the standard frontal face's right eye 132 and the actual nose 136; and S denotes the X-direction spacing between the two eyes of the standard frontal face. If L < R, the face is turned to the left; if L > R, the face is turned to the right. The left-right face rotation angle can then be calculated by the following formula:
The rotation angle about the Y axis (i.e. the up-down rotation) is calculated analogously and is not repeated here.
When the rotation angles about both the X axis and the Y axis are within a preset angle range, the face is determined to be frontal; when at least one of the two angles is outside the preset range, the face is determined to be a side face. For example, the preset angle range may be [-30°, 30°]. In specific implementation, the preset angle range can be set from empirical values, and the embodiments of the present invention impose no restriction on this.
It should be noted that faces may appear in images at different view angles and poses, so face alignment is needed. The alignment can be performed by applying an affine transformation correction based on the five key points of the standard face.
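The alignment step can be sketched as below. For brevity this fits a similarity transform (scale, rotation, translation) from the two eye key points only, whereas the patent prescribes an affine correction over all five key points; the standard-face eye coordinates are illustrative values, not taken from the patent.

```python
import math

# Hypothetical standard frontal-face eye positions in a 96x96 crop;
# the patent gives no concrete coordinates, so these are illustrative.
STD_LEFT_EYE = (30.0, 35.0)
STD_RIGHT_EYE = (66.0, 35.0)

def eye_alignment_transform(left_eye, right_eye):
    """Return a point-mapping function that corrects a detected face
    into the standard frontal-face frame using the two eye key points."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    sdx = STD_RIGHT_EYE[0] - STD_LEFT_EYE[0]
    sdy = STD_RIGHT_EYE[1] - STD_LEFT_EYE[1]
    scale = math.hypot(sdx, sdy) / math.hypot(dx, dy)
    angle = math.atan2(sdy, sdx) - math.atan2(dy, dx)
    cos_a, sin_a = math.cos(angle), math.sin(angle)

    def transform(point):
        # Rotate and scale about the detected left eye, then translate
        # it onto the standard left-eye position.
        px = point[0] - left_eye[0]
        py = point[1] - left_eye[1]
        qx = scale * (cos_a * px - sin_a * py) + STD_LEFT_EYE[0]
        qy = scale * (sin_a * px + cos_a * py) + STD_LEFT_EYE[1]
        return (qx, qy)

    return transform
```

A full five-point affine fit (e.g., by least squares) would follow the same pattern with more correspondences.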
Further, whether the attention of the spectator corresponding to the facial image is concentrated is judged according to the eye open/closed state and the face rotation angle.
Specifically, if the eye open/closed state is open and the face rotation angle is within the preset angle range, the attention of the spectator corresponding to the facial image is determined to be concentrated; if the eye open/closed state is closed, or the face rotation angle is not within the preset angle range, the attention of the spectator corresponding to the facial image is determined not to be concentrated.
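The attention judgment just described reduces to a small predicate. A minimal sketch, using the example [-30°, 30°] range from the description (the string eye-state encoding is an assumption for illustration):

```python
ANGLE_RANGE = (-30.0, 30.0)  # example preset angle range from the description

def attention_concentrated(eye_state, yaw_deg, pitch_deg):
    """Attention is concentrated only when the eyes are open and both
    rotation angles fall within the preset range; otherwise it is not."""
    if eye_state != "open":
        return False
    lo, hi = ANGLE_RANGE
    return lo <= yaw_deg <= hi and lo <= pitch_deg <= hi
```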
S14: determine the spectator types interested in the advertisement according to the status information of each spectator in each video frame image and the viewing duration of each spectator.
In a specific implementation, the spectator types interested in the advertisement can be determined according to the process shown in Fig. 5, which includes the following steps:
S141: count the spectators whose viewing duration in the video frame images is greater than a preset duration and for whom the number of video frames in which attention is concentrated and the expression is a smile is greater than a preset quantity.
In a specific implementation, face matching is first performed on the faces in the video frame images, and the number of video frames containing the same spectator is counted to determine that spectator's viewing duration. Specifically, face feature extraction is performed on the facial images in different video frames to obtain face feature vectors, and face matching is performed according to the feature vectors extracted from the facial images in the different video frames. The face feature vector can be extracted as follows: the facial image is input into a preset face-feature-extraction classification network to obtain the face feature vector corresponding to the facial image. Face matching can then be performed by computing the Euclidean distance between every two face feature vectors: the smaller the Euclidean distance, the higher the similarity; conversely, the larger the distance, the lower the similarity. In this way the faces in different video frames are matched.
For example, given face feature vector 1, T1 = (x11, x12, ..., x1i), and face feature vector 2, T2 = (x21, x22, ..., x2i), the Euclidean distance between face feature vector 1 and face feature vector 2 is:

d(T1, T2) = sqrt((x11 - x21)^2 + (x12 - x22)^2 + ... + (x1i - x2i)^2)
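The distance computation and the matching decision can be sketched as follows; the threshold value is illustrative, since the patent does not fix one:

```python
import math

def euclidean_distance(t1, t2):
    """Euclidean distance between two face feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))

def same_person(t1, t2, threshold=1.0):
    """Treat two faces as the same spectator when the distance between
    their feature vectors is below a preset threshold (value illustrative);
    smaller distance means higher similarity."""
    return euclidean_distance(t1, t2) < threshold
```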
Further, among the spectators whose viewing duration in the video frame images is greater than the preset duration, the spectators for whom the number of video frames in which attention is concentrated and the expression is a smile is greater than the preset quantity are counted. The preset duration and preset quantity can be set based on empirical values; the embodiment of the present invention does not limit this.
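Step S141 can be sketched as the filter below. The per-frame record layout (`viewer_id`, `duration`, `attentive`, `expression`) is an assumption for illustration; the patent only names the two criteria.

```python
from collections import defaultdict

def interested_viewers(frame_records, min_duration, min_smile_frames):
    """Return ids of spectators whose viewing duration exceeds min_duration
    and whose count of attentive, smiling frames exceeds min_smile_frames.

    frame_records: iterable of per-frame observations, each a dict with
    'viewer_id', 'duration', 'attentive' (bool) and 'expression' (str).
    """
    smile_frames = defaultdict(int)
    duration = {}
    for rec in frame_records:
        vid = rec["viewer_id"]
        duration[vid] = max(duration.get(vid, 0), rec["duration"])
        if rec["attentive"] and rec["expression"] == "smile":
            smile_frames[vid] += 1
    return [vid for vid in duration
            if duration[vid] > min_duration and smile_frames[vid] > min_smile_frames]
```

The ages and genders of the returned spectators would then form the interested spectator types (step S142).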
S142: determine the age and gender of the counted spectators as the spectator types interested in the advertisement.
S15, recommend the advertisement automatically to the spectators of the type in the advertising time section.
In the advertisement recommendation method provided by the embodiment of the present invention, the server collects video frame images of the spectators watching an advertisement within a set advertising time period; for each video frame image, it obtains the facial images in that frame and determines the status information of the corresponding spectators according to the facial images; it then determines the spectator types interested in the advertisement according to the status information of each spectator in each video frame image within the set advertising time period and the viewing duration of each spectator, and thereafter automatically recommends the advertisement to spectators of those types during the advertising time period. Compared with the prior art, the present invention determines the status information of spectators from the facial image information extracted from the video frames of spectators watching the advertisement, and obtains from that status information the crowd interested in the advertisement, so that the advertising back end can automatically and accurately push advertisements of interest to users in real time. This saves advertising cost, improves advertisement recommendation efficiency, and increases the precision of advertisement delivery; advertisement serving strategy suggestions can be generated automatically, saving the labour and time cost of manual advertisement statistics.
Based on the same inventive concept, an embodiment of the present invention also provides an advertisement recommendation apparatus. Since the principle by which the apparatus solves the problem is similar to that of the advertisement recommendation method, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
As shown in Fig. 6, which is a structural schematic diagram of the advertisement recommendation apparatus provided by an embodiment of the present invention, the apparatus may include:
an acquisition unit 21, configured to collect video frame images of the spectators watching an advertisement within a set advertising time period;
an obtaining unit 22, configured to obtain, for each video frame image, the facial images in the video frame image;
a first determination unit 23, configured to determine the status information of the corresponding spectators in the video frame image according to the facial images;
a second determination unit 24, configured to determine the spectator types interested in the advertisement according to the status information of each spectator in each video frame image and the viewing duration of each spectator;
a recommendation unit 25, configured to automatically recommend the advertisement to spectators of those types within the advertising time period.
Preferably, the obtaining unit 22 is specifically configured to extract, with a preset face detection model, the face location coordinates in the video frame image, the face location coordinates including face key point position coordinates, and to crop the facial image according to the face location coordinates.
Preferably, the status information includes gender, age, expression, eye open/closed state and face rotation angle;
the first determination unit 23 is specifically configured to input the facial image into a preset gender classification model to obtain the gender of the spectator corresponding to the facial image; input the facial image into a preset age classification model to obtain the age of the spectator corresponding to the facial image; input the facial image into a preset expression classification model to obtain the expression of the spectator corresponding to the facial image; extract the eye position coordinates from the face key point position coordinates, crop the eye image according to the eye position coordinates, and input the eye image into a preset eye classification model to obtain the eye open/closed state of the spectator corresponding to the facial image; and determine the face rotation angle according to the face key point position coordinates and the face key point position coordinates of the preset standard frontal face.
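The pipeline run by the first determination unit can be sketched as below, with the individual preset models stubbed out as callables; the dict-of-callables interface is an assumption for illustration, not the patent's API:

```python
def viewer_status(face_image, eye_image, models):
    """Aggregate the outputs of the preset classification models for one
    detected face. `models` maps a role name to a callable taking an
    image crop; the patent fixes only the model roles, not this interface."""
    return {
        "gender": models["gender"](face_image),
        "age": models["age"](face_image),
        "expression": models["expression"](face_image),
        "eye_state": models["eye_state"](eye_image),
    }
```

In practice each callable would wrap a trained classification network (e.g., a CNN) and the eye crop would be taken from the key-point coordinates.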
Optionally, the apparatus further includes:
a judging unit, configured to judge, after the face rotation angle is determined according to the face key point position coordinates and the face key point position coordinates of the preset standard frontal face, whether the attention of the spectator corresponding to the facial image is concentrated according to the eye open/closed state and the face rotation angle.
Preferably, the judging unit is specifically configured to: if the eye open/closed state is open and the face rotation angle is within the preset angle range, determine that the attention of the spectator corresponding to the facial image is concentrated; if the eye open/closed state is closed, or the face rotation angle is not within the preset angle range, determine that the attention of the spectator corresponding to the facial image is not concentrated.
Preferably, the second determination unit 24 is specifically configured to count the spectators whose viewing duration in the video frame images is greater than the preset duration and for whom the number of video frames in which attention is concentrated and the expression is a smile is greater than the preset quantity, and to determine the age and gender of the counted spectators as the spectator types interested in the advertisement.
Based on the same technical concept, an embodiment of the present invention also provides an electronic device 300. Referring to Fig. 7, the electronic device 300 is used to implement the advertisement recommendation method described in the above method embodiments, and may include: a memory 301, a processor 302, and a computer program stored in the memory and executable on the processor, such as an advertisement recommendation program. When the processor executes the computer program, the steps of each of the above advertisement recommendation method embodiments are implemented, such as step S11 shown in Fig. 1; alternatively, the functions of the modules/units in each of the above apparatus embodiments are implemented, such as unit 21.
The embodiment of the present invention does not limit the specific connection medium between the memory 301 and the processor 302. In Fig. 7, the memory 301 and the processor 302 are connected through a bus 303, which is represented by a thick line; the connection modes between other components are only schematically illustrated and are not limiting. The bus 303 can be divided into an address bus, a data bus, a control bus and so on. For ease of representation, only one thick line is drawn in Fig. 7, but this does not mean that there is only one bus or only one type of bus.
The memory 301 may be a volatile memory, such as a random-access memory (RAM); or a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 301 may also be a combination of the above memories.
The processor 302 is configured to implement the advertisement recommendation method shown in Fig. 1, i.e., to call the computer program stored in the memory 301 to execute steps S11 to S15 shown in Fig. 1.
An embodiment of the present application also provides a computer-readable storage medium storing the computer-executable instructions required by the above processor, including the program for execution by the above processor.
In some possible embodiments, various aspects of the advertisement recommendation method provided by the present invention may also be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to execute the steps of the advertisement recommendation method according to the various exemplary embodiments described above in this specification, for example steps S11 to S15 shown in Fig. 1.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The program product for advertisement recommendation according to embodiments of the present invention may employ a portable compact disc read-only memory (CD-ROM), include program code, and be runnable on a computing device. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take various forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device.
The program code contained on a readable medium may be transmitted by any suitable medium, including, but not limited to, wireless, wired, optical cable, RF, or any suitable combination of the above.
The program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more of the units described above may be embodied in a single unit; conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.
In addition, although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be executed in that particular order, or that all of the illustrated operations must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step, and/or one step may be decomposed into multiple steps.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (apparatuses) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.
Claims (14)
1. An advertisement recommendation method, characterized by comprising:
collecting video frame images of spectators watching an advertisement within a set advertising time period;
for each video frame image, obtaining the facial images in the video frame image;
determining the status information of the corresponding spectators in the video frame image according to the facial images;
determining the spectator types interested in the advertisement according to the status information of each spectator in each video frame image and the viewing duration of each spectator; and
automatically recommending the advertisement to spectators of those types within the advertising time period.
2. The method according to claim 1, characterized in that obtaining the facial images in the video frame image specifically comprises:
extracting the face location coordinates in the video frame image with a preset face detection model, the face location coordinates including face key point position coordinates; and
cropping the facial image according to the face location coordinates.
3. The method according to claim 2, characterized in that the status information comprises gender, age, expression, eye open/closed state and face rotation angle; and
determining the status information of the corresponding spectators in the video frame image according to the facial image specifically comprises:
inputting the facial image into a preset gender classification model to obtain the gender of the spectator corresponding to the facial image; and
inputting the facial image into a preset age classification model to obtain the age of the spectator corresponding to the facial image; and
inputting the facial image into a preset expression classification model to obtain the expression of the spectator corresponding to the facial image; and
extracting eye position coordinates from the face key point position coordinates, cropping an eye image according to the eye position coordinates, and inputting the eye image into a preset eye classification model to obtain the eye open/closed state of the spectator corresponding to the facial image; and
determining the face rotation angle according to the face key point position coordinates and the face key point position coordinates of a preset standard frontal face.
4. The method according to claim 3, characterized in that, after the face rotation angle is determined according to the face key point position coordinates and the face key point position coordinates of the preset standard frontal face, the method further comprises:
judging whether the attention of the spectator corresponding to the facial image is concentrated according to the eye open/closed state and the face rotation angle.
5. The method according to claim 4, characterized in that judging whether the attention of the spectator corresponding to the facial image is concentrated according to the eye open/closed state and the face rotation angle specifically comprises:
if the eye open/closed state is open and the face rotation angle is within a preset angle range, determining that the attention of the spectator corresponding to the facial image is concentrated;
if the eye open/closed state is closed, or the face rotation angle is not within the preset angle range, determining that the attention of the spectator corresponding to the facial image is not concentrated.
6. The method according to claim 4, characterized in that determining the spectator types interested in the advertisement according to the status information of each spectator in each video frame image and the viewing duration of each spectator specifically comprises:
counting the spectators whose viewing duration in the video frame images is greater than a preset duration and for whom the number of video frames in which attention is concentrated and the expression is a smile is greater than a preset quantity; and
determining the age and gender of the counted spectators as the spectator types interested in the advertisement.
7. An advertisement recommendation apparatus, characterized by comprising:
an acquisition unit, configured to collect video frame images of spectators watching an advertisement within a set advertising time period;
an obtaining unit, configured to obtain, for each video frame image, the facial images in the video frame image;
a first determination unit, configured to determine the status information of the corresponding spectators in the video frame image according to the facial images;
a second determination unit, configured to determine the spectator types interested in the advertisement according to the status information of each spectator in each video frame image and the viewing duration of each spectator; and
a recommendation unit, configured to automatically recommend the advertisement to spectators of those types within the advertising time period.
8. The apparatus according to claim 7, characterized in that the obtaining unit is specifically configured to extract, with a preset face detection model, the face location coordinates in the video frame image, the face location coordinates including face key point position coordinates, and to crop the facial image according to the face location coordinates.
9. The apparatus according to claim 8, characterized in that the status information comprises gender, age, expression, eye open/closed state and face rotation angle; and
the first determination unit is specifically configured to input the facial image into a preset gender classification model to obtain the gender of the spectator corresponding to the facial image; input the facial image into a preset age classification model to obtain the age of the spectator; input the facial image into a preset expression classification model to obtain the expression of the spectator; extract eye position coordinates from the face key point position coordinates, crop an eye image according to the eye position coordinates, and input the eye image into a preset eye classification model to obtain the eye open/closed state of the spectator; and determine the face rotation angle according to the face key point position coordinates and the face key point position coordinates of a preset standard frontal face.
10. The apparatus according to claim 9, characterized by further comprising:
a judging unit, configured to judge, after the face rotation angle is determined according to the face key point position coordinates and the face key point position coordinates of the preset standard frontal face, whether the attention of the spectator corresponding to the facial image is concentrated according to the eye open/closed state and the face rotation angle.
11. The apparatus according to claim 10, characterized in that the judging unit is specifically configured to: if the eye open/closed state is open and the face rotation angle is within a preset angle range, determine that the attention of the spectator corresponding to the facial image is concentrated; if the eye open/closed state is closed, or the face rotation angle is not within the preset angle range, determine that the attention of the spectator corresponding to the facial image is not concentrated.
12. The apparatus according to claim 10, characterized in that the second determination unit is specifically configured to count the spectators whose viewing duration in the video frame images is greater than a preset duration and for whom the number of video frames in which attention is concentrated and the expression is a smile is greater than a preset quantity, and to determine the age and gender of the counted spectators as the spectator types interested in the advertisement.
13. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the advertisement recommendation method according to any one of claims 1 to 6.
14. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the advertisement recommendation method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910114138.7A CN109874054B (en) | 2019-02-14 | 2019-02-14 | Advertisement recommendation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109874054A true CN109874054A (en) | 2019-06-11 |
CN109874054B CN109874054B (en) | 2021-06-29 |
Family
ID=66918751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910114138.7A Active CN109874054B (en) | 2019-02-14 | 2019-02-14 | Advertisement recommendation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109874054B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321969A (en) * | 2019-07-11 | 2019-10-11 | 山东领能电子科技有限公司 | A kind of vehicle face alignment schemes based on MTCNN |
CN110543813A (en) * | 2019-07-22 | 2019-12-06 | 深思考人工智能机器人科技(北京)有限公司 | Face image and gaze counting method and system based on scene |
CN110880125A (en) * | 2019-10-11 | 2020-03-13 | 京东数字科技控股有限公司 | Virtual asset verification and cancellation method, device, server and storage medium |
CN111353461A (en) * | 2020-03-11 | 2020-06-30 | 京东数字科技控股有限公司 | Method, device and system for detecting attention of advertising screen and storage medium |
CN112492389A (en) * | 2019-09-12 | 2021-03-12 | 上海哔哩哔哩科技有限公司 | Video pushing method, video playing method, computer device and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101593352A (en) * | 2009-06-12 | 2009-12-02 | 浙江大学 | Driving safety monitoring system based on face orientation and visual focus |
CN104112209A (en) * | 2013-04-16 | 2014-10-22 | 苏州和积信息科技有限公司 | Audience statistical method of display terminal, and audience statistical system of display terminal |
CN104298682A (en) * | 2013-07-18 | 2015-01-21 | 广州华久信息科技有限公司 | Information recommendation effect evaluation method and mobile phone based on facial expression images |
CN104346503A (en) * | 2013-07-23 | 2015-02-11 | 广州华久信息科技有限公司 | Human face image based emotional health monitoring method and mobile phone |
CN104732413A (en) * | 2013-12-20 | 2015-06-24 | 中国科学院声学研究所 | Intelligent individuation video advertisement pushing method and system |
US20160048887A1 (en) * | 2014-08-18 | 2016-02-18 | Fuji Xerox Co., Ltd. | Systems and methods for gaining knowledge about aspects of social life of a person using visual content associated with that person |
CN106339680A (en) * | 2016-08-25 | 2017-01-18 | 北京小米移动软件有限公司 | Human face key point positioning method and device |
CN106971317A (en) * | 2017-03-09 | 2017-07-21 | 杨伊迪 | The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique |
CN107169473A (en) * | 2017-06-10 | 2017-09-15 | 广东聚宜购家居网络科技有限公司 | A kind of recognition of face control system |
CN107194381A (en) * | 2017-07-06 | 2017-09-22 | 重庆邮电大学 | Driver status monitoring system based on Kinect |
CN107392159A (en) * | 2017-07-27 | 2017-11-24 | 竹间智能科技(上海)有限公司 | A kind of facial focus detecting system and method |
2019-02-14 | CN | Application CN201910114138.7A filed; granted as CN109874054B (legal status: Active) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321969A (en) * | 2019-07-11 | 2019-10-11 | 山东领能电子科技有限公司 | A kind of vehicle face alignment schemes based on MTCNN |
CN110321969B (en) * | 2019-07-11 | 2023-06-30 | 山东领能电子科技有限公司 | MTCNN-based face alignment method |
CN110543813A (en) * | 2019-07-22 | 2019-12-06 | 深思考人工智能机器人科技(北京)有限公司 | Face image and gaze counting method and system based on scene |
CN110543813B (en) * | 2019-07-22 | 2022-03-15 | 深思考人工智能机器人科技(北京)有限公司 | Face image and gaze counting method and system based on scene |
CN112492389A (en) * | 2019-09-12 | 2021-03-12 | 上海哔哩哔哩科技有限公司 | Video pushing method, video playing method, computer device and storage medium |
CN112492389B (en) * | 2019-09-12 | 2022-07-19 | 上海哔哩哔哩科技有限公司 | Video pushing method, video playing method, computer device and storage medium |
CN110880125A (en) * | 2019-10-11 | 2020-03-13 | 京东数字科技控股有限公司 | Virtual asset verification and cancellation method, device, server and storage medium |
CN111353461A (en) * | 2020-03-11 | 2020-06-30 | 京东数字科技控股有限公司 | Method, device and system for detecting attention of advertising screen and storage medium |
CN111353461B (en) * | 2020-03-11 | 2024-01-16 | 京东科技控股股份有限公司 | Attention detection method, device and system of advertising screen and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109874054B (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109874054A (en) | A kind of advertisement recommended method and device | |
Zhang | Deepfake generation and detection, a survey | |
US10776970B2 (en) | Method and apparatus for processing video image and computer readable medium | |
US11538229B2 (en) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
CN110222554A (en) | Cheat recognition methods, device, electronic equipment and storage medium | |
Liu et al. | The path of film and television animation creation using virtual reality technology under the artificial intelligence | |
WO2021213067A1 (en) | Object display method and apparatus, device and storage medium | |
CN108886607A (en) | Video flowing enhancing | |
US20200236428A1 (en) | Facilitating Television Based Interaction With Social Networking Tools | |
CN110163053A (en) | Generate the method, apparatus and computer equipment of the negative sample of recognition of face | |
US10936877B2 (en) | Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere | |
CN110969673B (en) | Live broadcast face-changing interaction realization method, storage medium, equipment and system | |
KR20150070363A (en) | Rotation of an image based on image content to correct image orientation | |
CN109716386A (en) | The method for obtaining best roundness image using multiple cameras | |
CN109408672A (en) | A kind of article generation method, device, server and storage medium | |
Poier et al. | Murauer: Mapping unlabeled real data for label austerity | |
WO2020056027A1 (en) | 3d media elements in 2d video | |
WO2022188599A1 (en) | Selective redaction of images | |
US11954144B2 (en) | Training visual language grounding models using separation loss | |
US20220215660A1 (en) | Systems, methods, and media for action recognition and classification via artificial reality systems | |
CN109033264A (en) | video analysis method and device, electronic equipment and storage medium | |
US10909381B2 (en) | Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content | |
Huang et al. | Image dust storm synthetic method based on optical model | |
Zhang et al. | UCDCN: a nested architecture based on central difference convolution for face anti-spoofing | |
Meng et al. | Viewpoint quality evaluation for augmented virtual environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| TR01 | Transfer of patent right | Effective date of registration: 2024-05-14. Patentee after: Shenlan robot (Shanghai) Co., Ltd., Room 6227, No. 999, Changning District, Shanghai 200050, China. Patentee before: DEEPBLUE TECHNOLOGY (SHANGHAI) Co., Ltd., Unit 1001, 369 Weining Road, Changning District, Shanghai 200336 (9th floor of actual floor), China. |