CN111428662A - Advertisement playing change method and system based on crowd attributes - Google Patents

Advertisement playing change method and system based on crowd attributes

Info

Publication number
CN111428662A
CN111428662A
Authority
CN
China
Prior art keywords
face
crowd
attributes
advertisement
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010234291.6A
Other languages
Chinese (zh)
Inventor
王庆祥
李兴运
伊新宇
关睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202010234291.6A priority Critical patent/CN111428662A/en
Publication of CN111428662A publication Critical patent/CN111428662A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0242Determining effectiveness of advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an advertisement playing change method and system based on crowd attributes, belonging to the field of advertisement recommendation. It aims to solve the technical problems that a billboard carries little information, its information updates lag behind and are costly, and the playing of advertisements cannot be analyzed and adjusted. The method comprises the following steps: regularly collecting the face video of the crowd in front of the billboard; locating faces in the crowd face video through the MTCNN algorithm and performing face attribute recognition; parsing the resulting json data to obtain plaintext face-attribute information and dividing the crowd attributes into sets based on the face attributes, each kind of face attribute corresponding to at least two sets; and recommending the corresponding advertisement according to the set with the maximum proportion. The system comprises an acquisition terminal and a server; a face positioning module, a data analysis module and a data statistics module are arranged in the server, and the server is wirelessly connected to the acquisition terminal.

Description

Advertisement playing change method and system based on crowd attributes
Technical Field
The invention relates to the field of advertisement recommendation, in particular to an advertisement playing change method and system based on crowd attributes.
Background
Human face attributes are a series of biological characteristics representing facial features; they are highly stable, differ between individuals, and can be used to identify a person. They include gender, skin color, age, expression and so on. Face attribute recognition builds on face recognition, so that characteristics such as the gender, skin color, age and expression of the faces in a picture can be obtained, and an intelligent advertisement recommendation algorithm recommends advertisements according to these facial characteristics.
A group recommendation algorithm is used in advertisement recommendation as follows: the attributes of the multiple faces in a picture are connected to obtain the relations among the attributes, and the corresponding recommendation is made.
Existing billboards have the following problems in use:
(1) the information carrying capacity is small.
(2) The information updating is lagged and the cost is high.
(3) The advertisement playing condition cannot be analyzed and adjusted.
The technical problem to be solved is therefore how to perform group recommendation of advertisements based on face attributes, so as to overcome the small information carrying capacity of billboards, their lagging and costly information updates, and the inability to analyze and adjust advertisement playing.
Disclosure of Invention
The technical task of the invention is to provide an advertisement playing change method and system based on crowd attributes, to solve the problems of small information carrying capacity, lagging information updates and high cost of billboards, and the inability to analyze and adjust the advertisement playing.
In a first aspect, the invention provides a crowd-attribute-based advertisement playing change method, which adjusts the advertisement playing content and the advertisement playing sequence in real time based on the distribution characteristics of the crowd in front of a billboard. The method comprises the following steps:
S100, regularly collecting the face video of the crowd in front of the billboard;
S200, locating faces in the crowd face video through the MTCNN algorithm, and performing face attribute recognition to obtain json data entries equal in number to the faces in the crowd face video;
S300, parsing the json data to obtain plaintext face-attribute information, and dividing the crowd attributes into sets based on the face attributes, each kind of face attribute corresponding to at least two sets;
S400, calculating the maximum proportion among the sets in the crowd from the plaintext face-attribute information, recommending the corresponding advertisement according to the set with the maximum proportion, and switching the advertisement when the face pose is not toward the billboard and/or the emotion is unpleasant;
the face attributes include but are not limited to gender, age, smile degree, whether glasses are worn, face pose and emotion;
the face poses comprise head raising, rotation and head shaking;
the emotions comprise anger, disgust, fear, sadness, surprise, happiness and calm, with anger, disgust, fear, sadness and surprise indicating that the emotion is unpleasant.
Preferably, the method further comprises the following step:
S500, counting the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play count, watching time proportion and set proportion, and adding advertisements, deleting advertisements or adjusting their playing areas based on the interaction information of all advertisements;
the time difference from when a face is detected until it stops gazing at the advertisement is used as the watching duration of the advertisement.
Preferably, in step S100, the face video of the crowd in front of the billboard is collected as follows:
a high-definition camera is arranged in front of the billboard, and the high-definition camera and the advertisement screen are connected to a Raspberry Pi;
the camera is called by accessing a domain name;
and the crowd face video collected by the camera is returned to the server at preset time intervals, with the steps after step S100 executed on the server.
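A minimal sketch of the collection loop (the capture and upload functions are stubs standing in for the real camera and server endpoint, which the patent does not specify in code):

```python
import time

def capture_clip(duration_s=1.0):
    """Stub for grabbing a short face-video clip from the HD camera.
    A real acquisition terminal would read frames, e.g. via OpenCV."""
    return b"fake-video-bytes"

def send_to_server(clip, url="http://example.invalid/upload"):
    """Stub for returning the clip to the server; the URL is hypothetical."""
    return len(clip)

def collect(n_clips, interval_s=1.0, sleep=time.sleep):
    """Collect and upload n_clips clips at the preset interval."""
    sent = []
    for _ in range(n_clips):
        clip = capture_clip()
        sent.append(send_to_server(clip))
        sleep(interval_s)
    return sent

sizes = collect(3, interval_s=0.0, sleep=lambda s: None)  # no real waiting in this sketch
```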
Preferably, for the advertisement currently playing on the advertisement screen, the steps after step S100 are performed with the target image being the crowd face video acquired a predetermined time before the end of the current advertisement's playback.
Preferably, locating faces in the crowd face video through the MTCNN algorithm in step S200 comprises the following steps:
obtaining candidate face-region windows and their bounding-box (BB) regression vectors based on a fully convolutional neural network;
correcting the obtained face-region windows with the BB regression results;
merging overlapping windows by non-maximum suppression (NMS);
refining the fully convolutional network and filtering out non-face candidate windows;
correcting the remaining candidate windows with the BB regression results;
merging the face-region windows and the remaining candidate windows through NMS;
and extracting the face attributes through the O-Net network model and outputting the N landmark positions calibrated on the face.
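The non-maximum suppression used above to merge overlapping windows can be sketched in pure Python (the IoU threshold of 0.5 is an assumption; the patent does not state one):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring window, drop remaining windows that overlap it
    by more than thresh, and repeat; returns indices of kept windows."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)  # the second box overlaps the first heavily and is merged away
```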
Preferably, in step S300, the json data is parsed using the FastJSON library.
Preferably, dividing the crowd attributes into sets based on the face attributes in step S300 comprises:
based on gender, dividing the crowd attributes into a male set and a female set;
based on age, dividing the crowd attributes into an infant set, a child set, a teenager set, a youth set, a middle-aged set and an elderly set;
based on the smile degree, dividing the crowd attributes into a first smile set corresponding to the smile-degree range 0-60 and a second smile set corresponding to the range 61-100;
based on whether glasses are worn, dividing the crowd attributes into an ordinary-glasses set, a sunglasses set and a no-glasses set;
based on the face pose, dividing the crowd attributes into a head-raising set, a rotation set and a head-shaking set;
based on the emotion, dividing the crowd attributes into a first emotion set and a second emotion set, where the emotions corresponding to the first emotion set are anger, disgust, fear and sadness, and the emotions corresponding to the second emotion set are happiness and calm.
In a second aspect, the invention provides a crowd-attribute-based advertisement playing change system, which uses the method of any implementation of the first aspect to obtain the distribution characteristics of the crowd in front of a billboard and adjust the advertisement playing content and the advertisement playing sequence in real time. The system comprises:
an acquisition terminal arranged at the advertisement screen, for regularly collecting the face video of the crowd in front of the billboard;
and a server provided with a face positioning module, a data analysis module and a statistical analysis module, the server being wirelessly connected to the acquisition terminal;
the face positioning module locates faces in the crowd face video through the MTCNN algorithm and performs face attribute recognition to obtain json data entries equal in number to the faces in the crowd face video;
the data analysis module parses the json data to obtain plaintext face-attribute information;
the data analysis module divides the crowd attributes into sets based on the face attributes, each kind of face attribute corresponding to at least two sets; it calculates the maximum proportion among the sets in the crowd from the plaintext face-attribute information, recommends the corresponding advertisement according to the set with the maximum proportion, and switches the advertisement when the face pose is not toward the billboard and/or the emotion is unpleasant;
the statistical analysis module counts the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play count, watching time proportion and set proportion;
the face attributes include but are not limited to gender, age, smile degree, whether glasses are worn, face pose and emotion;
the face poses comprise head raising, rotation and head shaking;
the emotions comprise anger, disgust, fear, sadness, surprise, happiness and calm, with anger, disgust, fear, sadness and surprise indicating that the emotion is unpleasant.
Preferably, the system further comprises a statistical analysis module for counting the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play count, watching time proportion and set proportion; advertisements are added, deleted or have their playing areas adjusted based on the interaction information of all advertisements.
Preferably, the acquisition terminal includes:
a high-definition camera arranged at the large screen of the billboard, for collecting the face video of the crowd in front of the large screen;
and a Raspberry Pi wired to the high-definition camera, for storing the crowd face video and sending it to the server.
Preferably, locating faces in the crowd face video through the MTCNN algorithm comprises the following steps:
obtaining candidate face-region windows and their bounding-box (BB) regression vectors based on a fully convolutional neural network;
correcting the obtained face-region windows with the BB regression results;
merging overlapping windows by non-maximum suppression (NMS);
refining the fully convolutional network and filtering out non-face candidate windows;
correcting the remaining candidate windows with the BB regression results;
merging the face-region windows and the remaining candidate windows through NMS;
and extracting the face attributes through the O-Net network model and outputting the N landmark positions calibrated on the face.
The advertisement playing change method and system based on crowd attributes have the following advantages:
1. the face video of the crowd in front of the billboard is collected, the face attributes of the crowd are analyzed, and the advertisement playing content and sequence are adjusted in real time according to the crowd characteristics, realizing interactive playing of advertisements, targeting the consumption interests of the crowd, achieving a high advertisement browsing rate and high benefit, and giving the advertisements a propagation advantage;
2. for the collected crowd face video, before face attribute recognition through the Face++ platform, the obtained face-region windows and the remaining candidate windows are corrected with the BB regression results and then merged through NMS, after which face attribute extraction is performed through the O-Net network model, so that the obtained face attributes are more accurate;
3. the crowd attributes are divided into sets according to the face attributes, each kind of face attribute corresponding to at least two sets; the maximum proportion among the sets in the crowd is calculated from the plaintext face-attribute information, the corresponding advertisement is recommended according to the set with the maximum proportion, and the advertisement is switched when the face pose is not toward the billboard and/or the emotion is unpleasant, realizing precise advertisement promotion and improving the advertisement browsing rate;
4. the cost is low, and the deployment process is simple;
5. managers can check the advertisement playing condition and regulate it in real time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of an advertisement playing change method based on the crowd attributes in embodiment 1.
Detailed Description
The present invention is further described in the following with reference to the drawings and the specific embodiments so that those skilled in the art can better understand the present invention and can implement the present invention, but the embodiments are not to be construed as limiting the present invention, and the embodiments and the technical features of the embodiments can be combined with each other without conflict.
The embodiments of the invention provide an advertisement playing change method and system based on crowd attributes, used to solve the technical problems that a billboard carries little information, its information updates lag and are costly, and the advertisement playing condition cannot be analyzed and adjusted.
Example 1:
the advertisement playing change method based on the crowd attributes adjusts the advertisement playing content and the advertisement playing sequence in real time based on the crowd distribution characteristics in front of the billboard. The method comprises the following steps:
s100, regularly collecting face videos of people in front of the billboard;
s200, carrying out face positioning on the crowd face video through an MTCNN algorithm, and carrying out face attribute identification to obtain json data which is equal to the number of faces in the crowd face video;
s300, carrying out data analysis on the json data to obtain plaintext information of face attributes, and carrying out set division on the crowd attributes based on the face attributes, wherein each kind of face attributes corresponds to at least two sets;
s400, calculating the maximum value of the specific gravity of each set in the crowd through the plaintext information of the face attributes, carrying out corresponding advertisement recommendation according to the maximum value of the specific gravity of the sets, and switching advertisements when the face posture is not towards the advertising board and/or the emotion is unpleasant.
The attributes of the face include, but are not limited to, gender, age, smile level, whether glasses are worn, face pose, mood; the human face gestures comprise head raising, rotation and shaking; the emotions include anger, disgust, fear, hurting, surprise, happiness and peace, anger, disgust, fear, hurting, surprise indicating that the emotion is unpleasant.
In step S100, the face video of the crowd in front of the billboard is collected as follows: a high-definition camera is arranged in front of the billboard and connected to a Raspberry Pi through a USB interface, and the advertisement screen is connected to the Raspberry Pi through an HDMI interface. The Raspberry Pi is effectively a small computer whose domain name can be accessed through a browser; it calls the high-definition camera by accessing the domain name, and the crowd face video collected through the camera is returned to the server every 1 s. The steps after step S100 are executed on the server.
After the high-definition camera is turned on, it continuously recognizes the faces within its view. In this embodiment, the camera is called using the canvas element of HTML5. Canvas is a tag added in HTML5 for generating images on a web page in real time and manipulating image content. The canvas has no behavior of its own, but can be operated on by calling the Canvas API through JavaScript.
The server receives the crowd face video returned by the high-definition camera and performs face attribute recognition and analysis. In this embodiment, the server analyzes, as the target image, the crowd face video received 1 second before the current advertisement finishes playing.
In step S200, locating faces in the crowd face video through the MTCNN algorithm comprises the following steps:
(1) obtaining candidate face-region windows and their bounding-box (BB) regression vectors based on a fully convolutional neural network;
(2) correcting the obtained face-region windows with the BB regression results;
(3) merging overlapping windows by non-maximum suppression (NMS);
(4) refining the fully convolutional network and filtering out non-face candidate windows;
(5) correcting the remaining candidate windows with the BB regression results;
(6) merging the face-region windows and the remaining candidate windows through NMS;
(7) extracting the face attributes through the O-Net network model and outputting the 5 landmark positions calibrated on the face.
In step S200, face attribute recognition is performed through the Face++ platform, the ArcFace platform, the OpenCV library, or the like, and json data entries covering gender, age, smile degree, whether glasses are worn, face pose, emotion and so on are returned, equal in number to the faces in the collected crowd face video.
After the face attributes are extracted, advertisements are recommended according to them. In step S300, the json data returned for the face attributes is parsed through the FastJSON library to obtain the plaintext face-attribute information: the gender is male or female; the age is a non-negative integer; the smile degree is a floating-point number in [0, 100], a larger value indicating a stronger smile; the glasses attribute is ordinary glasses, sunglasses or no glasses; the face pose is head raising, rotation or head shaking; and the emotion is anger, disgust, fear, happiness, calm, sadness or surprise.
The crowd attributes are then divided into sets. In this embodiment, the crowd attributes are divided by gender into two sets, men and women; by age into six sets: infants (0-6 years old), children (7-12), teenagers (13-17), youths (18-45), the middle-aged (46-69) and the elderly (over 69); by smile degree into a first smile set [0, 60] and a second smile set [61, 100]; by glasses into three sets: sunglasses, ordinary glasses and no glasses; by face pose into two sets: head raising, and rotating or shaking; and by emotion into a first emotion set and a second emotion set, where the emotions corresponding to the first emotion set are anger, disgust, fear and sadness, and those corresponding to the second emotion set are happiness and calm. The advertisements are then classified according to these crowd attribute sets.
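The set division above can be sketched as simple mapping functions (function and set names are illustrative; the thresholds are the ones given in this embodiment):

```python
def age_set(age):
    """Map an age (years) to the embodiment's six age sets."""
    if age <= 6:
        return "infant"
    if age <= 12:
        return "child"
    if age <= 17:
        return "teenager"
    if age <= 45:
        return "youth"
    if age <= 69:
        return "middle-aged"
    return "elderly"

def smile_set(degree):
    """Smile degree is a float in [0, 100]; [0, 60] -> first set, (60, 100] -> second."""
    return "smile-1" if degree <= 60 else "smile-2"

def emotion_set(emotion):
    """First emotion set: anger, disgust, fear, sadness; second: happiness, calm."""
    return "emotion-1" if emotion in {"anger", "disgust", "fear", "sadness"} else "emotion-2"
```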
The maximum proportion among the sets in the crowd, namely among the gender proportions, age proportions, average smile degree, glasses-wearing proportions, face pose proportions and emotion proportions, is calculated from the plaintext face-attribute information; the corresponding advertisement is recommended according to the set with the maximum proportion, and the advertisement is switched when the face pose is not toward the billboard and/or the emotion is unpleasant.
When the recognized face information and the emotions of the people change, the corresponding video advertisement is played. On the browser page, JavaScript is used to implement a video tag, and switching videos is completed by changing the video address in the tag.
As a further improvement of this embodiment, the method further comprises the following step:
S500, counting the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play count, watching time proportion and set proportion, and adding advertisements, deleting advertisements or adjusting their playing areas based on the interaction information of all advertisements;
the time difference from when a face is detected until it stops gazing at the advertisement is used as the watching duration of the advertisement.
The captured crowd face videos are stored on the server. After midnight every day, the relevant personnel perform a statistical analysis on that day's data to obtain the play count, watching time proportion, male-to-female ratio, age distribution and so on for each advertisement, so that advertisement managers can add or delete advertisements or adjust the playing areas according to the advertisement playing condition.
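A minimal sketch of that daily aggregation (the log format and field names are hypothetical, not specified by the patent):

```python
from collections import defaultdict

# Hypothetical per-play log entries: (ad_id, watch_seconds, play_seconds).
log = [
    ("ad-1", 8.0, 15.0),
    ("ad-1", 3.0, 15.0),
    ("ad-2", 12.0, 20.0),
]

def daily_stats(log):
    """Aggregate play count and watch-time proportion per advertisement."""
    totals = defaultdict(lambda: {"plays": 0, "watch": 0.0, "played": 0.0})
    for ad, watch, played in log:
        t = totals[ad]
        t["plays"] += 1
        t["watch"] += watch
        t["played"] += played
    return {ad: {"plays": t["plays"], "watch_ratio": t["watch"] / t["played"]}
            for ad, t in totals.items()}

report = daily_stats(log)  # per-advertisement plays and watch-time proportion
```

A manager could sort this report to decide which advertisements to add, delete or move to another playing area.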
Example 2:
The advertisement playing change system based on crowd attributes obtains the distribution characteristics of the crowd in front of the billboard through the advertisement playing change method disclosed in embodiment 1, so as to adjust the advertisement playing content and sequence in real time. The system comprises an acquisition terminal and a server: the acquisition terminal is arranged at the advertisement screen and regularly collects the face video of the crowd in front of the billboard; a face positioning module, a data analysis module and a data statistics module are arranged on the server; and the server is wirelessly connected to the acquisition terminal.
The face positioning module locates faces in the crowd face video through the MTCNN algorithm and performs face attribute recognition to obtain json data entries equal in number to the faces in the crowd face video. The data analysis module parses the json data to obtain plaintext face-attribute information; it divides the crowd attributes into sets based on the face attributes, each kind of face attribute corresponding to at least two sets, calculates the maximum proportion among the sets in the crowd from the plaintext face-attribute information, recommends the corresponding advertisement according to the set with the maximum proportion, and switches the advertisement when the face pose is not toward the billboard and/or the emotion is unpleasant. The statistical analysis module counts the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play count, watching time proportion and set proportion. The face attributes include but are not limited to gender, age, smile degree, whether glasses are worn, face pose and emotion; the face poses comprise head raising, rotation and head shaking; the emotions comprise anger, disgust, fear, sadness, surprise, happiness and calm, with anger, disgust, fear, sadness and surprise indicating that the emotion is unpleasant.
In this embodiment, the collection terminal comprises a high-definition camera and a Raspberry Pi. The camera is arranged at the large screen of the billboard and collects facial video of the crowd in front of the screen; it is connected to the Raspberry Pi through a USB interface, while the advertisement screen is connected to the Raspberry Pi through an HDMI interface. The Raspberry Pi stores the crowd facial video and sends it to the server. The Raspberry Pi calls the high-definition camera by domain-name access, and the facial video collected by the camera is returned to the server every 1 s.
After the high-definition camera is turned on, faces in its view are recognized continuously; in this embodiment the camera is called through an HTML5 canvas on the web page. Canvas is a tag added in HTML5 for generating images on a web page in real time and manipulating image content. A canvas has no behavior of its own, but can be operated on by calling the Canvas API through JavaScript.
The server receives the crowd facial video returned by the high-definition camera and performs face attribute recognition and analysis. In this embodiment, the server takes the crowd facial video received 1 second before the current advertisement finishes playing as the target image for analysis.
The face positioning module performs face localization on the crowd facial video through the MTCNN algorithm, comprising the following steps:
(1) obtaining candidate face-region windows and their bounding-box (BB) regression vectors through a fully convolutional network;
(2) calibrating the obtained face-region windows with the BB regression results;
(3) merging overlapping windows through non-maximum suppression (NMS);
(4) refining the candidates through a further convolutional network and filtering out non-face candidate windows;
(5) calibrating the remaining candidate windows with the BB regression results;
(6) merging the candidate windows through NMS;
(7) extracting the face attributes through the O-Net network and outputting the 5 landmark positions calibrated on the face.
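The non-maximum suppression used in steps (3) and (6) can be sketched as a short, self-contained routine; the greedy algorithm below and the 0.5 IoU threshold are illustrative assumptions, not values fixed by this embodiment:

```python
# Illustrative greedy non-maximum suppression over candidate face windows.
# Boxes are (x1, y1, x2, y2); the 0.5 IoU threshold is an assumption.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two near-duplicate face windows collapse into one; a distant window survives.
assert nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
           [0.9, 0.8, 0.7]) == [0, 2]
```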
Face attribute recognition is performed through the Face++ platform, the ArcFace platform, the OpenCV library, or the like, returning json data (gender, age, smile degree, whether glasses are worn, face posture, emotion, etc.) equal in number to the faces in the collected crowd facial video.
After the face attribute is extracted, the data analysis module analyzes the json data to obtain plaintext information of the face attribute.
The data analysis module then performs advertisement recommendation according to the face attributes; specifically, it performs the following operations.
First, the returned json data for the face attributes are parsed through the FastJSON library to obtain plaintext information of the face attributes: the gender is male or female; the age is a non-negative integer; the smile degree is a floating-point number in [0, 100], a larger value indicating a higher degree of smiling; the glasses state is wearing ordinary glasses, wearing sunglasses or not wearing glasses; the face posture is raising, rotating or shaking the head; and the emotion is anger, disgust, fear, happiness, calmness, sadness or surprise.
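As a hedged illustration of this parsing step, the sketch below decodes a hypothetical response with Python's standard json module (the embodiment itself uses the FastJSON library); the field names and values are assumptions modeled loosely on the attribute list above, not the actual payload of any platform:

```python
import json

# Hypothetical json for two detected faces. Field names are assumptions,
# not an actual platform payload.
raw = """{"faces": [
  {"attributes": {"gender": "Male",   "age": 28, "smile": 75.2,
                  "glass": "None",    "headpose": "raise",  "emotion": "happiness"}},
  {"attributes": {"gender": "Female", "age": 34, "smile": 12.0,
                  "glass": "Normal",  "headpose": "rotate", "emotion": "calmness"}}
]}"""

# One plaintext record per detected face -- "json data equal in number
# to the faces in the crowd facial video".
plaintext = [face["attributes"] for face in json.loads(raw)["faces"]]
assert len(plaintext) == 2
assert plaintext[0]["gender"] == "Male" and plaintext[1]["glass"] == "Normal"
```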
Then the crowd attributes are divided into sets. In this embodiment: by gender, into two sets, male and female; by age, into six sets, infants (0-6 years), children (7-12 years), teenagers (13-17 years), youths (18-45 years), the middle-aged (46-69 years) and the elderly (over 69 years); by smile degree, into a first smile set [0, 60] and a second smile set [61, 100]; by glasses state, into three sets, wearing ordinary glasses, wearing sunglasses and not wearing glasses; by face posture, into two sets, raising the head and rotating or shaking the head; by emotion, into a first emotion set and a second emotion set, where the first emotion set corresponds to anger, disgust, fear and sadness, and the second corresponds to happiness and calmness. The advertisements are classified according to these crowd attribute sets.
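The set division above can be sketched as a few lookup functions; the boundaries follow the embodiment, while the set names and glasses-state values are illustrative labels (note that surprise, although treated as unpleasant for switching purposes, is not assigned to either emotion set in the embodiment):

```python
# Set division per the embodiment: ages in years, smile degree in [0, 100].
# Set names ("smile-1", "emotion-1", ...) are illustrative labels.

def age_set(age):
    for name, upper in [("infant", 6), ("child", 12), ("teenager", 17),
                        ("youth", 45), ("middle-aged", 69)]:
        if age <= upper:
            return name
    return "elderly"          # over 69 years old

def smile_set(degree):
    return "smile-1" if degree <= 60 else "smile-2"

def glasses_set(state):       # state: "Normal", "Dark" (sunglasses) or "None"
    return {"Normal": "ordinary-glasses", "Dark": "sunglasses"}.get(state, "no-glasses")

# Only these six emotions are assigned to the two sets; surprise is treated
# as unpleasant for ad switching but belongs to neither set here.
FIRST_EMOTIONS = {"anger", "disgust", "fear", "sadness"}

def emotion_set(emotion):
    return "emotion-1" if emotion in FIRST_EMOTIONS else "emotion-2"
```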
Finally, the proportion of each set within the crowd is calculated from the plaintext face-attribute information, namely the gender proportion, age proportion, average smile degree, glasses-wearing proportion, face-posture proportion and emotion proportion; the corresponding advertisement is recommended according to the largest set proportion, and the advertisement is switched when the face posture is not toward the billboard and/or the emotion is unpleasant.
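A minimal sketch of this selection step, assuming per-face attribute records like those above (field names hypothetical): the dominant set and its proportion are found with a counter, and a simple predicate flags a face whose posture is away from the board or whose emotion is unpleasant:

```python
from collections import Counter

def dominant_share(values):
    """Most common value in the crowd and its proportion."""
    label, count = Counter(values).most_common(1)[0]
    return label, count / len(values)

def should_switch(face):
    """True when the face is not toward the board and/or the emotion is unpleasant."""
    return (face["headpose"] in {"rotate", "shake"}
            or face["emotion"] in {"anger", "disgust", "fear", "sadness", "surprise"})

# Hypothetical three-person crowd.
crowd = [{"gender": "Male",   "headpose": "raise",  "emotion": "happiness"},
         {"gender": "Male",   "headpose": "rotate", "emotion": "anger"},
         {"gender": "Female", "headpose": "raise",  "emotion": "calmness"}]

label, share = dominant_share([f["gender"] for f in crowd])
assert label == "Male" and abs(share - 2 / 3) < 1e-9
assert should_switch(crowd[1]) and not should_switch(crowd[0])
```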
When the recognized face information and the emotions of the people change, the corresponding video advertisement is played. On the browser page this is implemented in JavaScript: a video tag is placed on the page, and video switching is completed by changing the video address in the tag.
The statistical analysis module counts the interaction information corresponding to each advertisement within a preset time period. The captured crowd facial videos are stored on the server, and after 0:00 each day the relevant personnel statistically analyze that day's data to obtain, for each advertisement, the play count, watch-time proportion, male-to-female ratio, age ratio and so on, so that advertisement managers can add or delete advertisements or adjust their playing areas according to the playing situation.
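A hedged sketch of such a daily roll-up, assuming a hypothetical log of (advertisement id, watch seconds) events; real record formats and the male/female and age ratios are omitted for brevity:

```python
# Hypothetical daily roll-up: per-advertisement play counts and each
# advertisement's share of total viewing time.

def daily_report(log):
    """log: list of (ad_id, watch_seconds) events recorded for one day."""
    report = {}
    total = sum(seconds for _, seconds in log) or 1   # avoid div-by-zero
    for ad_id, seconds in log:
        entry = report.setdefault(ad_id, {"plays": 0, "watch_seconds": 0})
        entry["plays"] += 1
        entry["watch_seconds"] += seconds
    for entry in report.values():
        entry["watch_share"] = entry["watch_seconds"] / total
    return report

report = daily_report([("A", 30), ("B", 10), ("A", 20)])
assert report["A"]["plays"] == 2
assert abs(report["A"]["watch_share"] - 50 / 60) < 1e-9
```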
The working method of the advertisement playing change system based on crowd attributes of the invention comprises the following steps:
(1) the collection terminal collects facial video of the crowd in front of the billboard at regular intervals, stores it on the Raspberry Pi, and transmits it to the server;
(2) at the server, face localization is performed on the crowd facial video through the MTCNN algorithm, and face attribute recognition is performed through the Face++ platform, the ArcFace platform, the OpenCV library or the like, obtaining json data equal in number to the faces in the video;
(3) at the server, the json data are parsed to obtain plaintext information of the face attributes, and the crowd attributes are divided into sets based on the face attributes, each face attribute corresponding to at least two sets;
(4) at the server, the proportion of each set within the crowd is calculated from the plaintext face-attribute information, the corresponding advertisement is recommended according to the largest set proportion, and the advertisement is switched when the face posture is not toward the billboard and/or the emotion is unpleasant.
Furthermore, the statistical analysis module counts the interaction information corresponding to each advertisement within a preset time period, and advertisement managers can add or delete advertisements or adjust their playing areas based on the interaction information of all advertisements.
The above embodiments are merely preferred embodiments that fully illustrate the present invention, and the scope of the invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the invention all fall within its protection scope. The protection scope of the invention is defined by the claims.

Claims (10)

1. An advertisement playing change method based on crowd attributes, characterized in that the advertisement playing content and playing order are adjusted in real time based on the distribution characteristics of the crowd in front of the billboard, the method comprising the following steps:
S100, collecting facial video of the crowd in front of the billboard at regular intervals;
S200, performing face localization on the crowd facial video through the MTCNN algorithm and performing face attribute recognition to obtain json data equal in number to the faces in the crowd facial video;
S300, parsing the json data to obtain plaintext information of face attributes, and dividing the crowd attributes into sets based on the face attributes, each face attribute corresponding to at least two sets;
S400, calculating the proportion of each set within the crowd from the plaintext face-attribute information, recommending the corresponding advertisement according to the largest set proportion, and switching the advertisement when the face posture is not toward the billboard and/or the emotion is unpleasant;
the face attributes including, but not limited to, gender, age, smile degree, whether glasses are worn, face posture and emotion;
the face postures comprising raising, rotating and shaking the head;
the emotions comprising anger, disgust, fear, sadness, surprise, happiness and calmness, with anger, disgust, fear, sadness and surprise indicating that the emotion is unpleasant.
2. The method of claim 1, further comprising the steps of:
S500, counting the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play counts, watch-time proportion and set proportions, and adding or deleting advertisements or adjusting their playing areas based on the interaction information of all advertisements;
the time from when a face is detected to when the face stops gazing at the advertisement being taken as the watch duration of that advertisement.
3. The advertisement playing change method based on crowd attributes according to claim 1 or 2, characterized in that the facial video of the crowd in front of the billboard is collected in step S100 by:
arranging a high-definition camera in front of the billboard, and connecting the high-definition camera and the advertisement screen to a Raspberry Pi;
calling the camera by accessing a domain name;
and returning the crowd facial video collected by the camera to the server at preset time intervals, the steps after step S100 being executed on the server.
4. The advertisement playing change method based on crowd attributes according to claim 3, characterized in that, for the advertisement currently playing on the advertisement screen, the crowd facial video obtained a predetermined time before the end of the current advertisement is taken as the target image, and the steps after step S100 are executed based on that image.
5. The method of claim 1 or 2, wherein performing face localization on the crowd facial video through the MTCNN algorithm in step S200 comprises the following steps:
obtaining candidate face-region windows and their bounding-box (BB) regression vectors through a fully convolutional network;
calibrating the obtained face-region windows with the BB regression results;
merging overlapping windows through non-maximum suppression;
refining the candidates through a further convolutional network and filtering out non-face candidate windows;
calibrating the remaining candidate windows with the BB regression results;
merging the candidate windows through NMS;
and extracting the face attributes through the O-Net network, outputting the N landmark positions calibrated on the face.
6. The method of claim 1 or 2, wherein in step S300 the json data are parsed through the FastJSON library.
7. The method of claim 1 or 2, wherein dividing the crowd attributes into sets based on the face attributes in step S300 comprises:
based on gender, dividing the crowd attributes into a male set and a female set;
based on age, dividing the crowd attributes into an infant set, a child set, a teenager set, a youth set, a middle-aged set and an elderly set;
based on smile degree, dividing the crowd attributes into a first smile set corresponding to the smile-degree range [0, 60] and a second smile set corresponding to the range [61, 100];
based on whether glasses are worn, dividing the crowd attributes into a set wearing ordinary glasses, a set wearing sunglasses and a set not wearing glasses;
based on face posture, dividing the crowd attributes into a head-raising set, a rotating set and a head-shaking set;
based on emotion, dividing the crowd attributes into a first emotion set and a second emotion set, the first emotion set corresponding to anger, disgust, fear and sadness, and the second emotion set corresponding to happiness and calmness.
8. An advertisement playing change system based on crowd attributes, characterized in that the distribution characteristics of the crowd in front of the billboard are obtained through the advertisement playing change method based on crowd attributes according to any one of claims 1 to 7, so as to adjust the advertisement playing content and playing order in real time, the system comprising:
a collection terminal, arranged at the advertisement screen and used for collecting facial video of the crowd in front of the billboard at regular intervals;
a server, provided with a face positioning module, a data analysis module and a statistical analysis module, and wirelessly connected with the collection terminal;
the face positioning module being used for performing face localization on the crowd facial video through the MTCNN algorithm and performing face attribute recognition to obtain json data equal in number to the faces in the crowd facial video;
the data analysis module being used for parsing the json data to obtain plaintext information of the face attributes;
the data analysis module being further used for dividing the crowd attributes into sets based on the face attributes, each face attribute corresponding to at least two sets, for calculating the proportion of each set within the crowd from the plaintext face-attribute information, for recommending the corresponding advertisement according to the largest set proportion, and for switching the advertisement when the face posture is not toward the billboard and/or the emotion is unpleasant;
the statistical analysis module being used for counting the interaction information corresponding to each advertisement within a preset time period, the interaction information comprising play counts, watch-time proportion and set proportions;
the face attributes including, but not limited to, gender, age, smile degree, whether glasses are worn, face posture and emotion;
the face postures comprising raising, rotating and shaking the head;
the emotions comprising anger, disgust, fear, sadness, surprise, happiness and calmness, with anger, disgust, fear, sadness and surprise indicating that the emotion is unpleasant.
9. The system of claim 8, wherein the collection terminal comprises:
a high-definition camera, arranged at the large screen of the billboard and used for collecting facial video of the crowd in front of the screen;
a Raspberry Pi, wired to the high-definition camera and used for storing the crowd facial video and sending it to the server.
10. The system of claim 8, wherein performing face localization on the crowd facial video through the MTCNN algorithm comprises the following steps:
obtaining candidate face-region windows and their bounding-box (BB) regression vectors through a fully convolutional network;
calibrating the obtained face-region windows with the BB regression results;
merging overlapping windows through non-maximum suppression;
refining the candidates through a further convolutional network and filtering out non-face candidate windows;
calibrating the remaining candidate windows with the BB regression results;
merging the candidate windows through NMS;
and extracting the face attributes through the O-Net network, outputting the N landmark positions calibrated on the face.
CN202010234291.6A 2020-03-30 2020-03-30 Advertisement playing change method and system based on crowd attributes Pending CN111428662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010234291.6A CN111428662A (en) 2020-03-30 2020-03-30 Advertisement playing change method and system based on crowd attributes


Publications (1)

Publication Number Publication Date
CN111428662A true CN111428662A (en) 2020-07-17

Family

ID=71549093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010234291.6A Pending CN111428662A (en) 2020-03-30 2020-03-30 Advertisement playing change method and system based on crowd attributes

Country Status (1)

Country Link
CN (1) CN111428662A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129644A (en) * 2011-03-08 2011-07-20 北京理工大学 Intelligent advertising system having functions of audience characteristic perception and counting
CN102881239A (en) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 Advertisement playing system and method based on image identification
CN202917136U (en) * 2012-10-12 2013-05-01 天津红翔吉瑞网络科技有限公司 Advertising device based on face recognition
CN104573619A (en) * 2014-07-25 2015-04-29 北京智膜科技有限公司 Method and system for analyzing big data of intelligent advertisements based on face identification
CN109086739A (en) * 2018-08-23 2018-12-25 成都睿码科技有限责任公司 A kind of face identification method and system of no human face data training
CN110287363A (en) * 2019-05-22 2019-09-27 深圳壹账通智能科技有限公司 Resource supplying method, apparatus, equipment and storage medium based on deep learning
CN110310144A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Advertisement sending method, device, equipment and storage medium based on the age
CN110390048A (en) * 2019-06-19 2019-10-29 深圳壹账通智能科技有限公司 Information-pushing method, device, equipment and storage medium based on big data analysis
CN110532421A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of gender based on people, the music recommended method and system of mood and age


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101979A (en) * 2020-07-29 2020-12-18 力引万物(深圳)科技有限公司 Advertisement pushing method and pushing system thereof
CN112184314A (en) * 2020-09-29 2021-01-05 福州东方智慧网络科技有限公司 Popularization method based on equipment side visual interaction
CN112637688A (en) * 2020-12-09 2021-04-09 北京意图科技有限公司 Video content evaluation method and video content evaluation system
CN112637688B (en) * 2020-12-09 2021-09-07 北京意图科技有限公司 Video content evaluation method and video content evaluation system
CN112651776A (en) * 2020-12-23 2021-04-13 湖北小启数云科技有限公司 Electronic advertisement pushing method and system based on big data analysis
TWI796072B (en) * 2021-12-30 2023-03-11 關貿網路股份有限公司 Identification system, method and computer readable medium thereof
CN115630998A (en) * 2022-12-22 2023-01-20 成都智元汇信息技术股份有限公司 Method, device and medium for dynamically pushing advertisements in subway station
CN117372088A (en) * 2023-12-08 2024-01-09 莱芜职业技术学院 Music teaching popularization method and system
CN117372088B (en) * 2023-12-08 2024-02-23 莱芜职业技术学院 Music teaching popularization method and system

Similar Documents

Publication Publication Date Title
CN111428662A (en) Advertisement playing change method and system based on crowd attributes
US10517521B2 (en) Mental state mood analysis using heart rate collection based on video imagery
US20200134295A1 (en) Electronic display viewing verification
US20150313530A1 (en) Mental state event definition generation
CN109635680A (en) Multitask attribute recognition approach, device, electronic equipment and storage medium
CN106557937B (en) Advertisement pushing method and device
EP3651103A1 (en) Device, system and method for providing service relating to advertising and product purchase by using artificial-intelligence technology
CN105426850A (en) Human face identification based related information pushing device and method
CN106055710A (en) Video-based commodity recommendation method and device
US20120243751A1 (en) Baseline face analysis
CN104573619A (en) Method and system for analyzing big data of intelligent advertisements based on face identification
US20170105668A1 (en) Image analysis for data collected from a remote computing device
CN106897659A (en) The recognition methods of blink motion and device
CN108363969B (en) Newborn pain assessment method based on mobile terminal
CN111539290A (en) Video motion recognition method and device, electronic equipment and storage medium
US20150186912A1 (en) Analysis in response to mental state expression requests
CN108921034A (en) Face matching process and device, storage medium
CN103761508A (en) Biological recognition method and system combining face and gestures
CN107577706A (en) User behavior data processing method, device and computer-readable recording medium
CN111062752A (en) Elevator scene advertisement putting method and system based on audience group
CN110427795A (en) A kind of property analysis method based on head photo, system and computer equipment
Wedel et al. Modeling eye movements during decision making: A review
CN110389662A (en) Content displaying method, device, storage medium and the computer equipment of application program
CN115687786A (en) Personalized recommendation method, system and storage medium
Wang et al. A hybrid bandit model with visual priors for creative ranking in display advertising

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200717