CN113393275B - Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform - Google Patents

Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform Download PDF

Info

Publication number
CN113393275B
CN113393275B CN202110687605.2A CN202110687605A CN113393275B CN 113393275 B CN113393275 B CN 113393275B CN 202110687605 A CN202110687605 A CN 202110687605A CN 113393275 B CN113393275 B CN 113393275B
Authority
CN
China
Prior art keywords
user
advertisement
module
data
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110687605.2A
Other languages
Chinese (zh)
Other versions
CN113393275A (en
Inventor
苏娟
吴育怀
汪功林
陈孝君
梁雨菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Grapefruit Cool Media Information Technology Co ltd
Original Assignee
Anhui Grapefruit Cool Media Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Grapefruit Cool Media Information Technology Co ltd filed Critical Anhui Grapefruit Cool Media Information Technology Co ltd
Priority to CN202111481967.2A priority Critical patent/CN114155034A/en
Priority to CN202110687605.2A priority patent/CN113393275B/en
Publication of CN113393275A publication Critical patent/CN113393275A/en
Application granted granted Critical
Publication of CN113393275B publication Critical patent/CN113393275B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0269Targeted advertisements based on user profile or attribute
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • G06F16/215Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0265Vehicular advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications

Abstract

The invention belongs to the field of big data processing, and particularly relates to an intelligent medium management system based on a VOC (volatile organic compound) car owner big data platform. The intelligent media management system is used for acquiring the matching degree between the current user and the advertisement to be delivered, and further adjusting the advertisement playing sequence list; the intelligent media management system comprises: the system comprises a keyword extraction module, a historical user information query module, a user type classification module, a user label establishing module, an identity characteristic identification module, a target portrait dataset establishing module and an advertisement playing sequence list adjusting module. The logic of the adjusting method adopted by the system is as follows: firstly, acquiring a user label of a current user; establishing a target image data set of the current user group according to the user label; and finally, adjusting the playing sequence of the advertisements in the advertisement playing sequence list according to the matching degree of the target portrait data set and the advertisements. The invention solves the problems that the advertisement putting efficiency of the existing advertisement putting equipment is low and the advertisement putting sequence can not be adjusted aiming at the user group.

Description

Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform
Technical Field
The invention belongs to the field of big data processing, and particularly relates to an intelligent medium management system based on a VOC vehicle owner cloud big data platform.
Background
In scenes such as elevators, markets, garages, subway stations and the like, a large number of advertisement putting devices for playing advertisements are almost arranged. These devices cycle through the built-in advertising video. The existing advertisement delivery equipment can only play the advertisements in the server according to a fixed playing sequence, and cannot adjust the playing sequence of the advertisements or the playing content of the advertisements according to different users. If the played content needs to be replaced, the related device management personnel is required to manually switch or update the played content of the device from a remote place or a field. This makes the advertisement delivery equipment can not realize the accurate delivery to different users, may appear pushing the problem of the advertisement that is totally unsuitable for the propelling movement to specific crowd even, and this has not only reduced user's use and has experienced, has also reduced the efficiency of putting of advertisement operator's advertisement, can't realize the maximize of marketing value.
Disclosure of Invention
In order to solve the problems that the advertisement putting efficiency of the existing advertisement putting equipment is low and the advertisement putting sequence cannot be adjusted aiming at a user group, the invention provides an intelligent medium management system based on a VOC (volatile organic compound) vehicle owner big data platform.
The invention is realized by adopting the following technical scheme:
the utility model provides an intelligence media management system based on VOC car owner big data platform, this intelligence media management system is used for acquireing the matching degree between current user and the advertisement of waiting to put in, and then adjusts the advertisement broadcast sequence list. The intelligent media management system comprises: the system comprises a keyword extraction module, a historical user information query module, a user type classification module, a user label establishing module, an identity characteristic identification module, a target portrait dataset establishing module and an advertisement playing sequence list adjusting module.
The keyword extraction module is used for extracting a keyword data set associated with each advertisement in the advertisement playing sequence list, and the feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement.
The historical user information query module is used for querying a user portrait data set of each historical user from an advertisement analysis database and acquiring various feature data of each historical user in the user portrait data set.
The user type classification module is used for extracting facial features of all target users in an advertisement delivery area; and then comparing the extracted facial features with the facial features of the historical users in the advertisement analysis database, and distinguishing the current user as the historical user or the newly added user.
The user label establishing module is used for establishing an empty user label for each newly added user, and the user label comprises an identity label, a favorite label and an aversion label; the user label establishing module also adds a special user number in the identity label of each newly added user.
The identity characteristic identification module is used for extracting identity characteristics of the newly added users and adding the extracted identity characteristics into the identity labels of the corresponding newly added users.
The target portrait dataset creation module is to:
(1) setting a historical user proportion critical value q0, and calculating the proportion q of the current users identified as historical users in the advertisement delivery area in the current user group.
(2) Judging the magnitude relation between q and q0, and making the following decision according to the judgment result:
when q is larger than or equal to q0, extracting characteristic data in the preference label of the historical user, and after the characteristic data is de-duplicated, taking the characteristic data as a target image data set of the current user group.
And (ii) when q is less than q0, extracting characteristic data in the preference labels of all historical users. Sequentially calculating the contact ratio Dc1 between the content in the identity label of each newly added user and the content in the identity label of each historical user in the advertisement analysis database; and extracting the characteristic data in the preference label of the historical user with the maximum coincidence degree Dc1 with the identity label of each newly added user. And merging the two parts of feature data, and taking the feature data as a target image data set of the current user group after the feature data is deduplicated.
The advertisement playing sequence list adjusting module is used for:
(1) calculating the coincidence degree Dc2 of the feature data in the keyword data set associated with each advertisement extracted by the keyword extraction module and the feature data in the target portrait data set;
(2) and reordering the advertisements in the advertisement playing sequence list according to the descending order of the calculation result of the Dc2 of each advertisement to obtain the adjusted advertisement playing sequence list.
Further, the feature data in the keyword data set of each advertisement in the advertisement playlist at least comprises:
(1) keywords reflecting the advertised promotional product;
(2) keywords reflecting targeted customer groups targeted by the advertisement;
(3) keywords reflecting a speaker of the advertisement or a character image of the advertisement;
(4) high frequency or special keywords in the ad;
(5) the time length of the advertisement is classified;
(6) the genre of the advertisement is classified.
Further, the advertisement analysis database includes a user representation data set of the collected historical users. The user representation dataset includes facial feature data of each historical user and a user tag. The user tags include an identity tag, a favorite tag and an aversion tag. The identity tag stores feature data reflecting identity features of a user, and the feature data in the identity tag comprises: gender, age group, style of wear, and other characteristics. Other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features. The preference label stores feature data of an object reflecting the preference of the user. The aversion tag stores feature data reflecting an object averted by the user.
Further, the advertisement analysis database is stored in the VOC vehicle owner cloud big data platform.
Furthermore, the intelligent media management system based on the VOC vehicle owner big data platform is applied to an advertisement putting system with multi-angle monitoring equipment, and the advertisement putting system is used for playing advertisements to be put according to an advertisement playing sequence list; the multi-angle monitoring equipment is used for acquiring multi-angle monitored video stream data of all target users in an advertisement delivery area of the advertisement delivery equipment.
Further, the data sources of the user type classification module and the identity characteristic identification module are all multi-angle monitoring video stream data of the advertisement delivery area. The user type classification module comprises a facial feature extraction unit, a facial feature comparison unit and a user type classification unit. The facial feature extraction unit is used for extracting facial features of all users appearing in the video stream data. The facial feature comparison unit is used for acquiring all the facial features extracted by the facial feature extraction unit and the facial features of the historical users inquired by the historical user information inquiry module and comparing the facial features with the facial features of the historical users. The user type classification unit is used for classifying all users appearing in the video stream data into historical users or newly added users according to the distinguishing result of the facial feature comparison unit.
Further, the identity feature recognized by the identity feature recognition module comprises: gender, age group, style of wear, and other characteristics; other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features.
Further, in the features extracted by the identity feature identification module, the age groups are 0-10 years old, 10-20 years old, 20-30 years old, 30-50 years old, 50-70 years old or more than 70 years old; the wearing style includes leisure, business, sports, children or the elderly.
Further, the content reflected by other characteristics in the identity label comprises whether glasses are worn, whether a hat is worn, whether alopecia exists, whether lipstick is smeared, whether high-heeled shoes are worn, whether beard is accumulated, and whether a wristwatch is worn; for the above feature, if so, the feature data reflecting the feature is added to the other features, otherwise, the feature data is not added to the other features.
Further, the calculation formula of Dc1 is as follows:
Figure BDA0003125326250000021
further, the calculation formula of Dc2 is as follows:
Figure BDA0003125326250000031
furthermore, the identity characteristic identification module carries out multi-angle shooting on the current user through the camera, then utilizes the image identification unit to identify the shot image reflecting the current user state, and further extracts the characteristics which are reflected in the image and have the same type with the characteristic data stored in the identity label.
Further, the camera is a part of the identity characteristic identification module, or the camera belongs to an external device independent of the intelligent media management system based on the VOC vehicle owner big data platform, and the identity characteristic identification module calls the camera to shoot images when needed.
The technical scheme provided by the invention has the following beneficial effects:
the invention adjusts the advertisement playing sequence list in the advertisement putting equipment based on a classified advertisement analysis database. The characteristics of the historical users are extracted through the classified advertisement analysis database, and the newly added users and the historical users are compared by utilizing abundant samples in the advertisement analysis database, so that the preference, the demand and the like of the newly added users are predicted. By adopting the mode, the user group receiving the advertisement at present can be accurately analyzed and portrayed, the most suitable advertisement is delivered to the users according to the portraying result of the user group, and the effects of accurate delivery and efficient marketing are achieved.
The scheme of the invention solves the problem that the interest and the actual demand of the user group cannot be obtained in the prior art, and simultaneously solves the problems that the traditional advertisement putting equipment cannot be adjusted in real time and cannot be accurately matched with the demand of the user group; the efficiency of advertisement putting has been promoted greatly, improper advertisement propelling movement is avoided appearing. Therefore, it is believed that the solution provided by the present invention should be very commercially valuable.
Drawings
Fig. 1 is a schematic diagram of module connections of an intelligent media management system based on a VOC vehicle owner big data platform provided in embodiment 1 of the present invention;
FIG. 2 is a flowchart of a method for accurately delivering advertisements based on user images according to embodiment 1 of the present invention;
fig. 3 is a logic block diagram of a process of acquiring a user tag of a current user in embodiment 1 of the present invention;
fig. 4 is a logic block diagram of a process of acquiring a target image dataset of a current user group in embodiment 1 of the present invention;
fig. 5 is a flowchart of a method for creating an advertisement analysis database according to embodiment 2 of the present invention;
fig. 6 is a category differentiation diagram of feature data included in an identity tag in an advertisement analysis database according to embodiment 2 of the present invention;
FIG. 7 is a graph illustrating type discrimination of feature data included in a user image data set according to embodiment 2 of the present invention;
fig. 8 is a block diagram of a system for creating an advertisement analysis database according to embodiment 3 of the present invention;
fig. 9 is a flowchart of a method for evaluating the recognition degree of the advertisement by the user based on the feature recognition according to embodiment 4 of the present invention;
fig. 10 is a flowchart of a method for timely analyzing user requirements in a business district scenario according to embodiment 5 of the present invention;
fig. 11 is a flowchart of a method for matching user needs with advertisement content according to embodiment 6 of the present invention;
fig. 12 is a schematic block diagram of a garage megascreen MAX intelligent terminal with an intelligent voice interaction function according to embodiment 7 of the present invention;
fig. 13 is a type classification diagram of a switching instruction adopted by a human-computer interaction module in the garage megascreen MAX intelligent terminal with an intelligent voice interaction function according to embodiment 7 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
The embodiment provides an intelligent media management system based on a VOC vehicle owner big data platform, and the intelligent media management system is used for obtaining the matching degree between a current user and an advertisement to be delivered, and further adjusting an advertisement playing sequence list. In this embodiment, as shown in fig. 1, the intelligent media management system includes: the system comprises a keyword extraction module, a historical user information query module, a user type classification module, a user label establishing module, an identity characteristic identification module, a target portrait dataset establishing module and an advertisement playing sequence list adjusting module.
The keyword extraction module is used for extracting a keyword data set associated with each advertisement in the advertisement playing sequence list, and the feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement.
In this embodiment, the feature data in the keyword data set of each advertisement at least includes, as in embodiment 1:
(1) keywords reflecting the advertised promotional product.
(2) Keywords that reflect the targeted customer population targeted by the advertisement.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the ad words.
(5) The duration of the advertisement is classified.
(6) The genre of the advertisement is classified.
The historical user information query module is used for querying a user portrait data set of each historical user from an advertisement analysis database and acquiring various feature data of each historical user in the user portrait data set. The advertisement analysis database is stored in the VOC vehicle owner cloud big data platform. The advertisement analysis database is the database created in example 1. The advertisement analysis database contains a collected user representation data set of historical users; the user portrait data set comprises facial feature data of each historical user and user labels, and the user labels comprise identity labels, favorite labels and aversion labels; the identity tag stores feature data reflecting identity features of a user, and the feature data in the identity tag comprises: gender, age group, style of wear, and other characteristics; other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features. The preference tag stores feature data of an object reflecting a preference of the user, and the aversion tag stores feature data of an object reflecting an aversion of the user. In the features extracted by the identity feature recognition module, the age groups are 0-10 years old, 10-20 years old, 20-30 years old, 30-50 years old, 50-70 years old or more than 70 years old; the wearing style includes leisure, business, sports, children or the elderly. The contents reflected by other characteristics in the identity label comprise whether glasses are worn, whether a hat is worn, whether alopecia occurs, whether lipstick is smeared, whether high-heeled shoes are worn, whether beard is accumulated, and whether a wristwatch is worn; for the above feature, if so, the feature data reflecting the feature is added to the other features, otherwise, the feature data is not added to the other features.
The user type classification module is used for extracting facial features of all target users in an advertisement delivery area; and then comparing the extracted facial features with the facial features of all historical users in the advertisement analysis database, and distinguishing whether the current user is a historical user or a newly added user.
In the embodiment, the data sources of the user type classification module and the identity characteristic identification module are all multi-angle monitored video stream data of an advertisement delivery area; the user type classification module comprises a facial feature extraction unit, a facial feature comparison unit and a user type classification unit. The facial feature extraction unit is used for extracting facial features of all users appearing in the video stream data; the facial feature comparison unit is used for acquiring all the facial features extracted by the facial feature extraction unit and the facial features of all the historical users inquired by the historical user information inquiry module and comparing the facial features with the facial features of all the historical users. The user type classification unit is used for classifying all users appearing in the video stream data into historical users or newly added users according to the result distinguished by the facial feature comparison unit.
The user label establishing module is used for establishing an empty user label for each newly added user, and the established user labels comprise an identity label, a favorite label and an aversion label; the user label establishing module also adds a special user number in the identity label of each newly added user.
The identity characteristic identification module is used for extracting identity characteristics of the newly added users and adding the extracted identity characteristics into the identity labels of the corresponding newly added users.
The target portrait dataset creation module is to:
(1) setting a historical user proportion critical value q0, and calculating the proportion q of the current users identified as historical users in the advertisement delivery area in the current user group.
(2) Judging the magnitude relation between q and q0, and making the following decision according to the judgment result:
when q is larger than or equal to q0, extracting characteristic data in favorite labels of all historical users, and after the characteristic data are de-duplicated, taking the characteristic data as a target image data set of the current user group;
and (ii) when q is less than q0, extracting characteristic data in the preference labels of all historical users. And sequentially calculating the contact ratio Dc1 between the content in the identity label of each newly added user and the content in the identity label of each historical user in the advertisement analysis database. The formula for Dc1 is as follows:
Figure BDA0003125326250000051
and extracting the characteristic data in the preference label of the historical user with the maximum coincidence degree Dc1 with the identity label of each newly added user. Merging the two parts of feature data, and after the duplication removal of the feature data is completed; as a target image dataset for the current user group.
The advertisement playing sequence list adjusting module is used for: (1) the calculation formula for calculating the degree of coincidence Dc2 between the feature data in the keyword dataset associated with each advertisement extracted by the keyword extraction module and the feature data in the target image dataset is as follows:
Figure BDA0003125326250000052
(2) and reordering the advertisements in the advertisement playing sequence list according to the descending order of the calculation result of the Dc2 of each advertisement to obtain the adjusted advertisement playing sequence list.
In this embodiment, the provided intelligent media management system based on the VOC vehicle owner big data platform is applied to an advertisement delivery system with multi-angle monitoring equipment, and the advertisement delivery system is used for playing advertisements to be delivered according to an advertisement playing sequence list; the multi-angle monitoring equipment is used for acquiring multi-angle monitored video stream data of all target users in an advertisement delivery area of the advertisement delivery equipment. The identity characteristic identification module carries out image identification on the frame images of the shot video stream data by using the image identification unit, and then extracts the characteristics which are reflected in the images and have the same type as the characteristic data stored in the identity label.
Of course, in other embodiments, the related advertisement delivery system and multi-angle monitoring device may also be used as part of the intelligent media management system based on the VOC vehicle owner big data platform in this embodiment. And then the integrated coordination control is carried out on the processes of data acquisition, data processing, data analysis and advertisement putting which need to be completed in the embodiment.
The embodiment also comprises an advertisement accurate delivery method based on the user portrait. The precise delivery method is applied to an intelligent medium management system based on a VOC vehicle owner big data platform in the embodiment; as shown in fig. 2, the precise delivery method includes the following steps:
the method comprises the following steps: acquiring a user tag of a current user, as shown in fig. 3, specifically including the following steps:
1. and acquiring the facial features of each current user in the advertisement delivery area.
2. Sequentially carrying out facial recognition on each current user, inquiring an advertisement analysis database containing user portrait data sets of a plurality of historical users according to the facial recognition result, and judging as follows:
(1) when the facial features of the current user are matched with the feature data in one historical user facial feature data, all the feature data in the user tags of the historical users are obtained.
(2) And when the facial features of the current user are not matched with the feature data in the facial feature data of all historical users, judging that the current user is a new user, and establishing an empty user label for the new user.
The user portrait data set comprises corresponding facial feature data and user tags of historical users; the user tags include an identity tag, a like tag, and an aversion tag.
3. Acquiring a multi-angle image of the newly added user, performing image recognition on the multi-angle image, and supplementing feature data in the identity tag of the newly added user according to a recognition result; the characteristic data supplemented in the identity tag comprises a user number, gender, age group, wearing style and other characteristics; other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features.
Step two: establishing a target image data set of the current user group, as shown in fig. 4, the specific process is as follows:
1. setting a historical user proportion critical value q0, and calculating the proportion q of the current users identified as historical users in the advertisement delivery area in the current user group.
2. Judging the magnitude relation between q and q0, and making the following decision according to the judgment result:
(1) and when q is more than or equal to q0, extracting characteristic data in the favorite labels of all historical users, and after the characteristic data are de-duplicated, taking the characteristic data as a target image data set of the current user group.
(2) When q is less than q0, extracting characteristic data in favorite labels of all historical users; and sequentially calculating the contact ratio Dc1 between the content in the identity label of each newly added user and the content in the identity label of each historical user, wherein the calculation formula Dc1 is as follows:
Figure BDA0003125326250000061
extracting characteristic data in the preference label of the historical user with the maximum coincidence degree Dc1 with the identity label of each newly added user; and merging the two parts of feature data (the identified historical users and the favorite labels of the historical users with the maximum coincidence degree with the identity labels of the newly-added users), and after the duplication of the feature data is removed, taking the feature data as a target image data set of the current user group.
Step three: adjusting the playing sequence of the advertisements in the advertisement playing sequence list, and the specific process is as follows:
1. and acquiring a keyword data set associated with each advertisement in the advertisement playing sequence list, wherein the characteristic data in the keyword data set are a plurality of preset keywords related to the content of the currently played advertisement.
2. The calculation formula for obtaining feature data in the target image data set and calculating the degree of coincidence Dc2, Dc2 between feature data in the keyword data set associated with each advertisement and feature data in the target image data set is as follows:
Figure BDA0003125326250000062
3. and sequencing the advertisements in the advertisement playing sequence list according to the descending order of the calculation result of the Dc2 of each advertisement to obtain the readjusted advertisement playing sequence list.
The method for adjusting the advertisement playing sequence list in the advertisement delivery system provided in the embodiment is mainly based on the following principle and implementation logic:
since the present embodiment has acquired data in the created advertisement analysis database; therefore, when the advertisement is delivered, the face recognition is carried out on all the users in the advertisement delivery area, and whether the users belong to historical users in the advertisement analysis database or newly-added users which are not collected by the advertisement analysis database can be distinguished.
Profiling of historical users has been achieved in view of advertisement analysis data, i.e., feature data rich in user tags. At this time, when most users in the advertisement delivery area belong to the historical users, it can be considered that the needs and preferences of the historical users can represent the current entire user group. By obtaining the favorite label of the corresponding historical user and then extracting the feature data, a target portrait data set used for depicting the favorite or the demand of the current user group can be obtained.
When the number of newly added users in the advertisement delivery area reaches a certain level, portrayal cannot be performed only by historical users. It is obviously not known that real-time analysis is performed on the new users at this time, but because the implementation can query an advertisement analysis data set with a sufficiently large sample size and sufficiently rich data, the embodiment can perform identification (which can be realized by an image identification technology) on the new users, then compare the identification with the user tags in the advertisement analysis data set, extract the most suitable historical users from the identification, and use the user tags of the historical users as the user tags of the new users temporarily, so as to obtain the features in the favorite tags of the new users. Since the identity characteristics (such as age, height, sex, dressing, physiological characteristics) of the user have a great correlation with the needs or preferences of the user (characteristics in the preference label). Such approximate substitution in this embodiment should therefore be of high confidence. The embodiment can obtain the target portrait data set of the user group containing a large number of newly added users through the technical scheme.
After the target portrait dataset of the user group in the advertisement delivery area is obtained, the embodiment further compares the feature data in the target portrait dataset with the keyword dataset of each advertisement to be played, so that the overlapping degree of the feature data and the keyword dataset of each advertisement to be played can be found, and the higher the overlapping degree is, the user group is the target client of the advertisement, and at this time, the advertisements should be placed at the position where the advertisement is delivered preferentially.
Example 2
The present embodiment provides an advertisement analysis database containing a plurality of historical user data. This advertisement analysis database is the advertisement analysis database mentioned in example 1. The data in the advertisement analysis database realizes accurate portrayal of user interests and hobbies; thereby enabling accurate targeted marketing of advertisements to users.
The data in the advertisement analysis database is mainly obtained by identifying the identity characteristics of the user and analyzing the result of the acceptance evaluation of the user on the video advertisements in the scenes such as an elevator, a garage, a shopping mall and the like. The data in the advertisement analysis database mainly comprises the following contents:
(1) facial features of the user; the characteristic is mainly used for distinguishing the identities of different users as the unique identity marks of the users, and meanwhile, the advertisement analysis database allocates a special user number to the users according to the different identity marks.
(2) Identity characteristics of the user; the content of the partial data is rich, all the characteristics which can be obtained and are useful for distinguishing the identity characteristics of the user are included, including age, height, posture, wearing and dressing, physiological state and the like, and the characteristics have reference values for judging the working type, behavior habits, demand characteristics, hobbies, group members and the like of the user.
(3) A preference object of the user; the data of the part is obtained through the feedback of the user to different types of advertisements, and the content of the part is continuously updated and continuously optimized; basically, objects concerned and favored by the user in the current state can be described.
(4) An object of aversion of the user; the data of the part is obtained through the feedback of the user to different types of advertisements, and the content of the part is continuously updated and continuously optimized; objects that are not of interest or aversion in the current state of the user can be substantially characterized.
In this embodiment, as shown in fig. 5, the advertisement analysis database is created as follows:
step one, establishing user labels of all users
1. In the advertisement playing process, the facial features of each user are sequentially acquired, and facial recognition is carried out on the facial features.
2. Inquiring an advertisement analysis database according to the result of the facial recognition, and judging whether the facial features of the current user are matched with the facial features of a certain historical user in the advertisement analysis database:
(1) if yes, the current user is skipped.
(2) Otherwise, establishing an empty user label for the current user; the user tags include an identity tag, a favorite tag and an aversion tag.
3. And acquiring the multi-angle image of each user, and supplementing the feature data in the identity label of each user according to the image recognition result of the multi-angle image.
In this step, profiling can be performed on each user, and whether the user is a new user or a historical user, the user can be profiled and analyzed as long as the user appears in the target area and can be captured. This enables the size of the advertisement analysis database established in the present embodiment to reach a high level, and the sample is also rich enough. And a data foundation is laid for later application development by applying the database.
In the present embodiment, as shown in fig. 6, the feature data supplemented in the identity tag includes user number, gender, age group, wearing style and other features; other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features.
The age range in the identity label is one of 0-10 years old, 10-20 years old, 20-30 years old, 30-50 years old, 50-70 years old and above 70 years old which are classified according to the image recognition result; the wearing style in the identity tag includes leisure, business, sports, children or elderly. In the embodiment, it is considered that the age has an important influence on the needs of the user, and therefore, the age characteristic is one of the identity characteristics which must be considered. Meanwhile, as the conventional image information collection cannot directly acquire the professional characteristics of the user, the embodiment can roughly divide the occupation or social identity of the user to a certain extent by classifying the wearing style of the user.
Meanwhile, the contents reflected by other characteristics in the identity label comprise whether glasses are worn, whether a hat is worn, whether alopecia exists, whether lipstick is smeared, whether high-heeled shoes are worn, whether beard is accumulated, whether a wristwatch is worn and the like; for the above feature, if so, the feature data reflecting the feature is added to the other features, otherwise, the feature data is not added to the other features. Other features in the identity tag are very typical user-distinguishing features that have a great correlation with the consumer needs of different users. Women wearing high-heeled shoes, for example, painted lipstick, may have a higher level of interest in advertising for clothing, cosmetics, etc. Beard individuals are generally not very concerned with shavers. The hair-growing products and health products are more likely to be interested by the hair-losing population.
In fact, after applying some more various feature extraction techniques, the embodiment can also acquire more different types of identity features, and the more abundant the obtained feature quantity, the more detailed the feature classification of the user.
Step two, acquiring the characteristic data of the advertisement played currently
1. And acquiring the playing time T of each played advertisement and a keyword data set associated with each advertisement.
The feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement played currently. The feature data within the keyword dataset for each advertisement includes at least:
(1) keywords reflecting the advertised promotional product.
(2) Keywords that reflect the targeted customer population targeted by the advertisement.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the ad words.
(5) The duration of the advertisement is classified.
(6) The genre of the advertisement is classified.
In this embodiment, rich keywords are set for each advertisement, and these keywords include various types of information that the client can receive from one advertisement. When the user indicates approval of the advertisement, or makes positive feedback on the content in the advertisement, then some or all of the features in the keyword dataset for the advertisement may be deemed to be of interest or preference by the user. Conversely, when a user exhibits aversion or negative feedback with respect to an advertisement, the user may be deemed to be indifferent or aversive to certain features in the keyword dataset for the advertisement. In this way. When the sample size of the feedback data of different types of advertisements of corresponding users is collected to be large enough, the preference of the users can be analyzed basically, and then the preference of the users can be portrayed.
Step three, obtaining feedback data of each user on advertisement playing
1. Acquiring voice stream data generated by all users in an advertisement putting area during advertisement playing, monitoring video stream data of all users in the advertisement putting area, and sending an instruction which requires switching of the currently played advertisement by one or more users in the advertisement putting area.
The mode of the instruction sent by the user for switching the currently played advertisement includes key input, voice interaction and gesture interaction. The voice interaction is realized by identifying a voice keyword which is sent by a user and requires to switch the currently played advertisement; the gesture interaction is realized by identifying a characteristic gesture sent by a user for switching the currently played advertisement; the key input means a key input instruction to request switching of the currently played advertisement, which is input by the user directly through a key.
The voice key words are obtained by a voice recognition algorithm according to real-time voice stream data recognition; the characteristic gestures are obtained by a video motion recognition algorithm according to real-time video stream data; the key input instruction is obtained through an entity switching key module installed on an advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) the change in expression when the user views the advertisement.
(2) Direct discussion of the user for the advertisement. E.g. talk about an actor or speaker in an advertisement, talk about the effect of a product, etc
(3) Gesture actions made by the user while viewing the advertisement. For example, a user's hand is directed to the advertisement playing device to prompt other users to watch the advertisement, which reflects that the user is interested in the currently playing advertisement.
(4) The time of attention of the user to watch a certain advertisement.
(5) The user requests to switch the currently played advertisement. This directly reflects that the user dislikes the advertisement.
In addition, other types of feedback can be extracted when the technical conditions are mature, and can be applied to later data analysis, such as laughing of the user, characteristic actions in other details, and the like.
2. And judging whether an instruction for switching the currently played advertisement is received, if so, assigning 1 to the characteristic quantity SW reflecting the instruction, and if not, assigning 0 to the SW.
Step four, calculating the acceptance evaluation value of each user to the current advertisement
1. Performing voice recognition on voice stream data, extracting keywords matched with feature data in the keyword data set, and counting the number N of the keywords1
2. Performing video motion recognition on video stream data; extracting the gesture actions of the representation user for feeding back the currently played advertisement, and counting the number N of the gesture actions2
The gesture actions of the user for feeding back the currently played advertisement include a head nodding, a palm applanation, a hand pointing to an advertisement playing interface generated by the user during the advertisement playing period, a head raising or head turning action of switching the head from a non-direct-view state to a direct-view state, and the like.
3. Performing video motion recognition on video stream data; extracting characteristic actions reflecting eye attention position changes of each user, and calculating attention duration t of each user to the currently played advertisement according to the characteristic actionsn(ii) a Wherein n represents the currentThe user number of the user.
The attention duration t of the user with the number n to the currently played advertisementnThe calculation method of (2) is as follows:
Figure BDA0003125326250000081
in the above formula, t1nIndicating the direct view duration of the user with the number n during the playing of the current advertisement; t is t2nIndicating the eye closing time length of the user with the number n in the current advertisement playing period; t is t3nIndicating the underhead time of the user with the number n in the current advertisement playing period; t is t4nIndicating the turnaround time length of the user with the number n during the playing of the current advertisement.
In this embodiment, when counting the attention duration of the user to the advertisement, the duration of the user viewing the advertisement playing interface is considered, and the duration of the user in a non-viewing state is also considered. In the embodiment, the time length determined to belong to the non-attention state is removed, and then the average value of the time length determined to belong to the attention state is obtained, so that the relatively accurate attention time length is obtained.
4. Performing frame-by-frame sampling on a frame image of video stream data according to a sampling frequency; carrying out image recognition on the images sampled at every other frame; extracting facial expressions of each user, and classifying the facial expressions as liked, ignored or disliked; and respectively counting the number of the three types of expression classification results of each user, and calculating the ratio of the number of the three types of expression classification results of each user in the total sample amount of the user.
5. And acquiring the value of the SW.
6. The acceptance evaluation value E of each user to the current advertisement is calculated by the following formulan
Figure BDA0003125326250000091
In the above formula, n represents the user number of the current user, EnIndicating the user with number n is playing currentlyEvaluation value of advertisement of (E)nNot less than 0, and EnThe larger the value of (A) is, the higher the recognition degree of the user on the currently played multimedia is reflected;
Figure BDA0003125326250000092
representing the attention concentration of the user with the number n to the currently played advertisement; k is a radical of1Representing the influence factor of the voice information feedback on the overall recognition evaluation result; k is a radical of2Representing the influence factor of the attitude action feedback on the overall recognition evaluation result; k is a radical of3Representing the influence factor of the expression feedback on the overall recognition evaluation result; k is a radical of4Representing the influence factor of the concentration of attention on the overall recognition evaluation result; m is1A score representing a single keyword in the voice information feedback; m is2A score representing a single gesture in the gesture motion feedback; m is3A score representing concentration; a represents the score of the favorite expression, p1,nThe expression classified as favorite by the user with the characterization number n is used as the proportion of the total quantity of the images sampled at every other frame; b denotes a score of neglecting expression, p2,nThe expression classified as neglect for the user with the representation number n is the proportion of the expression in the total quantity of the images sampled at every other frame; c denotes the score of aversive expression, p3,nThe expression classified as aversive for the user with the characterization number n is a proportion of the total number of images sampled at every other frame.
In this embodiment, the expression recognition may be completed by a neural network algorithm trained by a large number of samples. Voice recognition, video motion recognition, etc. also have a large number of products that can be directly applied, and for these parts, this embodiment is not described again.
In the embodiment, various types of feedback information made by the user on the played advertisement is extracted from voice stream data and video stream data of the user through the technologies of voice recognition, image recognition and video action recognition, and after the feedback information is quantized by the method provided by the embodiment, an evaluation result reflecting the recognition degree of the user on the current advertisement can be obtained. This result reflects the user's current advertisement's likes and dislikes, which in turn can be used to characterize the user's needs or interests.
Step five, establishing or updating the advertisement analysis database
1. Set EnA high threshold value E ofhAnd a low threshold value ElWherein E ishCritical value indicating that the user likes the currently played advertisement, ElA critical value indicating that the user dislikes the currently played advertisement, El>0。
2. When E isn≥EhAnd p is1,n+p2,n≥p3,nAdding feature data in a keyword data set associated with the currently played advertisement into a favorite tag corresponding to the current user, and performing feature data deduplication on the supplemented favorite tag; and deleting the characteristic data which is the same as the characteristic data in the keyword data set in the aversion label corresponding to the current user.
3. When E is less than or equal to ElAnd p is2,n+p3,n≥p1,nAdding feature data in a keyword data set associated with the currently played advertisement into an aversion tag corresponding to the current user, and performing feature data duplication elimination on the supplemented aversion tag; and deleting the characteristic data matched with the characteristic data in the keyword data set in the favorite label corresponding to the current user.
4. And updating the user label of each user to obtain a new user portrait data set of each user, and creating an advertisement analysis database.
As shown in fig. 7, the user portrait data set includes facial feature data and a user tag of a corresponding user.
The most core content in the advertisement analysis database is the content of the like label and the aversion label obtained by analyzing the behavior of the user, and the content is the direct data used for analyzing the user requirement at the later stage. In this embodiment, the user's likes and dislikes, which should be consistent with some or all of the features in the keyword dataset of the advertisement, can be directly estimated by feedback on the user when viewing the advertisement. Therefore, in this embodiment, after each advertisement is played, the accuracy attitude of the advertisement of the user is determined through analysis and statistics of feedback information of the user, and then the keyword data set of the advertisement is used as a feature in the favorite tag or the aversion tag of the current user when a specific condition is met.
In order to avoid the phenomenon of misclassification, the attitudes of the determined users need to be checked more strictly. The determination process of this embodiment introduces a special threshold determined according to expert experience, which is used as a basis for determining the true attitude of the user, in this embodiment, the threshold EhAnd ElThe method is determined after repeated verification, and can have high reliability. Thereby ensuring that the final portrait for the user is accurate and reliable.
Example 3
In this embodiment, a system for creating an advertisement analysis database is provided, where the system uses the method for creating an advertisement analysis database included in embodiment 2 to implement the processes of creating and updating the advertisement analysis database.
As shown in fig. 8, the creating system includes: the system comprises a historical user query module, an advertisement characteristic data extraction module, a user feedback data extraction module, a face recognition module, an image recognition module, a voice recognition module, a video action recognition module, a user label establishing module, an acceptance evaluation value calculation module and a database establishing module.
The historical user query module is used for querying an advertisement analysis database and extracting a user portrait data set of the collected historical users; the user portrait dataset comprises facial feature data of each historical user and user labels, and the user labels comprise identity labels, favorite labels and aversion labels.
The advertisement characteristic data extraction module is used for extracting the playing time length T of each advertisement and a keyword data set associated with the advertisement when the advertisement is played by an advertisement delivery system.
A user feedback data extraction module to: (1) when the advertisement delivery system plays the advertisement, voice information generated by users watching the advertisement in the advertisement delivery area is obtained, and voice stream data relevant to each advertisement is obtained. (2) When the advertisement delivery system plays the advertisement, multi-angle monitoring videos of all users watching the advertisement in the advertisement delivery area are obtained, and video stream data relevant to each advertisement are obtained. (3) The method comprises the steps that when the advertisement delivery system plays the advertisement, a switching instruction sent by a user watching the advertisement is obtained, wherein the switching instruction comprises a keyboard input instruction, a voice interaction instruction or a gesture interaction instruction; and assigning the characteristic quantity SW representing the switching instruction to be 1 when the acquisition is successful, otherwise assigning the characteristic quantity SW to be 0.
The face recognition module is used for obtaining an image data set through framing processing according to the video stream data and extracting face features of each user appearing in the image data set; and finishing the comparison process of the facial features of the current user and the facial features of each historical user in the advertisement analysis database, and distinguishing the newly added user from the historical users.
The image identification module is used for carrying out image identification on an image data set obtained by framing processing of video stream data, and further: (1) and acquiring various feature data reflecting the identity features of the newly added user. (2) The expressions of all the users during the advertisement playing are extracted, and the expressions are classified into one of liked, ignored or disliked.
The voice recognition module is used for carrying out voice recognition on voice stream data, and then: (1) and acquiring the voice interaction instruction which is sent by a user during the advertisement playing and is used for indicating that the currently played advertisement is required to be switched. (2) And extracting all words in the voice stream data, and finding out keywords matched with the characteristic data in the keyword data set.
The video motion recognition module is used for carrying out video motion recognition on video stream data, and further: (1) and extracting a gesture interaction instruction which is sent by a certain user in the video stream data and represents that the currently played advertisement is required to be switched. (2) And extracting gesture actions which are sent out by a certain user in the video stream data and are used for feeding back the currently played advertisement. (3) And extracting characteristic actions reflecting the eye attention position change of a certain user in the current advertisement playing process.
The user label establishing module is used for establishing an empty user label for each newly added user, and supplementing various feature data which are acquired by the image identification module and reflect the identity features of the newly added user to the identity label of the corresponding user.
The acceptance evaluation value calculation module is used for: (1) acquiring keywords which are identified from voice stream data by a voice identification module and matched with the characteristic data in the keyword data set, and counting the number N of the keywords1. (2) Acquiring gesture actions which are recognized by a video action recognition module and reflect the feedback of the user to the currently played advertisement, and counting the number N of the gesture actions2. (3) Obtaining the characteristic action which is identified by the video action identification module and reflects the eye attention position change of a certain user in the current advertisement playing process, and calculating the attention duration t of the current user to the currently played advertisement according to the characteristic actionn(ii) a Where n represents the user number of the current user. The acceptance evaluation value calculation module calculates the attention duration t of the user with the number n to the currently played advertisementnThe calculation formula of (a) is as follows:
Figure BDA0003125326250000111
in the above formula, t1nIndicating the direct view duration of the user with the number n during the playing of the current advertisement; t is t2nIndicating the eye closing time length of the user with the number n in the current advertisement playing period; t is t3nIndicating the underhead time of the user with the number n in the current advertisement playing period; t is t4nIndicating the turnaround time length of the user with the number n during the playing of the current advertisement.
(4) And acquiring the number of the three-category-expression classification results of each user, which are identified by the image identification module, and calculating the ratio of the number of the three-category-expression classification results of each user in the total sample size. (5) The value of SW is obtained. (6) The acceptance evaluation value E of each user to the current advertisement is calculated by the following formulan
Figure BDA0003125326250000112
In the above formula, n represents the user number of the current user,EnIndicating the evaluation value of the currently played advertisement by the user with the number n, EnNot less than 0, and EnThe larger the value of (A) is, the higher the recognition degree of the user on the currently played multimedia is reflected;
Figure BDA0003125326250000113
representing the attention concentration of the user with the number n to the currently played advertisement; k is a radical of1Representing the influence factor of the voice information feedback on the overall recognition evaluation result; k is a radical of2Representing the influence factor of the attitude action feedback on the overall recognition evaluation result; k is a radical of3Representing the influence factor of the expression feedback on the overall recognition evaluation result; k is a radical of4Representing the influence factor of the concentration of attention on the overall recognition evaluation result; m is1A score representing a single keyword in the voice information feedback; m is2A score representing a single gesture in the gesture motion feedback; m is3A score representing concentration; a represents the score of the favorite expression, p1,nThe expression classified as favorite by the user with the characterization number n is used as the proportion of the total quantity of the images sampled at every other frame; b denotes a score of neglecting expression, p2,nThe expression classified as neglect for the user with the representation number n is the proportion of the expression in the total quantity of the images sampled at every other frame; c denotes the score of aversive expression, p3,nThe expression classified as aversive for the user with the characterization number n is a proportion of the total number of images sampled at every other frame.
The database creation module is to: (1) based on expert experience, set EnA high threshold value E ofhAnd a low threshold value El(ii) a Wherein E ishCritical value indicating that the user likes the currently played advertisement, ElA critical value indicating that the user dislikes the currently played advertisement, ElIs greater than 0. (2) The following decisions and decisions are made for each user: when E isn≥EhAnd p is1,n+p2,n≥p3,nAdding feature data in a keyword data set associated with the currently played advertisement into a favorite tag corresponding to the current user, and performing feature data deduplication on the supplemented favorite tag; then the aversion label corresponding to the current user is usedThe same feature data as the feature data in the keyword data set is deleted. (ii) when E is less than or equal to ElAnd p is2,n+p3,n≥p1,nAdding feature data in a keyword data set associated with the currently played advertisement into an aversion tag corresponding to the current user, and performing feature data duplication elimination on the supplemented aversion tag; and deleting the characteristic data matched with the characteristic data in the keyword data set in the favorite label corresponding to the current user. (3) And updating the user label of each user in sequence to obtain a new user portrait data set of each user, and further completing the creation or updating of the advertisement analysis database. Wherein the user representation data set includes facial feature data and user tags of corresponding users.
The advertisement analysis database in this embodiment is empty at the beginning of creation, and after a user portrait data set of a first historical user is entered therein, the creation system of the advertisement analysis database determines that the current user is a new user or a historical user by comparing the facial features of the current user with the facial features of the historical users in the advertisement analysis database; and inputting the distinguished user portrait dataset of the newly added user into an advertisement analysis database, or updating the user label in the user portrait dataset of the historical user existing in the advertisement analysis database.
Example 4
On the basis of the foregoing embodiments, the present embodiment provides a method for evaluating the recognition degree of a user for an advertisement based on feature recognition, as shown in fig. 9, the method includes the following steps:
the method comprises the following steps: obtaining the characteristic data of the advertisement played currently
And acquiring the playing time T of each played advertisement and a keyword data set associated with each advertisement.
The feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement played currently. The feature data within the keyword dataset for each advertisement includes at least:
(1) keywords reflecting the advertised promotional product.
(2) Keywords that reflect the targeted customer population targeted by the advertisement.
(3) Keywords reflecting the speaker of the advertisement or the character image of the advertisement.
(4) High frequency or special keywords in the ad words.
(5) The duration of the advertisement is classified.
(6) The genre of the advertisement is classified.
Step two, obtaining feedback data of each user to the advertisement playing
1. Acquiring voice stream data generated by all users in an advertisement putting area during advertisement playing, monitoring video stream data of all users in the advertisement putting area, and sending an instruction which requires switching of the currently played advertisement by one or more users in the advertisement putting area.
The mode of the instruction sent by the user for switching the currently played advertisement includes key input, voice interaction and gesture interaction. The voice interaction is realized by identifying a voice keyword which is sent by a user and requires to switch the currently played advertisement; the gesture interaction is realized by identifying a characteristic gesture sent by a user for switching the currently played advertisement; the key input means a key input instruction to request switching of the currently played advertisement, which is input by the user directly through a key.
The voice key words are obtained by a voice recognition algorithm according to real-time voice stream data recognition; the characteristic gestures are obtained by a video motion recognition algorithm according to real-time video stream data; the key input instruction is obtained through an entity switching key module installed on an advertisement playing site.
In this embodiment, the feedback of the user mainly includes the following aspects:
(1) the change in expression when the user views the advertisement.
(2) Direct discussion of the user for the advertisement. E.g. talk about an actor or speaker in an advertisement, talk about the effect of a product, etc
(3) Gesture actions made by the user while viewing the advertisement. For example, a user's hand is directed to the advertisement playing device to alert other users, which reflects that the user is interested in the currently playing advertisement.
(4) The time of attention of the user to watch a certain advertisement.
(5) The user requests to switch the currently played advertisement. This directly reflects that the user dislikes the advertisement.
In addition, other types of feedback can be extracted when the technical conditions are mature, and can be applied to later data analysis, such as laughing of the user, characteristic actions in other details, and the like.
2. And judging whether an instruction for switching the currently played advertisement is received, if so, assigning 1 to the characteristic quantity SW reflecting the instruction, and if not, assigning 0 to the SW.
Thirdly, calculating the acceptance evaluation value of each user to the current advertisement
1. Performing voice recognition on voice stream data, extracting keywords matched with feature data in the keyword data set, and counting the number N of the keywords1
2. Performing video motion recognition on video stream data; extracting the gesture actions of the representation user for feeding back the currently played advertisement, and counting the number N of the gesture actions2
The gesture actions of the user for feeding back the currently played advertisement include a head nodding, a palm applanation, a hand pointing to an advertisement playing interface generated by the user during the advertisement playing period, a head raising or head turning action of switching the head from a non-direct-view state to a direct-view state, and the like.
3. Performing video motion recognition on video stream data; extracting characteristic actions reflecting eye attention position changes of each user, and calculating attention duration t of each user to the currently played advertisement according to the characteristic actionsn(ii) a Where n represents the user number of the current user.
The attention duration t of the user with the number n to the currently played advertisementnThe calculation method of (2) is as follows:
Figure BDA0003125326250000121
in the above formula, t1nIndicating the direct view duration of the user with the number n during the playing of the current advertisement; t is t2nIndicating the eye closing time length of the user with the number n in the current advertisement playing period; t is t3nIndicating the underhead time of the user with the number n in the current advertisement playing period; t is t4nIndicating the turnaround time length of the user with the number n during the playing of the current advertisement.
In this embodiment, when counting the attention duration of the user to the advertisement, the duration of the user viewing the advertisement playing interface is considered, and the duration of the user in a non-viewing state is also considered. In the embodiment, the time length determined to belong to the non-attention state is mainly removed, and then the average value of the time length determined to belong to the attention state is approximately obtained, so that the relatively accurate attention time length can be obtained.
4. Performing frame-by-frame sampling on a frame image of video stream data according to a sampling frequency; carrying out image recognition on the images sampled at every other frame; extracting facial expressions of each user, and classifying the facial expressions as liked, ignored or disliked; and respectively counting the number of the three types of expression classification results of each user, and calculating the ratio of the number of the three types of expression classification results of each user in the total sample amount of the user.
5. And acquiring the value of the SW.
6. The acceptance evaluation value E of each user to the current advertisement is calculated by the following formulan
Figure BDA0003125326250000131
In the above formula, n represents the user number of the current user, EnIndicating the evaluation value of the currently played advertisement by the user with the number n, EnNot less than 0, and EnThe larger the value of (A) is, the higher the recognition degree of the user on the currently played multimedia is reflected;
Figure BDA0003125326250000132
indicating currently played advertisement by user numbered nConcentration of attention; k is a radical of1Representing the influence factor of the voice information feedback on the overall recognition evaluation result; k is a radical of2Representing the influence factor of the attitude action feedback on the overall recognition evaluation result; k is a radical of3Representing the influence factor of the expression feedback on the overall recognition evaluation result; k is a radical of4Representing the influence factor of the concentration of attention on the overall recognition evaluation result; m is1A score representing a single keyword in the voice information feedback; m is2A score representing a single gesture in the gesture motion feedback; m is3A score representing concentration; a represents the score of the favorite expression, p1,nThe expression classified as favorite by the user with the characterization number n is used as the proportion of the total quantity of the images sampled at every other frame; b denotes a score of neglecting expression, p2,nThe expression classified as neglect for the user with the representation number n is the proportion of the expression in the total quantity of the images sampled at every other frame; c denotes the score of aversive expression, p3,nThe expression classified as aversive for the user with the characterization number n is a proportion of the total number of images sampled at every other frame.
The method provided by the embodiment can identify various types of feedback characteristics according to the feedback made by the user when the advertisement is played, and further obtain the acceptance evaluation of the user on the advertisement. The method can acquire various types of feedback of the user, and the obtained result of the acceptance evaluation of the user on the advertisement is more accurate and can be used as a basis for evaluating the advertisement putting effect.
Example 5
The embodiment provides a method for timely analyzing user requirements in a business district scene, which is obtained by further developing the method in embodiment 4, and realizes the most direct and rapid prediction or evaluation of the user requirements of a specific user. As shown in fig. 10, the method includes the steps of:
step 1: and acquiring the facial features of the current user in the advertisement delivery area.
Step 2: sequentially carrying out facial recognition on the current user, inquiring an advertisement analysis database (the advertisement analysis database is the advertisement analysis database in the embodiment) containing a user portrait dataset of a plurality of historical users according to the facial recognition result, and making the following judgment:
(1) when the facial features of the current user are matched with the feature data in one historical user facial feature data, all the feature data in the user tags of the historical users are obtained.
(2) And when the facial features of the current user are not matched with the feature data in the facial feature data of all historical users, judging that the current user is a new user, and establishing an empty user label for the new user.
The user portrait data set comprises corresponding facial feature data and user tags of historical users; the user tags include an identity tag, a like tag, and an aversion tag.
And step 3: acquiring a multi-angle image of the newly added user, performing image recognition on the multi-angle image, and supplementing feature data in the identity tag of the newly added user according to a recognition result; the characteristic data supplemented in the identity tag comprises a user number, gender, age group, wearing style and other characteristics; other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features.
And 4, step 4: comparing all the feature data in the identity tags with the identity tags of all the historical users in the advertisement analysis database, and calculating the feature coincidence degree Dc3 between the feature data and the identity tags, wherein the calculation formula of Dc3 is as follows:
Figure BDA0003125326250000141
and 5: extracting the characteristic data in the favorite label and the aversion label of the historical user with the maximum value of the characteristic coincidence degree Dc3 of the current user in the advertisement analysis database, filling the characteristic data into the user image data set of the newly added user, and completing the timely analysis process of the user requirement of the current user.
By analyzing the above processes, it can be found that the method in the embodiment can analyze and identify the user just before the user leaves the scene, so as to establish an estimated portrait dataset of features and behaviors and predict objects which the user likes and dislikes; and based on such predictions, timely analysis of user needs is achieved. The analysis method is more timely and effective, and long-time tracking and evaluation on the user are not needed. Therefore, the method has high practical value, and it is noted that the accuracy of the timely analysis result has great correlation with the sample size in the advertisement analysis database containing the user portrait data sets of a plurality of historical users. The larger the sample size of the advertisement analysis database, the more accurate the results of such timely analysis.
The logic of the method of this embodiment is to first obtain facial features of a user appearing in a specific scene, determine whether a data sample of the user is already recorded in the advertisement analysis database, if so, directly extract contents of favorite tags and aversion tags recorded in the advertisement analysis database by the user, and use the contents as a user image data set of the user, thereby analyzing and predicting the user requirements of the user. When the data sample of the user is not included in the advertisement analysis database, the identity feature of the user is extracted, and then the like label and the dislike label in the identity labels of the historical users whose identity features are most similar to the user (determined by Dc 3) in each historical user included in the advertisement analysis database are extracted and used as the user image data set of the current user, so as to analyze the user requirement of the user.
Example 6
The embodiment provides a method for matching user requirements with advertisement contents, which is developed on the basis of the embodiment and is used for selecting advertisements which are most matched with the current user from the advertisements to be launched currently; as shown in fig. 11, the matching method includes the steps of:
step 1: acquiring keyword data sets of all advertisements to be released currently; the keyword data set is the keyword data set established in any one of the above embodiments; the keyword data set contains keywords reflecting various feature data of advertisement contents.
Step 2: acquiring a user portrait dataset of a current user, wherein the user portrait dataset is a final result acquired by the timely analysis method for the user requirements in the business district scene provided by the embodiment 5;
and step 3: the calculation formula for calculating the degree of matching Dc4, Dc4 between the feature data in the keyword dataset and the data in the current user representation dataset for each advertisement is as follows:
Figure BDA0003125326250000142
and 4, step 4: and the advertisement with the maximum value of Dc4 is taken as the advertisement which is matched with the current user most, and the matching process of the user requirement and the advertisement content is completed.
The matched advertisement is matched with the actual demand of the user, and the best propaganda and promotion effect can be obtained. In practice, the best matching advertisement should be placed preferentially for the identified current user.
In the present embodiment, the matching method for the user requirement and the advertisement content adopts feature matching, in the feature matching process, the features (features in the favorite representation) representing the user requirement are obtained according to the feedback of the user to the historical advertisement playing process, and the feature data are the keywords of the corresponding advertisement. Therefore, when the characteristics are matched with the actual advertisements to be delivered, the matching is usually very easy to succeed, and the result of the characteristic matching is more accurate considering that the preference of the user is usually consistent and long-lasting.
Example 7
The embodiment provides a huge curtain MAX intelligent terminal in garage who possesses intelligent voice interaction function, and this huge curtain MAX intelligent terminal in garage is used for according to the user and self interactive process when broadcasting the advertisement, realizes waiting to put in the advertisement to the advertisement broadcast sequence list the update. The scheme of the embodiment belongs to the deep development and application of the technical scheme and the achievement in the embodiments. The garage megascreen MAX intelligent terminal with the intelligent voice interaction function adopts part of the processing method and the equipment module in the embodiment.
Specifically, as shown in fig. 12, the garage megascreen MAX intelligent terminal provided in this embodiment includes: the system comprises an advertisement playing module, a voice acquisition module, a video monitoring module, an advertisement characteristic data extraction module, a user feedback data extraction module, an image recognition module, a voice recognition module, a video action recognition module, a man-machine interaction module, an acceptance evaluation value calculation module and an advertisement playing sequence updating module.
The advertisement playing module is used for sequentially playing each advertisement to be launched according to the advertisement playing sequence list and switching the advertisement being played after receiving a switching instruction sent by the man-machine interaction module. Wherein, advertisement broadcast module is the huge curtain MAX display screen in garage.
The voice collecting module is used for collecting voice information generated by a user group watching the advertisements around the advertisement playing module when the advertisement playing module plays each advertisement. The voice acquisition module is a plurality of sound pickups arranged around a garage huge screen MAX display screen; the sound pick-up distributes in the one side towards the huge curtain MAX display screen display surface in garage.
The video monitoring module is used for monitoring the user groups watching the advertisements around the advertisement playing module in multiple angles when the advertisement playing module plays each advertisement. The view finding range of the video monitoring module faces one side of a display surface of a huge-screen MAX display screen of the garage, the video monitoring module comprises a plurality of monitoring cameras, and the view finding range is shot by the monitoring cameras from different angles.
The advertisement characteristic data extraction module is used for extracting the playing time T of each advertisement played by the advertisement playing module and a keyword data set associated with the advertisement.
The user feedback data extraction module is used for: (1) and receiving the voice information acquired by the voice acquisition module to obtain voice stream data related to each advertisement. (2) And receiving the multi-angle monitoring video collected by the video monitoring module to obtain video stream data related to each advertisement. (3) And acquiring a switching instruction which is sent by a man-machine interaction module and requires to switch the currently played advertisement, and assigning the characteristic quantity SW representing the switching instruction as 1 when the switching instruction is received, otherwise assigning the SW as 0.
The image recognition module is used for carrying out image recognition on an image data set obtained by framing the video stream data, further extracting expressions of all users during the advertisement playing period, and classifying the expressions into one of likeness, neglect or dislike. The image recognition module comprises an expression recognition unit, and the expression recognition unit adopts a neural network recognition algorithm trained by a large number of training sets to complete the classification process of the expression of the user in the image.
The voice recognition module is used for carrying out voice recognition on voice stream data, and then: (1) and acquiring a voice interaction instruction which is sent by a user during the advertisement playing and is used for indicating that the currently played advertisement is required to be switched. (2) And extracting all words in the voice stream data, and finding out keywords matched with the characteristic data in the keyword data set.
The voice recognition module comprises a voice interaction instruction extraction unit and a keyword extraction unit, and the voice interaction instruction extraction unit sends the extracted voice interaction instruction to a voice interaction unit in the man-machine interaction module; the keyword extraction unit sends the extracted keywords matching the feature data in the keyword data set to the recognition degree evaluation value calculation module.
The video motion recognition module is used for carrying out video motion recognition on video stream data, and further: (1) and extracting a gesture interaction instruction which is sent by a certain user in the video stream data and represents that the currently played advertisement is required to be switched. (2) Extracting gesture actions which are sent out by a certain user and used for feeding back the currently played advertisement in the video stream data; (3) and extracting characteristic actions reflecting the eye attention position change of a certain user in the current advertisement playing process.
The video motion extraction module comprises a gesture interaction instruction extraction unit, a posture motion feedback extraction unit and a catch-of-eye feature motion extraction unit; the gesture interaction instruction extraction unit sends the extracted gesture interaction instruction to a gesture interaction unit in the man-machine interaction module; the gesture action feedback extraction unit and the catch-of-eye feature action extraction unit send the extracted feature data to the acceptance evaluation value calculation module.
The man-machine interaction module is used for acquiring an instruction sent by a user for switching the currently played advertisement and sending a switching instruction; as shown in fig. 13, the manner in which the user issues the advertisement requesting to switch the currently played advertisement includes key input, voice interaction, and gesture interaction. The man-machine interaction module comprises an entity key module which is used for receiving a key input instruction which is directly sent by a user and requires to switch the currently played advertisement; the man-machine interaction module also comprises a voice interaction unit and a gesture interaction unit; the voice interaction unit is used for acquiring a voice interaction instruction which is sent by a user and requires to switch the currently played advertisement, and the voice interaction instruction is obtained by performing voice recognition by the voice recognition module according to real-time voice stream data; the gesture interaction unit is used for acquiring a gesture interaction instruction which is sent by a user and requires to switch the currently played advertisement, and the gesture interaction instruction is obtained by performing video action recognition by the video action recognition module according to real-time video stream data.
The acceptance evaluation value calculation module is used for: (1) obtaining keywords identified by the speech recognition module and matched with the feature data in the keyword data set, and counting the number N of the keywords1. (2) Acquiring gesture actions which are recognized by a video action recognition module and represent that a user feeds back the currently played advertisement, and counting the number N of the gesture actions2. (3) Obtaining the characteristic action which is identified by the video action identification module and reflects the eye attention position change of a certain user in the current advertisement playing process, and calculating the attention duration t of the current user to the currently played advertisement according to the characteristic actionn(ii) a Duration of interest tnThe calculation formula of (a) is as follows:
Figure BDA0003125326250000161
in the above formula, t1nIndicating the direct view duration of the user with the number n during the playing of the current advertisement; t is t2nIndicating the eye closing time length of the user with the number n in the current advertisement playing period; t is t3nIndicating the underhead time of the user with the number n in the current advertisement playing period; t is t4nIndicating the turnaround time length of the user with the number n during the playing of the current advertisement. (4) And acquiring the number of the three-category-expression classification results of each user, which are identified by the image identification module, and calculating the ratio of the number of the three-category-expression classification results of each user in the total sample size. (5) The value of SW is obtained. (6) The acceptance evaluation value E of each user to the current advertisement is calculated by the following formulan
Figure BDA0003125326250000162
In the above formula, n represents the user number of the current user, EnIndicating the evaluation value of the currently played advertisement by the user with the number n, EnNot less than 0, and EnThe larger the value of (A) is, the higher the recognition degree of the user on the currently played multimedia is reflected;
Figure BDA0003125326250000163
representing the attention concentration of the user with the number n to the currently played advertisement; k is a radical of1Representing the influence factor of the voice information feedback on the overall recognition evaluation result; k is a radical of2Representing the influence factor of the attitude action feedback on the overall recognition evaluation result; k is a radical of3Representing the influence factor of the expression feedback on the overall recognition evaluation result; k is a radical of4Representing the influence factor of the concentration of attention on the overall recognition evaluation result; m is1A score representing a single keyword in the voice information feedback; m is2A score representing a single gesture in the gesture motion feedback; m is3A score representing concentration; a represents the score of the favorite expression, p1,nClassifying the liked expression of the user with the number n as the proportion of the total quantity of the images sampled at every other frame; b denotes a score of neglecting expression, p2,nClassifying the face which is ignored for the user with the number n in the ratio of the total number of the images sampled at every other frame; c denotes the score of aversive expression, p3,nOf the total number of images sampled every other frame for expressions classified as aversive by the user numbered nRatio of occupation.
The advertisement playing sequence updating module is used for: (1) obtaining the average acceptance evaluation result of all advertisements in the played advertisement sequence list in an updating period
Figure BDA0003125326250000164
Figure BDA0003125326250000165
The calculation formula of (a) is as follows:
Figure BDA0003125326250000166
in the above equation, i represents the number of each advertisement in the advertisement playlist. (2) According to individual advertisements
Figure BDA0003125326250000167
And sequencing all the played advertisements in the updating period from large to small to obtain a rating ranking table of the played advertisements. (3) And acquiring the advertisements needing to be added and the quantity of the advertisements, deleting the corresponding quantity of played advertisements which are ranked in the ranking list from the advertisement playing sequence list, adding the advertisements needing to be added and released into the advertisement playing sequence list, and finishing the updating process of the advertisement playing sequence list.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An intelligent media management system based on a VOC vehicle owner big data platform is characterized in that the intelligent media management system is used for obtaining the matching degree between a current user and an advertisement to be delivered, and further adjusting an advertisement playing sequence list; the intelligent media management system comprises:
the keyword extraction module is used for extracting a keyword data set associated with each advertisement in the advertisement playing sequence list, and the feature data in the keyword data set are a plurality of preset keywords related to the content of the advertisement;
the system comprises a historical user information query module, a user profile analysis module and a user profile analysis module, wherein the historical user information query module is used for querying a user profile data set of each historical user from an advertisement analysis database and acquiring various feature data of each historical user in the user profile data set;
the system comprises a user type classification module, a user type classification module and a user type classification module, wherein the user type classification module is used for extracting facial features of all target users in an advertisement delivery area; then, the extracted facial features are compared with the facial features of all historical users in the advertisement analysis database, and the current user is distinguished as a historical user or a newly added user;
the user label establishing module is used for establishing an empty user label for each newly added user, and the user label comprises an identity label, a favorite label and an aversion label; the user label establishing module is also used for adding a special user number in the identity label of each newly added user;
the identity characteristic identification module is used for extracting identity characteristics of the newly added user and adding the extracted identity characteristics into the identity label of the corresponding newly added user;
a target representation dataset creation module to:
(1) setting a historical user proportion critical value q0, and calculating the proportion q of the current user identified as the historical user in the advertisement delivery area in the current user group;
(2) judging the magnitude relation between q and q0, and making the following decision according to the judgment result:
when q is larger than or equal to q0, extracting feature data in the favorite labels of all historical users, and after the feature data are subjected to de-duplication, taking the de-duplicated feature data as a target image data set of the current user group;
(ii) when q is less than q0, extracting feature data in the favorite labels of all historical users; sequentially calculating the coincidence degree Dc1 of the content in the identity label of each newly added user and the content in the identity label of each historical user in each advertisement analysis database; extracting characteristic data in the preference label of the historical user with the maximum coincidence degree Dc1 with the identity label of each newly added user; merging the two parts of feature data, and taking the feature data as a target image data set of the current user group after the feature data is deduplicated; and
an advertisement playlist adjustment module to:
(1) calculating the coincidence degree Dc2 of the feature data in the keyword data set associated with each advertisement extracted by the keyword extraction module and the feature data in the target portrait data set;
(2) and reordering the advertisements in the advertisement playing sequence list according to the descending order of the calculation result of the Dc2 of each advertisement to obtain an adjusted advertisement playing sequence list.
2. The VOC vehicle owner big data platform based intelligent media management system of claim 1 wherein: the feature data within the keyword dataset associated with each advertisement includes at least:
(1) keywords reflecting the advertised promotional product;
(2) keywords reflecting targeted customer groups targeted by the advertisement;
(3) keywords reflecting a speaker of the advertisement or a character image of the advertisement;
(4) high frequency or special keywords in the ad;
(5) the time length of the advertisement is classified;
(6) the genre of the advertisement is classified.
3. The VOC vehicle owner big data platform based intelligent media management system of claim 1 wherein: the advertisement analysis database comprises a collected user portrait data set of historical users, the user portrait data set comprises facial feature data of each historical user and user labels, and the user labels comprise identity labels, favorite labels and aversion labels; the identity tag is stored with feature data reflecting the identity features of the user, and the feature data in the identity tag comprises: gender, age group, style of wear, and other characteristics; the other features represent identifiable non-gender, age group, and wear style features useful for distinguishing user identity features; the preference tag stores feature data of an object reflecting the preference of the user, and the aversion tag stores feature data of an object reflecting the aversion of the user.
4. The intelligent vehicle owner big data platform-based media management system of claim 3, wherein: the advertisement analysis database is stored in the VOC vehicle owner cloud big data platform.
5. The VOC vehicle owner big data platform based intelligent media management system of claim 1 wherein: the intelligent media management system based on the VOC vehicle owner big data platform is applied to an advertisement putting system with multi-angle monitoring equipment, and the advertisement putting system is used for playing advertisements to be put according to an advertisement playing sequence list; the multi-angle monitoring equipment is used for acquiring multi-angle monitored video stream data of all target users in an advertisement delivery area of the advertisement delivery equipment.
6. The intelligent vehicle owner big data platform-based media management system of claim 5, wherein: the data sources of the user type classification module and the identity characteristic identification module are all multi-angle monitored video stream data of the advertisement delivery area; the user type classification module comprises a facial feature extraction unit, a facial feature comparison unit and a user type classification unit, wherein the facial feature extraction unit is used for extracting facial features of all users appearing in the video stream data; the facial feature comparison unit is used for acquiring all facial features extracted by the facial feature extraction unit and the facial features of all historical users inquired by the historical user information inquiry module and comparing the facial features with the facial features of all historical users; the user type classification unit is used for classifying all users appearing in the video stream data into historical users or newly added users according to the distinguishing result of the facial feature comparison unit.
7. The intelligent vehicle owner big data platform-based media management system of claim 6, wherein:
the calculation formula of Dc1 is as follows:
Figure FDA0003348646480000021
8. the VOC vehicle owner big data platform based intelligent media management system of claim 7 wherein:
the calculation formula of Dc2 is as follows:
Figure FDA0003348646480000022
9. the VOC vehicle owner big data platform based intelligent media management system of claim 1 wherein: the identity characteristic identification module shoots a current user at multiple angles through the camera, then utilizes the image identification unit to identify the shot image reflecting the current user state, and further extracts the characteristics which are reflected in the image and have the same type as the characteristic data stored in the identity label.
10. The VOC vehicle owner big data platform based intelligent media management system of claim 9 wherein: the camera is one part of the identity characteristic identification module, or the camera is independent of an external device outside the intelligent media management system based on the VOC vehicle owner big data platform, and the identity characteristic identification module calls the camera to shoot images when needed.
CN202110687605.2A 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform Active CN113393275B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111481967.2A CN114155034A (en) 2021-06-21 2021-06-21 Method for evaluating acceptance of user to advertisement based on feature recognition
CN202110687605.2A CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110687605.2A CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111481967.2A Division CN114155034A (en) 2021-06-21 2021-06-21 Method for evaluating acceptance of user to advertisement based on feature recognition

Publications (2)

Publication Number Publication Date
CN113393275A CN113393275A (en) 2021-09-14
CN113393275B true CN113393275B (en) 2021-12-14

Family

ID=77623437

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111481967.2A Pending CN114155034A (en) 2021-06-21 2021-06-21 Method for evaluating acceptance of user to advertisement based on feature recognition
CN202110687605.2A Active CN113393275B (en) 2021-06-21 2021-06-21 Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111481967.2A Pending CN114155034A (en) 2021-06-21 2021-06-21 Method for evaluating acceptance of user to advertisement based on feature recognition

Country Status (1)

Country Link
CN (2) CN114155034A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109355B (en) * 2023-04-12 2023-06-16 广东玄润数字信息科技股份有限公司 Advertisement delivery analysis method, system and storage medium based on preference data
CN116503113B (en) * 2023-06-27 2023-09-12 深圳依时货拉拉科技有限公司 Vehicle body advertisement operation management method, system, computer equipment and storage medium
CN116823352A (en) * 2023-07-14 2023-09-29 菏泽学义广告设计制作有限公司 Intelligent advertisement design system based on remote real-time interaction

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010078539A2 (en) * 2009-01-04 2010-07-08 Robert Thomas Kulakowski Advertising profiling and targeting system
CN109874125A (en) * 2019-01-29 2019-06-11 上海博泰悦臻网络技术服务有限公司 The car owner's authorization method and system of bluetooth key, storage medium and vehicle Cloud Server

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0720992A2 (en) * 2006-12-28 2014-04-08 Sharp Kk AD DISTRIBUTION SYSTEM, AD DISTRIBUTION SERVER, AD DISTRIBUTION METHOD, PROGRAM AND RECORD
JP2012113355A (en) * 2010-11-19 2012-06-14 Japan Research Institute Ltd Advertisement information provision system, advertisement information provision method and advertisement information provision program
US10410245B2 (en) * 2013-05-15 2019-09-10 OpenX Technologies, Inc. System and methods for using a revenue value index to score impressions for users for advertisement placement
CN104573619A (en) * 2014-07-25 2015-04-29 北京智膜科技有限公司 Method and system for analyzing big data of intelligent advertisements based on face identification
CN106971317A (en) * 2017-03-09 2017-07-21 杨伊迪 The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique
CN110070393A (en) * 2019-06-19 2019-07-30 成都大象分形智能科技有限公司 Ads on Vehicles interacts jettison system under line based on cloud artificial intelligence
CN111882361A (en) * 2020-07-31 2020-11-03 苏州云开网络科技有限公司 Audience accurate advertisement pushing method and system based on artificial intelligence and readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010078539A2 (en) * 2009-01-04 2010-07-08 Robert Thomas Kulakowski Advertising profiling and targeting system
CN109874125A (en) * 2019-01-29 2019-06-11 上海博泰悦臻网络技术服务有限公司 The car owner's authorization method and system of bluetooth key, storage medium and vehicle Cloud Server

Also Published As

Publication number Publication date
CN113393275A (en) 2021-09-14
CN114155034A (en) 2022-03-08

Similar Documents

Publication Publication Date Title
CN113393275B (en) Intelligent medium management system based on VOC (volatile organic compound) vehicle owner big data platform
CN113379460A (en) Advertisement accurate delivery method based on user portrait
US10779037B2 (en) Method and system for identifying relevant media content
CN106971317A (en) The advertisement delivery effect evaluation analyzed based on recognition of face and big data and intelligently pushing decision-making technique
CN113435924B (en) VOC car owner cloud big data platform
CN107146096B (en) Intelligent video advertisement display method and device
US7987111B1 (en) Method and system for characterizing physical retail spaces by determining the demographic composition of people in the physical retail spaces utilizing video image analysis
KR20100107036A (en) Laugh detector and system and method for tracking an emotional response to a media presentation
CN108600865B (en) A kind of video abstraction generating method based on super-pixel segmentation
JP2019507533A (en) System and method for assessing viewer engagement
CN107305557A (en) Content recommendation method and device
CN104573619A (en) Method and system for analyzing big data of intelligent advertisements based on face identification
JP6807389B2 (en) Methods and equipment for immediate prediction of media content performance
CN110135883A (en) A kind of method and system of elevator crowd portrayal and commercial audience analysis
CN113377327A (en) Huge curtain MAX intelligent terminal in garage that possesses intelligent voice interaction function
CN113469737A (en) Advertisement analysis database creation system
WO2020253360A1 (en) Content display method and apparatus for application, storage medium, and computer device
CN110689367A (en) Data acquisition method and device, computer device and storage medium
CN110766454A (en) Method for collecting customer visit information of store and store subsystem architecture
CN108876430B (en) Advertisement pushing method based on crowd characteristics, electronic equipment and storage medium
CN112837098A (en) Mobile internet advertisement intelligent pushing system based on big data analysis
CN112417204A (en) Music recommendation system based on real-time road conditions
CN113506124B (en) Method for evaluating media advertisement putting effect in intelligent business district
US20190050890A1 (en) Video dotting placement analysis system, analysis method and storage medium
CN111046209B (en) Image clustering retrieval system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant