CN111951043A - Information delivery processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111951043A
Authority
CN
China
Prior art keywords
client
old
current
information
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010734582.1A
Other languages
Chinese (zh)
Inventor
黄崇远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010734582.1A priority Critical patent/CN111951043A/en
Publication of CN111951043A publication Critical patent/CN111951043A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0255 Targeted advertisements based on user history

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides an information delivery processing method and apparatus, a computer-readable storage medium, and an electronic device, relating to the technical field of data processing. The method includes the following steps: acquiring a current passenger flow image; taking each old customer in the current passenger flow image as a current old customer and acquiring the behavior information portrait of the current old customer; taking each new customer in the current passenger flow image as a current new customer and determining the personal information portrait of the current new customer; searching for similar old customers according to the personal information portrait of the current new customer, and generating a behavior information portrait for the current new customer from the behavior information portraits of the similar old customers; and determining target information matched with the current passenger flow image according to the behavior information portraits of the customers in the current passenger flow image. The method and apparatus can improve the pertinence of information delivery, in particular by predicting the interest preferences of new customers and delivering targeted information to them, thereby improving the information delivery effect.

Description

Information delivery processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an information delivery processing method, an information delivery processing apparatus, a computer-readable storage medium, and an electronic device.
Background
In common consumption and entertainment scenes, such as shopping malls, supermarkets, squares, restaurants, etc., information delivery has become a main publicity method, for example, advertisements are delivered on a digital large screen of a shopping mall or a square, public publicity information is delivered on a television screen of a supermarket or a restaurant, etc.
In the related art, information delivery generally proceeds as follows: a background management system enters the list of information to be delivered and sets playing parameters; a multimedia terminal device plays the information in the list according to the playing parameters; and customers view the corresponding information on the multimedia terminal device in the consumption scene.
In practice, however, customers are highly diverse, and different customers are interested in different information. The related art cannot effectively distinguish between customers, so the delivered information is poorly targeted, the delivery effect is poor, and the delivered information fails to attract customers.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an information delivery processing method, an information delivery processing apparatus, a computer-readable storage medium, and an electronic device, thereby improving an effect of information delivery at least to a certain extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, an information delivery processing method is provided, including: acquiring a current passenger flow image, wherein the current passenger flow image includes at least one customer; taking an old customer in the current passenger flow image as a current old customer, and acquiring a behavior information portrait of the current old customer; taking a new customer in the current passenger flow image as a current new customer, and determining a personal information portrait of the current new customer; searching for similar old customers according to the personal information portrait of the current new customer, and generating a behavior information portrait of the current new customer by using the behavior information portraits of the similar old customers, wherein a similar old customer is an old customer whose personal information portrait is similar to that of the current new customer; and determining target information matched with the current passenger flow image according to the behavior information portraits of the customers in the current passenger flow image, wherein the target information is to be delivered to a delivery terminal in the scene where the current passenger flow image is captured.
According to a second aspect of the present disclosure, there is provided an information delivery processing apparatus including: a passenger flow image acquisition module configured to acquire a current passenger flow image, the current passenger flow image including at least one customer; an old customer identification module configured to take an old customer in the current passenger flow image as a current old customer and acquire a behavior information portrait of the current old customer; a new customer identification module configured to take a new customer in the current passenger flow image as a current new customer and determine a personal information portrait of the current new customer; a similar old customer searching module configured to search for similar old customers according to the personal information portrait of the current new customer and generate a behavior information portrait of the current new customer by using the behavior information portraits of the similar old customers, wherein a similar old customer is an old customer whose personal information portrait is similar to that of the current new customer; and a target information determining module configured to determine target information matched with the current passenger flow image according to the behavior information portraits of the customers in the current passenger flow image, wherein the target information is to be delivered to a delivery terminal in the scene where the current passenger flow image is captured.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the information delivery processing method of the first aspect and possible implementations thereof.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor. Wherein the processor is configured to execute the information delivery processing method of the first aspect and possible embodiments thereof via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
according to the information delivery processing method and apparatus, the computer-readable storage medium, and the electronic device described above, on the one hand, matched target information is determined from the behavior information portraits of the customers in the current passenger flow image and delivered to a delivery terminal in the scene where the image was captured; since the target information is highly targeted and likely to attract the current passenger flow, the delivery effect can be improved. On the other hand, for a new customer in the current passenger flow image, a behavior information portrait is generated from the behavior information portraits of old customers whose personal information portraits are similar to the new customer's, so that the new customer's consumption and interest preferences are predicted, further improving the pertinence and accuracy of information delivery.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a system architecture diagram of an environment in which the present exemplary embodiment operates;
fig. 2 is a flowchart illustrating an information delivery processing method according to the present exemplary embodiment;
FIG. 3 illustrates a flowchart of configuring a behavior information portrait library in the present exemplary embodiment;
FIG. 4 illustrates a sub-flowchart of configuring the behavior information portrait library in the present exemplary embodiment;
FIG. 5 illustrates a flowchart of generating a personal information portrait in the present exemplary embodiment;
fig. 6 shows a schematic structural diagram of a CNN in the present exemplary embodiment;
fig. 7 shows a flowchart of determining target information in the present exemplary embodiment;
fig. 8 is a block diagram showing an information delivery processing apparatus in the present exemplary embodiment;
fig. 9 shows a block diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The exemplary embodiment of the present disclosure first provides an information delivery processing method, which is used for determining adaptive target information according to passenger flows at different times in different scenes, so as to deliver the information to a terminal device in the scene, and attract the passenger flows.
Fig. 1 is a system architecture diagram illustrating an operating environment of the information delivery processing method. As shown in fig. 1, the system architecture 100 may include a monitoring device 110, a backend device 120, and a delivery terminal 130. The monitoring device 110 may be a monitoring camera disposed in a shopping mall, a square, a supermarket, a restaurant, etc., configured to capture passenger flow images of the crowd and transmit them to the backend device 120. The backend device 120 is a computing device or device cluster deployed in the information delivery background, such as a personal computer, a server, or an associated database; after receiving a passenger flow image, it determines appropriate target information and sends the target information, directly or indirectly, to the delivery terminal 130. The delivery terminal 130 may be any terminal device with an information delivery function in the above scenarios, including but not limited to: a digital large screen, a television, a display, an audio playing device (i.e., information can be delivered as audio), or a POS (Point of Sale) terminal (delivered information can be printed on receipts).
In an alternative embodiment, the monitoring device 110 and the delivery terminal 130 may be integrated on the same device, for example, the delivery terminal 130 is a digital large screen, and the monitoring device 110 may be a camera built in the digital large screen.
It should be understood that the number of the devices shown in fig. 1 is only exemplary, and may be arbitrarily set according to actual requirements. For example, the backend device 120 is typically connected to a large number of monitoring devices 110 and delivery terminals 130, or the backend device 120 may be a cluster formed by a plurality of devices, or the like. The present disclosure is not limited thereto.
The information delivery processing method according to the exemplary embodiment of the present disclosure is specifically described below with reference to fig. 2. As shown in fig. 2, the information delivery processing method may include the following steps S210 to S230:
step S210, acquiring a current passenger flow image, where the current passenger flow image includes at least one customer.
After the monitoring device captures a live image, it transmits the image to the backend device. The backend device may detect whether the live image contains any person: if so, it determines the image to be the current passenger flow image and performs the processing of the following steps; if not, it determines that the image is not a passenger flow image and skips the following steps.
It should be noted that the monitoring device is generally disposed at a position with a large flow of people, so that the current passenger flow image generally contains more than one person, and is therefore called a passenger flow image.
Step S220, taking the old customer in the current passenger flow image as the current old customer, and acquiring the behavior information portrait of the current old customer.
In the present exemplary embodiment, each person appearing in the current passenger flow image is detected and classified as an old customer or a new customer. An old customer is a customer that the backend device has previously detected, or has detected within a set validity period; otherwise, the customer is a new customer. For example, if the backend device detected customer A in a passenger flow image three days ago and now recognizes customer A again in the current passenger flow image, customer A is an old customer; if the backend device finds no detection history for customer A, customer A is determined to be a new customer. Setting a validity period for detections ensures that the interval between two detections of the same customer is not too long, because a customer's interest preferences may change over time. For example, with the validity period set to one month, if customer A was last detected more than one month before the current detection, customer A is determined to be a new customer.
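The old/new classification with a validity window described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation; the function and variable names, and the 30-day window, are assumptions for the example.

```python
from datetime import datetime, timedelta

# Hypothetical validity window: a customer last seen longer ago than this
# is treated as new again, since their interest preferences may have changed.
VALIDITY = timedelta(days=30)

def classify_customer(customer_id, last_seen, now=None):
    """Classify a detected customer as 'old' or 'new'.

    last_seen maps customer IDs to the datetime of their most recent
    detection; IDs absent from the map have never been detected.
    """
    now = now or datetime.now()
    seen_at = last_seen.get(customer_id)
    if seen_at is not None and now - seen_at <= VALIDITY:
        label = "old"
    else:
        label = "new"              # never seen, or seen too long ago
    last_seen[customer_id] = now   # record the current detection
    return label

history = {"A": datetime(2020, 7, 1)}
# Detected again three days later: within the window, so an old customer.
print(classify_customer("A", history, now=datetime(2020, 7, 4)))  # old
# No detection history at all: a new customer.
print(classify_customer("B", history, now=datetime(2020, 7, 4)))  # new
```

Recording the detection time on every classification also means a customer who reappears after more than a month is automatically demoted to "new", matching the example in the text.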
For an old customer, the backend device may record behavior information, which generally falls into two types. The first type is mainly non-consumption behavior information, for example, that a customer appeared in a certain scene, watched a certain advertisement, stayed for a long time in front of a certain counter or shelf, or handled a certain commodity (such as picking it up and putting it down); the backend device can identify, count, and record this type of information from the images captured by the monitoring device. The second type is mainly consumption behavior information, such as payments in a supermarket, spending in a restaurant, or movie ticket purchases; the backend device can collect, count, and record this type of information through payment and settlement devices (such as POS terminals), online transaction systems, and the like. The backend device may then generate a behavior information portrait from an old customer's behavior information, for example by tagging the different types of information and using each old customer's tag set as that customer's behavior information portrait.
In an alternative embodiment, the behavior information portrait of the current old customer may be looked up in a pre-configured behavior information portrait library. Specifically, the backend device stores the behavior information portraits of different customers in a behavior information portrait library, for example using each customer's number as an index, so that the behavior information portrait of the current old customer can be found by searching the library.
Referring to FIG. 3, the configuration process of the behavior information representation library may include the following steps S310 to S340:
in step S310, when a customer is detected in the image of the preset scene, the detected customer is determined as an old customer.
A preset scene is a consumption or activity scene that can reflect people's interest preferences. The backend device can define a scene range according to the information delivery type, service type, and so on; scenes within that range are preset scenes, including but not limited to: shopping malls, supermarkets, squares, restaurants, brand stores, KTVs, cinemas, gymnasiums, stadiums, and the like. In the present exemplary embodiment, each preset scene may be labeled. For example, a shopping mall may be given a "mall shopping" label, and different floors and areas of the mall may be given more detailed labels such as "menswear" or "sportswear"; a cinema may be given a "movie entertainment" label, and screenings at different times may be given more subdivided labels such as "adventure movie" or "action movie" according to the types of films shown.
Generally, when a client appears in a preset scene, the client can be considered to have behavior information, and the client is taken as an old client.
Step S320, generating a first type behavior tag of the old customer according to the tag of the preset scene where the old customer is located.
Based on the labels of the preset scenes, a customer can be considered to have a corresponding consumption tendency or interest preference: a customer who appears in a cinema receives the cinema's label (e.g., "movie entertainment"), and a customer who appears in a Cantonese restaurant receives the restaurant's label (e.g., "Cantonese cuisine"). The labels of all the preset scenes in which a customer has appeared can form a set serving as the customer's first-type behavior labels; weights may also be assigned to the labels of different preset scenes according to the customer's behavior in each scene to obtain the first-type behavior labels.
And step S330, generating a second-class behavior label of the old client according to the consumption information of the old client.
The backend device can also collect old customers' consumption information, for example by connecting to a mall's checkout system, a cinema's ticketing system, a restaurant's ordering and billing system, a supermarket's cash register system, or a KTV's song ordering system; the consumption information collected through these systems is mapped to specific labels. After the consumption information is acquired, features may be extracted and processed with a model such as a CNN (Convolutional Neural Network) to output labels of certain dimensions. Weights may also be assigned to different labels according to the strength of the customer's consumption behaviors (including consumption frequency, consumption amount, and the like). The labels corresponding to the consumption information are the second-type behavior labels.
In an alternative embodiment, step S330 may include:
determining the preset scene in which the old customer's consumption behavior occurs as the old customer's consumption scene;
generating the second-type behavior labels of the old customer according to the old customer's consumption scene and the categories of the objects consumed in that scene.
For example, suppose consumption occurs at a restaurant or a cinema. A restaurant's system divides its dishes into several cuisine categories, such as Cantonese, Hunan, and Sichuan cuisine; when an old customer spends money at the restaurant, the restaurant is taken as the consumption scene, and the restaurant's ordering and billing system uploads the consumption details to the backend device, which assigns related labels to the old customer, such as a preference for Cantonese or Sichuan cuisine, according to the cuisine categories consumed. A cinema divides its films into several categories, such as adventure, action, and romance; when an old customer buys movie tickets, the cinema is taken as the consumption scene, the cinema's ticketing system uploads the purchase information, and the backend device assigns related labels to the old customer, such as "adventure movie" or "action movie", according to the categories of the films purchased.
Step S340, generating a behavior information portrait of the old customer based on the first-type and second-type behavior labels of the old customer, and writing the behavior information portrait into the behavior information portrait library.
After the first-type and second-type behavior labels of an old customer are obtained, the two types of labels can simply be combined into one tag set serving as the old customer's behavior information portrait; alternatively, the two types of labels can be ranked together, for example by weight, and a certain number of the highest-ranked labels selected to form the behavior information portrait.
In this way, behavior information portraits of different old customers are generated and stored in a dedicated database, forming the behavior information portrait library.
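The merging and weight-ranking of the two label types in step S340 can be sketched as follows. The tag names, weights, and the top-k cutoff are illustrative assumptions, not values from the patent.

```python
from collections import Counter

def build_portrait(first_type_tags, second_type_tags, top_k=5):
    """Merge the two weighted tag sets and keep the top_k tags by weight.

    Both arguments map a tag (e.g. "movie entertainment", "Cantonese
    cuisine") to a weight; the weights of tags appearing in both sets add up.
    """
    merged = Counter(first_type_tags)
    merged.update(second_type_tags)          # shared tags accumulate weight
    return dict(merged.most_common(top_k))   # ranked, truncated portrait

first = {"mall shopping": 0.4, "movie entertainment": 0.3}
second = {"movie entertainment": 0.5, "Cantonese cuisine": 0.2}
portrait = build_portrait(first, second, top_k=3)
# "movie entertainment" accumulates weight from both label types
```

Keeping only the highest-weighted tags is one way to realize the "select a certain number of higher-ranked labels" variant mentioned above.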
Since the number of old customers is generally huge, the process above records a behavior information portrait for every old customer. To reduce the data size of the behavior information portrait library, the data of similar old customers can be merged. In an alternative embodiment, when configuring the behavior information portrait library, referring to fig. 4, the following steps S410 and S420 may also be performed:
step S410, generating a personal information portrait of the old client according to the appearance characteristics of the old client in the image of the preset scene.
For example, the appearance characteristics may cover several aspects such as gender, age, and wearing type. These characteristics of the old customer are recognized from the image of the preset scene and mapped to specific labels: the gender label may be male or female; the age label may be one of several age brackets such as 1-15, 15-20, or 20-25 years; and the wearing-type label may be casual, formal, sports, etc. The resulting label set can serve as the old customer's personal information portrait.
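The mapping from recognized appearance attributes to a personal information portrait can be sketched as below. The label vocabularies and age bins merely follow the examples in the text; the upper "25+" bracket and all names are illustrative assumptions.

```python
# Illustrative age brackets following the example ranges in the text.
AGE_BINS = [(1, 15, "1-15"), (15, 20, "15-20"), (20, 25, "20-25"), (25, 200, "25+")]

def personal_portrait(gender, age, wearing):
    """Map recognized appearance attributes to a personal information portrait.

    gender: e.g. "male" or "female"; wearing: e.g. "casual", "formal", "sports".
    """
    age_label = next(label for lo, hi, label in AGE_BINS if lo <= age < hi)
    return {"gender": gender, "age": age_label, "wearing": wearing}

print(personal_portrait("female", 22, "sports"))
# {'gender': 'female', 'age': '20-25', 'wearing': 'sports'}
```

In practice the attribute values would come from the CNN described in steps S510 and S520 rather than being passed in directly.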
Further, referring to fig. 5, step S410 may be implemented by the following steps S510 and S520:
step S510, processing the image of the preset scene by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing type label of an old client in the image of the preset scene;
in step S520, a personal information portrait of the old client is generated based on the gender label, age label, and wearing type label of the old client.
A CNN extracts image features through convolution, and the convolution operation preserves the spatial relationships between pixels. Fig. 6 shows the basic structure of a CNN, comprising an input layer, hidden layers, and an output layer, where at least one hidden layer performs a convolution operation. Rather than connecting every input to every neuron, a CNN restricts connections so that each neuron accepts inputs only from a small region of the previous layer (e.g., 3 x 3 or 5 x 5); each neuron is thus responsible for processing only a particular portion of the image. This mimics the way individual neurons in the visual cortex of the human brain work, with each neuron responding only to a small portion of the whole field of view. Technically, restricting connections between neurons to local regions in this way keeps the image processing computationally tractable.
The specific operation of CNN is explained with reference to fig. 6, which includes the following steps:
the CNN scans the input image of the preset scene and extracts features through convolution operations; after activation, a stack of activation maps is obtained as intermediate data, where each activation map corresponds to one applied convolution filter;
the activation maps are compressed by downsampling (e.g., pooling);
convolution, activation, and downsampling are performed multiple times through different hidden layers, so that image features are extracted at different scales;
the fully connected layer maps the extracted features to output labels; in the present exemplary embodiment, labels of three dimensions may be output: a gender label, an age label, and a wearing-type label.
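The convolution, activation, and downsampling steps above can be sketched with a toy forward pass in pure Python. This is a minimal illustration of the operations, not the patent's network: the image size, the single random filter, and all function names are assumptions, and a real CNN would stack many such layers and end with trained fully connected heads.

```python
import random

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(fm):
    """Activation: zero out negative responses in the feature map."""
    return [[max(0.0, v) for v in row] for row in fm]

def maxpool2(fm):
    """2x2 max pooling with stride 2 (the downsampling step)."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

random.seed(0)
image = [[random.random() for _ in range(6)] for _ in range(6)]
kernel = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]

# 6x6 image -> 4x4 activation map -> 2x2 pooled features
features = maxpool2(relu(conv2d(image, kernel)))
flat = [v for row in features for v in row]
# A real network would feed `flat` into fully connected layers producing
# one score vector per label dimension (gender, age, wearing type).
print(len(flat))  # 4
```

Each convolution filter yields one activation map; stacking several filters and repeating conv/activate/pool at increasing depth is what lets the network capture features at different scales.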
Step S420, writing the mapping relation between the personal information portrait of the old client and the behavior information portrait into the behavior information portrait base.
In the present exemplary embodiment, old customers with identical or similar personal information portraits may be merged into one group, and their behavior information portraits either merged into one portrait or represented by the most representative one; this yields a mapping from personal information portraits to behavior information portraits, which is recorded in the behavior information portrait library. In this way, information need not be stored for each old customer individually, which reduces the data in the behavior information portrait library and makes later lookups in the library easier; moreover, storing data at the granularity of each class of old customers improves the generalization of the data and reduces the influence of extreme individuals.
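The grouping and merging just described can be sketched as follows. Representing a personal information portrait as a hashable tuple and merging behavior tags by summed weight are illustrative assumptions; the patent does not prescribe this data model.

```python
from collections import defaultdict, Counter

def build_mapping(customers):
    """Group old customers by personal information portrait and merge their
    behavior information portraits into one representative portrait per group.

    customers: list of (personal_portrait, behavior_tags) pairs, where the
    personal portrait is a hashable tuple such as ("male", "20-25", "sports")
    and behavior_tags maps tags to weights.
    """
    groups = defaultdict(Counter)
    for personal, behavior in customers:
        groups[personal].update(behavior)   # accumulate tag weights per group
    # Keep only the strongest tags as the group's representative portrait.
    return {personal: dict(tags.most_common(3))
            for personal, tags in groups.items()}

library = build_mapping([
    (("male", "20-25", "sports"), {"gym": 2, "action movie": 1}),
    (("male", "20-25", "sports"), {"gym": 1, "basketball": 3}),
])
# One entry per personal-portrait group, not per individual customer.
```

The resulting dictionary realizes the mapping relationship of step S420: looking up a personal information portrait returns the group's representative behavior information portrait.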
It should be added that the data in the behavior information portrait library can be continuously updated. Generally, when an old client generates new consumption-related information, an update of the behavior information portrait can be triggered. An example is as follows:
when an old client consumes in a certain consumption scene, the consumption terminal system, such as a cash register system or a ticket purchasing system, immediately uploads the consumption information to the back-end device;
the back-end device generates a label for the current consumption according to the content category of the consumption scene, such as commodity type, movie type, or dish type, and the specific consumption information of the old customer.
If a behavior information portrait of the client is already stored, the newly generated label is added to that behavior information portrait.
With the increase of the consumption information quantity and scene quantity of old customers, the behavior information portrayal is richer and more accurate.
If the behavior information portrait of each old client is stored individually in the behavior information portrait library, for example indexed by the identity of the old client, then after the current old client is identified in the current passenger flow image, the behavior information portrait of that old client can be looked up in the library through the identity of the current old client. If the library stores the mapping relationship between the personal information portrait and the behavior information portrait of each type of old client, then after the current old client is identified in the current passenger flow image, the personal information portrait of the current old client can be extracted from the current passenger flow image, and the behavior information portrait corresponding to that personal information portrait can be looked up in the library.
And step S230, taking the new client in the current passenger flow image as the current new client, and determining the personal information portrait of the current new client.
Because a new client appears for the first time, or for the first time within the effective period, the new client has no valid behavior history, and behavior information of the new client cannot currently be acquired. In the present exemplary embodiment, a personal information portrait of the current new customer is therefore determined as a feature. For example, the personal information currently entered by the new customer, such as name, age, gender, and the model of the mobile terminal (such as a mobile phone) used, can be acquired from an entry and exit registration system in scenes such as a shopping mall or supermarket; the personal information portrait can also be obtained by identifying and extracting the appearance of the current new client in the current passenger flow image.
In an alternative embodiment, step S230 may be implemented by:
processing the current passenger flow image by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing type label of a current new client in the current passenger flow image;
and generating the personal information portrait of the current new client according to the gender label, the age label and the wearing type label of the current new client.
The above process of extracting the labels of the current new client through the CNN and generating the personal information portrait can refer to the description of fig. 5 above, and is therefore not repeated. In this way, the old clients in the behavior information portrait library have personal information portraits of the same type and dimensions as the current new client, which makes it convenient to search for similar old clients according to the personal information portrait.
Step S240, searching similar old clients according to the personal information portrait of the current new client, and generating the behavior information portrait of the current new client by using the behavior information portrait of the similar old clients.
Wherein the similar old client is an old client similar to the personal information representation of the current new client. In an alternative embodiment, the similarity between the personal information representation of the current new client and the personal information representation of each old client in the behavior information representation library may be calculated, and the old client with the highest similarity of the personal information representation may be determined as the similar old client of the current new client. For example, the personal information figures of the current new client and the old client, such as sex, age, wearing type, and the like, can be vectorized to obtain personal information vectors; then, similarity calculation is performed on the personal information vector of the current new customer and the personal information vectors of the old customers respectively, for example, cosine similarity can be calculated by adopting the following formula (1):
score_sim = cos([V0_gender, V0_age, V0_dress], [V1_gender, V1_age, V1_dress])   (1)
where score_sim denotes the similarity, V_gender denotes the gender vector, V_age the age vector, and V_dress the wearing category vector; the subscript 0 identifies the current new customer and the subscript 1 identifies the old customer.
Of course, the similarity may also be calculated in other manners, such as the Euclidean distance or the Manhattan distance. For example, when the Euclidean distance between the personal information vectors of the current new customer and an old customer is calculated as d, the similarity score may be taken as 1/(1+d).
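The cosine similarity of formula (1) and the Euclidean-distance alternative can be sketched as follows. The three-dimensional vectors and client identifiers are hypothetical numeric encodings of the gender, age and wearing-category portraits.

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) as in formula (1)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def euclidean_score(u, v):
    """Alternative similarity: 1 / (1 + d) with d the Euclidean distance."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return 1 / (1 + d)

# Hypothetical personal information vectors: [gender, age, wearing category].
new_client = [1.0, 0.3, 0.7]
old_clients = {"A": [1.0, 0.3, 0.7], "B": [0.0, 0.9, 0.2]}

# The old client with the highest similarity becomes the similar old client.
best = max(old_clients,
           key=lambda k: cosine_similarity(new_client, old_clients[k]))
```

With these stand-in vectors, old client "A" matches the new client exactly and is therefore selected.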
In an alternative embodiment, the old client with the highest similarity of the personal information representation can be used as the similar old client, and the behavior information representation of the similar old client is assigned to the current new client, namely the current new client and the similar old client have the same behavior information representation.
In an alternative embodiment, a plurality of old clients with high similarity may also be selected as similar old clients, and the behavior information portraits of these similar old clients may then be weighted, for example using the similarities between their personal information vectors and that of the current new client as weights, so as to obtain the behavior information portrait of the current new client.
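The similarity-weighted combination described above can be sketched as follows. The client identifiers, similarity values and tag strengths are hypothetical, and normalizing the similarities so they sum to one is one reasonable reading of the weighting, not the only possible one.

```python
def weighted_portrait(similar_old, portraits):
    """Merge behavior portraits of similar old clients, weighted by similarity.

    similar_old: {client_id: similarity to the current new client}
    portraits:   {client_id: {behavior tag: strength}}
    """
    total = sum(similar_old.values())
    merged = {}
    for cid, sim in similar_old.items():
        for tag, strength in portraits[cid].items():
            merged[tag] = merged.get(tag, 0.0) + (sim / total) * strength
    return merged

# Two similar old clients with hypothetical similarities and portraits.
similar_old = {"A": 0.9, "B": 0.6}
portraits = {"A": {"movie": 1.0}, "B": {"movie": 0.5, "dining": 1.0}}

new_portrait = weighted_portrait(similar_old, portraits)
```

Tags shared by several similar old clients accumulate weight, so the resulting portrait emphasizes behaviors common to the matched client types.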
And step S250, determining target information matched with the current passenger flow image according to the behavior information portrait of the client in the current passenger flow image.
By the above processing of the current old client and the current new client, behavior information portraits of each client in the current passenger flow image are obtained, and target information matched with the current passenger flow can be determined based on the behavior information portraits. The target information is information for being released to a releasing terminal in a scene where the current passenger flow image is located.
The behavior information portrait reflects the customer's consumption and interest preferences, from which matching target information can be found. For example, when the behavior information portraits of most customers in the current passenger flow carry a "movie entertainment" label, advertisement information related to movies may be used as the target information. In an alternative embodiment, the information available for delivery can be used as candidate information, and a category label is set for each piece of candidate information in advance, using the same dimensions as the labels in the behavior information portrait; for example, for an advertisement for a Cantonese restaurant the category label may be "Cantonese cuisine", for a promotional video for an adventure movie the category label may be "adventure movie", and so on. Based on this, referring to fig. 7, step S250 may include the following steps S710 to S730:
step S710, obtaining the category label of each candidate message;
step S720, counting a first class behavior label and a second class behavior label in the behavior information portrait of each client in the current passenger flow image, and calculating the matching degree of the first class behavior label and the second class behavior label of each candidate information;
step S730, determining the candidate information corresponding to the highest matching degree as the target information.
According to the first class behavior tag and the second class behavior tag in the behavior information portrait of each client in the current passenger flow image, the matching degree of the behavior information portrait of each client and the class tags of the candidate information can be calculated, the matching condition of the tags of each client is counted, and the matching degree of the current passenger flow image and the candidate information can be comprehensively measured. The following formula (2) may be referred to:
score_tag = Σ_{i=1}^{n} (w1·tag1_i + w2·tag2_i)   (2)
where score_tag represents the matching degree between the category label of the candidate information and the current passenger flow image, n denotes the number of customers in the current passenger flow image, and i denotes an arbitrary i-th customer. tag1 represents a first-type behavior label and tag2 a second-type behavior label: for customer i, tag1_i takes the value 1 when the first-type behavior labels of the behavior information portrait include tag1, and 0 otherwise; tag2_i takes the value 1 when the second-type behavior labels include tag2, and 0 otherwise. w1 and w2 are the weights of the first-type and second-type behavior labels respectively; for example, where the second-type behavior label represents consumption information, its weight w2 may be higher than 0.5 (such as 0.6) and w1 lower than 0.5 (such as 0.4). The present disclosure does not limit these values, which can be adjusted according to actual requirements. Since each customer matches different labels differently, the present exemplary embodiment sums over all n customers in the current passenger flow image as the final cumulative matching degree; of course, the result of formula (2) may also be divided by n to obtain an average value as the matching degree.
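The accumulation of formula (2) can be sketched as follows. The example clients and tags are hypothetical; the weights w1 = 0.4 and w2 = 0.6 follow the example values suggested above.

```python
def matching_degree(clients, tag1, tag2, w1=0.4, w2=0.6):
    """Formula (2): sum weighted tag hits over all clients in the image.

    clients: list of (first-type behavior tags, second-type behavior tags)
    tag1, tag2: category labels of the candidate information to match against
    """
    score = 0.0
    for first_tags, second_tags in clients:
        score += w1 * (1 if tag1 in first_tags else 0)
        score += w2 * (1 if tag2 in second_tags else 0)
    return score

# Two hypothetical clients in the current passenger flow image.
clients = [
    ({"mall"}, {"movie"}),              # scene tags, consumption tags
    ({"cinema"}, {"movie", "dining"}),
]

# Matching degree of a candidate advertisement labeled ("cinema", "movie").
score = matching_degree(clients, tag1="cinema", tag2="movie")
```

The candidate with the highest such score across all candidate information would then be selected as the target information in step S730.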
In an alternative embodiment, the candidate information corresponding to the highest matching degree may be used as the target information, as it has the highest probability of attracting the current passenger flow.
In an optional implementation, a plurality of pieces of candidate information with high matching degrees may also be used as target information and sorted in order of matching degree to form an information delivery list.
The target information is determined for delivery, and therefore, in an alternative embodiment, as shown in fig. 2, step S260 may be further performed: and releasing the target information to a releasing terminal in the scene of the current passenger flow image. For example, when the current passenger flow image is shot by a camera at an entrance of a certain market, the backend device may launch the target information to a nearby launch terminal, such as a digital large screen at the entrance of the market, a display at a first floor of the market, and the like.
In an optional embodiment, after the target information is released, the effect of the release can be evaluated. Specifically, as shown in fig. 2 above, the following steps S270 and S280 may be implemented:
step S270, acquiring a field feedback image from the monitoring equipment in the scene where the current passenger flow image is located;
and step S280, counting the number of people watching the target information and the time length of the watching target information according to the field feedback image to determine the release feedback information of the target information.
The live feedback image generally includes a plurality of consecutive frames, for example live images taken within 60 or 300 seconds after the target information is delivered. The back-end device may sample the live feedback images, for example one image per second, and then identify whether the customers in each image are viewing the target information; for example, a CNN can be used to output, for each face in the image, whether it is facing the target information. The back-end device can then count two indices: the number of people viewing the target information, and the duration for which the target information is viewed. Combining the two indices yields the delivery feedback information of the target information, which can be expressed by the following formula (3):
score_inf = w3·count + w4·duration   (3)
where score_inf is the delivery feedback score of the information, count is the number of viewers, duration is the viewing duration, and w3 and w4 are the weights of the two indices, e.g., w3 is 0.4 and w4 is 0.6; the present disclosure does not limit their specific values.
In calculating both the count and duration indices, a normalization process may be performed to bring the two indices to the same numerical level. Taking count as an example, the normalization can refer to formula (4):
count = (count_ori − count_min) / (count_max − count_min)   (4)
where count_ori represents the original statistic of the number of viewers, and count_max and count_min represent the maximum and minimum values, respectively, of the historical statistics of this index. The count thus obtained lies in the range 0 to 1.
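Formulas (3) and (4) can be combined into a short sketch as follows. The raw statistics and the historical minima/maxima are hypothetical values chosen for illustration.

```python
def normalize(value, hist_min, hist_max):
    """Formula (4): min-max normalize a raw statistic into [0, 1]."""
    return (value - hist_min) / (hist_max - hist_min)

def feedback_score(count_ori, duration_ori, count_range, duration_range,
                   w3=0.4, w4=0.6):
    """Formula (3): weighted sum of the normalized viewer count and duration."""
    count = normalize(count_ori, *count_range)
    duration = normalize(duration_ori, *duration_range)
    return w3 * count + w4 * duration

# Hypothetical statistics: 30 viewers out of a historical range of 0-60,
# 45 seconds of viewing out of a historical range of 0-90 seconds.
score = feedback_score(count_ori=30, duration_ori=45,
                       count_range=(0, 60), duration_range=(0, 90))
```

A low score would indicate a poor delivery effect, in which case the target information can be replaced in time as described below.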
Through the above steps S270 and S280, the delivery effect of the target information can be evaluated, and when the delivery effect of the target information is not good, other information can be replaced in time, so as to reduce the waste of information delivery.
Based on the above, in the present exemplary embodiment, on one hand, the matched target information is determined through the behavior information portraits of the customers in the current passenger flow image, for delivery to a delivery terminal in the scene where the current passenger flow image is located; the target information is thus more targeted and more likely to attract the current passenger flow, which improves the information delivery effect. On the other hand, for a new client in the current passenger flow image, the behavior information portrait of the current new client is generated using the similarity between the personal information portraits of the new client and old clients, thereby predicting the consumption and interest preferences of the current new client and further improving the pertinence and accuracy of information delivery.
The exemplary embodiment of the present disclosure also provides an information delivery processing apparatus. As shown in fig. 8, the information delivery processing apparatus 800 may include:
a passenger flow image obtaining module 810, configured to obtain a current passenger flow image, where the current passenger flow image includes at least one customer;
the old client identification module 820 is used for taking the old client in the current passenger flow image as the current old client and acquiring the behavior information portrait of the current old client;
a new client identification module 830, configured to take the new client in the current passenger flow image as the current new client and determine the personal information portrait of the current new client;
the similar old client searching module 840 is used for searching similar old clients according to the personal information portrait of the current new client and generating a behavior information portrait of the current new client by using the behavior information portrait of the similar old clients; the similar old client is an old client similar to the personal information portrait of the current new client;
a target information determining module 850, configured to determine target information matching the current passenger flow image according to the behavior information portrait of the customer in the current passenger flow image; the target information is used for being released to a releasing terminal in the scene where the current passenger flow image is located.
In an alternative embodiment, the old customer identification module 820 is configured to:
and searching the behavior information portrait of the current old client in a pre-configured behavior information portrait library.
In an optional implementation, the information delivery processing apparatus 800 may further include a representation library configuration module configured to:
when a client is detected in an image of a preset scene, determining the detected client as an old client;
generating a first-class behavior label of the old client according to the label of the preset scene where the old client is located;
generating a second type of behavior label of the old client according to the consumption information of the old client;
and generating a behavior information portrait of the old client based on the first type behavior tag and the second type behavior tag of the old client, and writing the behavior information portrait into a behavior information portrait library.
In an alternative embodiment, a representation library configuration module is configured to:
determining a preset scene of consumption behavior of an old client as a consumption scene of the old client;
and generating a second type of behavior label of the old client according to the consumption scene of the old client and the consumption object category of the old client in the consumption scene.
In an alternative embodiment, a representation library configuration module is configured to:
generating a personal information portrait of an old client according to the appearance characteristics of the old client in the image of the preset scene;
and writing the mapping relation between the personal information portrait of the old client and the behavior information portrait into the behavior information portrait library.
In an alternative embodiment, a representation library configuration module is configured to:
processing the image of the preset scene by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing type label of an old client in the image of the preset scene;
and generating the personal information portrait of the old client according to the sex label, the age label and the wearing type label of the old client.
In an alternative embodiment, the affinity old customer lookup module 840 is configured to:
and calculating the similarity between the personal information portrait of the current new client and the personal information portrait of each old client in the behavior information portrait library, and determining the old client with the highest similarity of the personal information portrait as the similar old client of the current new client.
In an alternative embodiment, the target information determination module 850 is configured to:
acquiring a category label of each candidate message;
counting a first class behavior label and a second class behavior label in a behavior information portrait of each customer in a current passenger flow image, and calculating the matching degree of the first class behavior label and the second class behavior label of each candidate information;
and determining the candidate information corresponding to the highest matching degree as the target information.
In an alternative embodiment, the new client identification module 830 is configured to:
processing the current passenger flow image by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing type label of a current new client in the current passenger flow image;
and generating the personal information portrait of the current new client according to the gender label, the age label and the wearing type label of the current new client.
In an optional implementation, the information delivery processing apparatus 800 may further include a target information delivery module configured to:
and releasing the target information to a releasing terminal in the scene of the current passenger flow image.
In an optional implementation, the information delivery processing apparatus 800 may further include an information delivery feedback module configured to:
acquiring a field feedback image from monitoring equipment in a scene where a current passenger flow image is located;
and counting the number of people watching the target information and the time length of watching the target information according to the field feedback image so as to determine the release feedback information of the target information.
The specific details of each part in the above device have been described in detail in the method part embodiments, and thus are not described again.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 2 may be performed.
The program product may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device, which may be the backend device 120 in fig. 1. The electronic device is explained below with reference to fig. 9. It should be understood that the electronic device 900 shown in fig. 9 is only one example and should not bring any limitations to the functionality or scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one memory unit 920, a bus 930 connecting different system components (including the memory unit 920 and the processing unit 910), a display unit 940.
Where the storage unit stores program code, which may be executed by the processing unit 910, to cause the processing unit 910 to perform the steps according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of the present specification. For example, processing unit 910 may perform method steps, etc., as shown in fig. 2.
The storage unit 920 may include volatile memory units such as a random access memory unit (RAM)921 and/or a cache memory unit 922, and may further include a read only memory unit (ROM) 923.
Storage unit 920 may also include a program/utility 924 having a set (at least one) of program modules 925, such program modules 925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 930 may include a data bus, an address bus, and a control bus.
The electronic device 900 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), which may be through an input/output (I/O) interface 950. The electronic device 900 further comprises a display unit 940 connected to the input/output (I/O) interface 950 for displaying. Also, the electronic device 900 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 via the bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (14)

1. An information delivery processing method, comprising:
acquiring a current passenger flow image, wherein the current passenger flow image comprises at least one customer;
taking the old client in the current passenger flow image as the current old client, and acquiring the behavior information portrait of the current old client;
determining the personal information portrait of the current new client by taking the new client in the current passenger flow image as the current new client;
searching similar old clients according to the personal information portrait of the current new client, and generating a behavior information portrait of the current new client by using the behavior information portrait of the similar old clients; the similar old client is an old client similar to the personal information portrait of the current new client;
determining target information matched with the current passenger flow image according to the behavior information portrait of the client in the current passenger flow image; the target information is used for being released to a releasing terminal in the scene where the current passenger flow image is located.
2. The method of claim 1, wherein obtaining the behavioral information representation of the current old customer comprises:
and searching the behavior information portrait of the current old client in a pre-configured behavior information portrait library.
3. The method of claim 2, wherein the behavioral information imagery library is configured by:
when a client is detected in an image of a preset scene, determining the detected client as an old client;
generating a first-class behavior label of the old customer according to the label of the preset scene where the old customer is located;
generating a second type of behavior tag of the old client according to the consumption information of the old client;
and generating a behavior information portrait of the old client based on the first type behavior tag and the second type behavior tag of the old client, and writing the behavior information portrait into the behavior information portrait library.
4. The method of claim 3, wherein the generating the second type of behavior tag of the old customer according to the consumption information of the old customer comprises:
determining the preset scene of the consumption behavior of the old client as the consumption scene of the old client;
and generating a second type of behavior tag of the old client according to the consumption scene of the old client and the consumption object category of the old client in the consumption scene.
5. The method of claim 3, wherein configuring the behavior information portrait library further comprises:
generating a personal information portrait of the old customer according to appearance features of the old customer in the image of the preset scene; and
writing a mapping relationship between the personal information portrait of the old customer and the behavior information portrait into the behavior information portrait library.
6. The method of claim 5, wherein generating the personal information portrait of the old customer according to the appearance features of the old customer in the image of the preset scene comprises:
processing the image of the preset scene by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing category label of the old customer in the image of the preset scene; and
generating the personal information portrait of the old customer according to the gender label, the age label and the wearing category label of the old customer.
7. The method of claim 5, wherein searching for the similar old customer according to the personal information portrait of the current new customer comprises:
calculating a similarity between the personal information portrait of the current new customer and the personal information portrait of each old customer in the behavior information portrait library, and determining the old customer whose personal information portrait has the highest similarity as the similar old customer of the current new customer.
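The similarity search of claim 7 can be sketched as below. The claims do not fix a similarity measure; the label-overlap metric and the field names (`gender`, `age`, `wearing`) used here are assumptions for illustration.

```python
def personal_portrait_similarity(a, b):
    """Fraction of matching labels across the personal-portrait fields
    (gender, age, wearing category) of two customers."""
    fields = ("gender", "age", "wearing")
    return sum(a[f] == b[f] for f in fields) / len(fields)

def find_similar_old_customer(new_portrait, library):
    """Return the id of the old customer whose personal information portrait
    is most similar to the current new customer's portrait."""
    return max(library, key=lambda cid: personal_portrait_similarity(new_portrait, library[cid]))
```

The behavior information portrait mapped to the returned old customer would then be reused as the new customer's behavior information portrait.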
8. The method of claim 3, wherein determining the target information matched with the current passenger flow image according to the behavior information portraits of the customers in the current passenger flow image comprises:
acquiring a category label of each piece of candidate information;
counting the first-type behavior tags and the second-type behavior tags in the behavior information portrait of each customer in the current passenger flow image, and calculating a matching degree between the category label of each piece of candidate information and the counted behavior tags; and
determining the candidate information corresponding to the highest matching degree as the target information.
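The tag counting and matching of claim 8 can be sketched as follows. The claims do not specify the matching-degree formula; using the raw frequency of each candidate's category label among the counted tags is an assumption made for this example.

```python
from collections import Counter

def pick_target_information(customer_portraits, candidates):
    """Count first-type and second-type behavior tags across every customer
    in the current passenger flow image, score each candidate by how often
    its category label appears, and return the best-matching candidate."""
    tag_counts = Counter()
    for portrait in customer_portraits:
        tag_counts.update(portrait["first_type"])
        tag_counts.update(portrait["second_type"])
    # Counter returns 0 for unseen labels, so unmatched candidates score 0.
    return max(candidates, key=lambda c: tag_counts[c["category_label"]])
```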
9. The method of claim 1, wherein determining the personal information portrait of the current new customer comprises:
processing the current passenger flow image by using a pre-trained convolutional neural network to obtain a gender label, an age label and a wearing category label of the current new customer in the current passenger flow image; and
generating the personal information portrait of the current new customer according to the gender label, the age label and the wearing category label of the current new customer.
10. The method of claim 1, wherein after determining the target information, the method further comprises:
delivering the target information to a delivery terminal in the scene where the current passenger flow image is located.
11. The method of claim 10, wherein after the target information is delivered to the delivery terminal in the scene where the current passenger flow image is located, the method further comprises:
acquiring an on-site feedback image from a monitoring device in the scene where the current passenger flow image is located; and
counting, according to the on-site feedback image, the number of people viewing the target information and the time length for which the target information is viewed, so as to determine delivery feedback information of the target information.
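The feedback statistics of claim 11 can be sketched as below, assuming the monitoring device yields a sequence of frames with the ids of people facing the delivery terminal. The per-frame id lists and the frame-rate-based duration estimate are assumptions for this example; the claims only require the viewer count and viewing time length.

```python
def delivery_feedback(frames, fps=1.0):
    """frames: per-frame lists of ids of people viewing the target information.
    Returns (number of distinct viewers, viewing time in seconds per viewer)."""
    frames_seen = {}
    for frame in frames:
        for person_id in frame:
            frames_seen[person_id] = frames_seen.get(person_id, 0) + 1
    # Convert the number of frames each person appeared in to seconds.
    durations = {pid: n / fps for pid, n in frames_seen.items()}
    return len(frames_seen), durations
```

These two figures together form the delivery feedback information for the target information.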
12. An information delivery processing apparatus, comprising:
a passenger flow image acquisition module, configured to acquire a current passenger flow image, the current passenger flow image containing at least one customer;
an old customer identification module, configured to take an old customer in the current passenger flow image as a current old customer and acquire a behavior information portrait of the current old customer;
a new customer identification module, configured to take a new customer in the current passenger flow image as a current new customer and determine a personal information portrait of the current new customer;
a similar old customer searching module, configured to search for a similar old customer according to the personal information portrait of the current new customer and generate a behavior information portrait of the current new customer by using the behavior information portrait of the similar old customer, the similar old customer being an old customer whose personal information portrait is similar to that of the current new customer; and
a target information determining module, configured to determine target information matched with the current passenger flow image according to the behavior information portraits of the customers in the current passenger flow image, the target information being delivered to a delivery terminal in the scene where the current passenger flow image is located.
13. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 11.
14. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 11 via execution of the executable instructions.
CN202010734582.1A 2020-07-27 2020-07-27 Information delivery processing method and device, storage medium and electronic equipment Withdrawn CN111951043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010734582.1A CN111951043A (en) 2020-07-27 2020-07-27 Information delivery processing method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN111951043A true CN111951043A (en) 2020-11-17

Family

ID=73339615


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113672803A (en) * 2021-08-02 2021-11-19 杭州网易云音乐科技有限公司 Recommendation method and device, computing equipment and storage medium
CN113723984A (en) * 2021-03-03 2021-11-30 京东城市(北京)数字科技有限公司 Method and device for acquiring crowd consumption portrait information and storage medium
CN115423510A (en) * 2022-08-30 2022-12-02 成都智元汇信息技术股份有限公司 Media service processing method based on subway associated data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874891A (en) * 2017-03-16 2017-06-20 湖南众益文化传媒股份有限公司 Smart media ad system based on recognition of face
US20170337611A1 (en) * 2016-05-23 2017-11-23 Yahoo! Inc. Method and system for presenting personalized products based on digital signage for electronic commerce
CN110008375A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 Video is recommended to recall method and apparatus
CN110009401A (en) * 2019-03-18 2019-07-12 康美药业股份有限公司 Advertisement placement method, device and storage medium based on user's portrait
CN110046965A (en) * 2019-04-18 2019-07-23 北京百度网讯科技有限公司 Information recommendation method, device, equipment and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201117