CN113034198A - User portrait data establishing method and device - Google Patents
User portrait data establishing method and device
- Publication number
- CN113034198A (application number CN202110397855.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- advertisement
- terminal
- data
- identity information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- G06Q30/0271—Personalized advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0261—Targeted advertisements based on user location
Landscapes
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Information Transfer Between Computers (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The application provides a user portrait data establishing method and device, relating to the field of data acquisition. In the method, offline user data are extracted from a first environment image, online user data associated with user identity information are extracted, and the two are fused to establish user portrait data. Because the user portrait data combines offline user data with online user data, it carries richer content and describes the user's characteristics more comprehensively and accurately.
Description
Technical Field
The application relates to the field of data acquisition, in particular to a user portrait data establishing method and device.
Background
Advertising terminals are visible everywhere in daily life. For example, they can be deployed in subway passages, squares and elevators, making it convenient for advertisers to place advertisements. When a user passes an advertisement terminal, the terminal cyclically plays a preset advertisement, so the user may not be interested in what is played; in other words, the advertisement played to the user is of low effectiveness.
In the prior art, a human body image of a user passing the advertisement terminal is collected and user characteristics are extracted from it; the advertisement played to the user is then selected according to the extracted characteristics. However, these characteristics alone do not describe the user comprehensively, which in turn may make the selected advertisement less effective when played to the user.
Disclosure of Invention
The application aims to provide a user portrait data establishing method and device, so as to solve the problem that, because the user characteristics describe the user incompletely, the advertisement selected for the user is played with low effectiveness.
In a first aspect, an embodiment of the present application provides a user portrait creating method applied to a server. The method includes: receiving a first environment image acquired by a first advertisement terminal while a target advertisement is played, and user identity information associated with a user terminal located within a preset distance from the first advertisement terminal, the first environment image including a user face image; extracting offline user data from the first environment image and extracting online user data associated with the user identity information, the online user data being generated from an operation record associated with the user identity information in a target application program; and fusing the offline user data and the online user data to establish user portrait data.
In this user portrait establishing method, offline user data are extracted from the first environment image, online user data associated with the user identity information are extracted, and the two are fused to establish user portrait data. Because the user portrait data combines offline user data with online user data, it carries richer content and describes the user's characteristics more comprehensively and accurately.
In one possible embodiment, before extracting the offline user data from the first environment image and extracting the online user data associated with the user identity information, the method further comprises: determining whether the user face image matches the user identity information; and if the user face image matches the user identity information, establishing an association relationship between the first environment image and the user identity information. When the first environment image and the user identity information have an association relationship, it can be determined that the extracted offline user data and online user data describe the same user, so the created user portrait data has a higher reference value.
Further, prior to extracting the offline user data from the first environment image and extracting, based on the user identity information, the online user data from the target application program associated with the target advertisement, the method further comprises: receiving a first advertisement terminal identification sent by the first advertisement terminal, the first advertisement terminal identification being associated with the first environment image; and receiving advertisement identification information and a second advertisement terminal identification sent by the user terminal, the advertisement identification information and the second advertisement terminal identification being obtained from the target advertisement. Determining whether the user face image matches the user identity information then includes: if the first advertisement terminal identification is consistent with the second advertisement terminal identification and the advertisement identification information is associated with the target advertisement, determining that the user face image matches the user identity information. A consistent first and second advertisement terminal identification indicates that the user is watching the advertisement played on the advertisement terminal designated by the advertiser, and advertisement identification information associated with the target advertisement further indicates that the user is watching the target advertisement played on that terminal. The matching result between the user face image and the user identity information is therefore more reliable.
Or, further, the first environment image includes a plurality of user face images, and determining whether the user face images match the user identity information includes: identifying, among the plurality of user face images, a target face image showing a behavior of acquiring the advertisement identification information in the target advertisement, the number of target face images being one; acquiring advertisement identification information and user identity information of a target terminal among a plurality of user terminals; and if the acquired advertisement identification information is associated with the target advertisement, determining that the target face image matches the user identity information. Only the target face image is matched against the user identity information, so the face image of the user who is interested in the target advertisement can be singled out from the plurality of user face images, and the user portrait data created for that user has a higher reference value.
In one possible embodiment, after the user portrait data is created, the method further comprises:
receiving a second environment image sent by a second advertisement terminal, the second environment image including the user face image; querying the user portrait data associated with the user face image; screening advertisements to be delivered from an advertisement database according to the user portrait data; and issuing the advertisements to be delivered to the second advertisement terminal for delivery. Because the user portrait data carries richer content, it describes the user's characteristics more comprehensively and accurately, so the advertisement delivered at the second advertisement terminal according to the user portrait data better matches the user's needs.
Alternatively, in another possible implementation, after the user portrait data is created, the method further includes:
receiving a product browsing request and user identity information sent by the user terminal; querying the user portrait data associated with the user identity information; screening information of products to be recommended from a product database according to the user portrait data; and sending the information of the products to be recommended to a display interface of the user terminal for display. Because the user portrait data carries richer content, it describes the user's characteristics more comprehensively and accurately, so the product information recommended on the display interface of the user terminal according to the user portrait data better matches the user's needs.
In one possible embodiment, receiving the first environment image collected by the first advertisement terminal when the target advertisement is played, and the user identity information associated with the user terminal located within the preset distance from the first advertisement terminal, includes:
receiving advertisement identification information and a second advertisement terminal identification from a user terminal; wherein the user terminal is located near the first advertising terminal;
according to the second advertisement terminal identification, notifying the first advertisement terminal to collect a first environment image (or the first advertisement terminal actively uploads it), and receiving the first environment image sent by the first advertisement terminal;
and if the first environment image is detected to comprise the face image of the user, informing the first advertisement terminal to collect user identity information associated with the user terminal, and receiving the user identity information sent by the first advertisement terminal.
In a second aspect, an embodiment of the present application further provides a user portrait creating apparatus, which is applied to a server, and the apparatus includes:
the information receiving unit is used for receiving a first environment image acquired by a first advertisement terminal when a target advertisement is played and user identity information associated with a user terminal which is within a preset distance from the first advertisement terminal, wherein the first environment image comprises a user face image;
the data extraction unit is used for extracting offline user data according to the first environment image and extracting online user data associated with the user identity information, wherein the online user data is generated based on an operation record associated with the user identity information in the target application program;
and the data fusion unit is used for fusing off-line user data and on-line user data to establish user portrait data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps in the method as provided in the first aspect are executed.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an interaction between a server, a first advertisement terminal, and a user terminal according to an embodiment of the present application;
FIG. 2 is a flowchart of a user representation creation method according to an embodiment of the present application;
FIG. 3 is a block diagram of functional modules of a user representation creation apparatus according to an embodiment of the present disclosure;
fig. 4 is a circuit connection block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The embodiment of the application provides a user portrait establishing method, and the method can be applied to a server 10. As shown in FIG. 1, the server 10 may be communicatively coupled to the user terminal 20 and the first advertising terminal 30, respectively, for data interaction. In the embodiment of the present application, the user terminal 20 may be a smart phone or a tablet computer. The first advertising terminal 30 may be disposed in an elevator, a subway passage, a building lobby, etc., without limitation. As shown in fig. 2, the method includes:
s201: the server 10 receives a first environment image captured by the first advertising terminal 30 while the targeted advertisement is played, and user identification information associated with the user terminal 20 in the vicinity of the first advertising terminal 30.
The first advertisement terminal 30 may include a display screen and an image capture module. The display screen is used for playing the target advertisement, and the image capture module is used for capturing a first environment image near the first advertisement terminal 30. The first environment image includes a human body image of a user, and the human body image includes the user face image (which can serve as an identity tag for identifying the user). The human body image may also show the user's accessories (jewelry, dresses, satchels, etc.), and the first environment image may further include items accompanying the user, such as a baby carriage, a bicycle or a pet.
In addition, the user identity information associated with the user terminal 20 may be the MAC address of the user terminal 20, the IMEI code or IDFA code of the user terminal 20, a login account of the target application program in the user terminal 20, or the like, which is not limited here. When the user identity information is the MAC address of the user terminal 20, the server 10 may acquire it as follows: the first advertisement terminal 30 collects the MAC address of the user terminal 20 through a WIFI probe, and the server 10 receives the MAC address uploaded by the first advertisement terminal 30 (or uploaded by the user terminal 20 itself). When the user identity information is a login account of the target application program in the user terminal 20, the server 10 may acquire it as follows: the user terminal 20 scans the identifiable tag in the target advertisement played by the first advertisement terminal 30 and enters a login interface of the target application program; the user terminal 20 obtains the login account and password entered by the user on the login interface and enters the browsing interface of the target application program; meanwhile, the server 10 receives the login account and password transmitted by the user terminal 20.
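As an illustrative, minimal server-side sketch of these two acquisition paths (the data shapes and function names below are assumptions for illustration, not part of the disclosure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserIdentityInfo:
    """Identity handle for a user terminal near the first advertisement terminal."""
    mac_address: Optional[str] = None    # collected by the WIFI probe of the first advertisement terminal
    login_account: Optional[str] = None  # entered after scanning the identifiable tag in the target advertisement

def on_mac_reported(terminal_id: str, mac_address: str) -> UserIdentityInfo:
    # Path 1: the first advertisement terminal (or the user terminal) uploads a MAC address.
    return UserIdentityInfo(mac_address=mac_address)

def on_login_submitted(login_account: str, login_password: str) -> UserIdentityInfo:
    # Path 2: the user scans the identifiable tag, logs in to the target application
    # program, and the server receives the login account; password handling is omitted.
    return UserIdentityInfo(login_account=login_account)
```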
S202: the server 10 extracts offline user data from the first environment image and extracts online user data associated with the user identity information.
For the offline user data, the offline user data may be feature data extracted from the first environment image by an image recognition algorithm. For example, the offline user data may include facial features of the user (age, sex, skin color, and the like), accessory data (category, brand, color, and the like of an accessory), carried-object data (category, brand, color, and the like of a carried object), and behavior data (motion data, mental state data, and the like of the user). The online user data is generated based on the operation record associated with the user identity information in the target application program, and the target application program may be associated with the target advertisement (the target advertisement and the target application program are products of the same advertiser, or the target application program can be used to play the target advertisement). The operation record may include, but is not limited to, a browsing record, an ordering record, a payment record, and the like.
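A hedged sketch of step S202 follows, with stand-in recognizers and a generic operation-record store; every name below is illustrative rather than prescribed by the application:

```python
from typing import Any, Dict, List

def detect_face_attributes(image: Any) -> Dict[str, Any]:
    # Stand-in for whatever face-attribute recognizer is used; the application only
    # requires that features such as age, sex and skin color come back.
    return {"age": None, "sex": None, "skin_color": None}

def detect_accessories(image: Any) -> List[Dict[str, Any]]:
    # Stand-in for accessory detection (jewelry, dress, satchel: category/brand/color).
    return []

def detect_carried_objects(image: Any) -> List[Dict[str, Any]]:
    # Stand-in for detection of carried objects (baby carriage, bicycle, pet).
    return []

def extract_offline_user_data(environment_image: Any) -> Dict[str, Any]:
    # Offline user data: feature data recovered from the first environment image.
    return {
        "facial": detect_face_attributes(environment_image),
        "accessories": detect_accessories(environment_image),
        "carriers": detect_carried_objects(environment_image),
    }

def extract_online_user_data(user_id: str, operation_records: List[Dict[str, Any]]) -> Dict[str, Any]:
    # Online user data: operation records of the target application program (browsing,
    # ordering, payment) associated with the given identity information.
    mine = [r for r in operation_records if r.get("user_id") == user_id]
    return {
        "browsing": [r for r in mine if r.get("type") == "browse"],
        "orders":   [r for r in mine if r.get("type") == "order"],
        "payments": [r for r in mine if r.get("type") == "pay"],
    }
```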
S203: and fusing off-line user data and on-line user data to establish user portrait data.
In this user portrait establishing method, offline user data are extracted from the first environment image, online user data associated with the user identity information are extracted, and the two are fused to establish user portrait data. Because the user portrait data combines offline user data with online user data, it carries richer content and describes the user's characteristics more comprehensively and accurately.
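Continuing the sketch above, one naive way to fuse the two data sources into a single portrait record is shown below; the application does not prescribe a particular fusion strategy, so this merge is only an assumption:

```python
from typing import Any, Dict

def build_user_portrait(user_id: str,
                        offline_data: Dict[str, Any],
                        online_data: Dict[str, Any]) -> Dict[str, Any]:
    # Merge the two views of the same user: offline fields describe appearance and
    # on-site behavior, online fields describe in-app behavior.
    return {"user_id": user_id, "offline": offline_data, "online": online_data}

# Usage, assuming the extraction helpers sketched under S202:
# portrait = build_user_portrait(
#     user_id="account-123",
#     offline_data=extract_offline_user_data(first_environment_image),
#     online_data=extract_online_user_data("account-123", operation_records),
# )
```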
It will be appreciated that the created user portrait data only has reference value when the offline user data and the online user data describe the same user. Therefore, to ensure this, the acquired user face image and the user identity information can be matched in advance, and if they match, an association relationship is established between the first environment image and the user identity information. When the first environment image and the user identity information have an association relationship, the offline user data extracted from the first environment image and the online user data associated with the user identity information describe the same user, so the created user portrait data has reference value.
Specifically, the user face image may be matched with the user identity information in, but not limited to, the following two modes:
the first matching mode is as follows:
first, the server 10 receives the first advertising terminal 30 identification transmitted from the first advertising terminal 30. Wherein the first advertising terminal 30 identifier is associated with a first environment image comprising an image of a user's face. Specifically, the first advertising terminal 30 identification may be tagged to the first environmental image to indicate that the first environmental image was captured by the first advertising terminal 30.
Then, the server 10 receives the advertisement identification information and the second advertisement terminal identification transmitted from the user terminal 20.
The advertisement identification information and the second advertisement terminal identification are obtained by the user terminal 20 from the first advertisement terminal 30, in ways that include but are not limited to the following two. In the first, the user terminal 20 scans the identifiable tag (e.g., a two-dimensional code) in the target advertisement played by the first advertisement terminal 30 and reads the advertisement identification information and the second advertisement terminal identification from the tag; understandably, the advertisement identification information may be the identifiable tag itself. In the second, when a "shake-to-interact" mark is shown in the target advertisement, the user terminal 20, after being shaken, detects audio information of the target advertisement that carries the advertisement identification information and the second advertisement terminal identification.
Finally, if the first advertisement terminal 30 identification is consistent with the second advertisement terminal identification and the advertisement identification information is associated with the target advertisement, the server 10 determines that the user face image matches the user identity information. A consistent first and second advertisement terminal identification indicates that the user is watching an advertisement played by the first advertisement terminal 30; if, in addition, the advertisement identification information is associated with the target advertisement, the user is watching the target advertisement played by the first advertisement terminal 30. The matching result between the user face image and the user identity information is therefore more reliable.
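For illustration only, the consistency check of this first matching mode reduces to two comparisons; the parameter names below are assumptions:

```python
from typing import Set

def face_matches_identity(first_terminal_id: str,
                          second_terminal_id: str,
                          ad_identification: str,
                          target_ad_identifications: Set[str]) -> bool:
    # The user face image is taken to match the user identity information only when
    # (a) the terminal that captured the first environment image is the terminal the
    # user terminal interacted with, and (b) the identification reported by the user
    # terminal really belongs to the target advertisement.
    same_terminal = first_terminal_id == second_terminal_id
    ad_is_target = ad_identification in target_ad_identifications
    return same_terminal and ad_is_target
```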
The second matching mode is as follows:
when the first environment image comprises a plurality of user face images, firstly, a target face image with a behavior of acquiring advertisement identification information in a target advertisement is identified from the plurality of user face images, and the number of the target face images is one. The act of obtaining the advertisement identification information in the target advertisement may be, but is not limited to: identifying scanning behavior of a user on identifiable tags of the targeted advertisement in the first environmental image; or when a "shake-and-shake" mark is displayed in the played target advertisement, the shake behavior of the user on the user terminal 20 in the first environment image is recognized.
Then, advertisement identification information and user identity information of a target terminal among the plurality of user terminals 20 are acquired, in the same manner as in the first matching mode; the description is not repeated here.
And if the acquired advertisement identification information is associated with the target advertisement, it is determined that the target face image matches the user identity information. Only the target face image is matched against the user identity information, so the face image of the user who is interested in the target advertisement can be singled out from the plurality of user face images, and the user portrait data established for that user has a higher reference value.
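A sketch of this second matching mode, again with assumed field and helper names: select the single face image that shows the acquiring behavior, then confirm the reported advertisement identification.

```python
from typing import Any, Dict, List, Optional, Set

def find_target_face(face_images: List[Dict[str, Any]]) -> Optional[Dict[str, Any]]:
    # Each entry is assumed to carry a flag set by a behavior recognizer when the user
    # is seen scanning the identifiable tag or shaking the user terminal.
    candidates = [f for f in face_images if f.get("acquiring_ad_identification")]
    return candidates[0] if len(candidates) == 1 else None  # exactly one target face

def match_target_face(face_images: List[Dict[str, Any]],
                      reported_ad_identification: str,
                      target_ad_identifications: Set[str],
                      user_identity_info: str) -> Optional[Dict[str, Any]]:
    target_face = find_target_face(face_images)
    if target_face is not None and reported_ad_identification in target_ad_identifications:
        return {"face": target_face, "identity": user_identity_info}
    return None
```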
In addition, in the following implementation, the acquired offline user data and online user data are guaranteed to describe the same user without matching the target face image against the user identity information. Specifically, S201 includes:
first, the server 10 receives advertisement identification information and a second advertisement terminal identification from the user terminal 20.
Here, the distance between the user terminal 20 and the first advertisement terminal 30 is within the preset distance, so the first advertisement terminal 30 can capture the face image of the user holding the user terminal 20. The advertisement identification information and the second advertisement terminal identification are likewise obtained by the user terminal 20 from the target advertisement, in the same manner as in the first matching mode; the description is not repeated here.
Then, according to the second advertisement terminal identification, the server 10 notifies the first advertisement terminal 30 to collect the first environment image (or the first advertisement terminal 30 actively uploads it), and receives the first environment image sent by the first advertisement terminal 30.
Finally, if it is detected that the first environment image includes a user face image, the first advertisement terminal 30 is notified to collect the user identity information associated with the user terminal 20, and the user identity information sent by the first advertisement terminal 30 is received; the user identity information is obtained in the same manner as in the first matching mode and is not described again. It can be understood that when the user face image is recognized and the user identity information associated with the user terminal 20 held by that user is collected, the face image and the user identity information belong to the same user, so no matching is required.
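A minimal sequence sketch of this alternative flow, with terminal notification abstracted as a callback (all names are illustrative):

```python
from typing import Any, Callable, Optional, Tuple

def handle_ad_interaction(second_terminal_id: str,
                          notify_terminal: Callable[[str, str], Any],
                          detect_face: Callable[[Any], bool]) -> Tuple[Any, Optional[Any]]:
    # 1. The user terminal has already reported the advertisement identification
    #    information and the second advertisement terminal identification.
    # 2. The server asks the identified advertisement terminal for a first environment image.
    environment_image = notify_terminal(second_terminal_id, "capture_environment_image")
    # 3. Only if a user face image is present does the server ask for identity information,
    #    so the face image and the identity are known to belong to the same user.
    if detect_face(environment_image):
        identity_info = notify_terminal(second_terminal_id, "collect_user_identity")
        return environment_image, identity_info
    return environment_image, None
```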
Through S201 to S203 above, user portrait data of higher reference value are created. After the user portrait data are successfully established, association relationships may be established between the user portrait data and, respectively, the user face image and the user identity information, so that the user portrait data can be retrieved from either the user face image or the user identity information. Applications of the user portrait data may include, but are not limited to, the following two:
first, after S203, the method further includes:
first, the server 10 receives a second environment image sent by a second advertisement terminal, where the second environment image includes a face image of a user. When the user is located near the second advertising terminal, the user is captured by the second advertising terminal, and the server 10 receives the second environment image from the second advertising terminal.
The server 10 then queries the user representation data associated with the user's facial image. The user face image and the user portrait data establish an association relationship in advance. Thus, the server 10 can search for user portrait data from the face image of the user.
Finally, advertisements to be delivered are screened from the advertisement database according to the user portrait data and issued to the second advertisement terminal for delivery. Because the user portrait data carries richer content, it describes the user's characteristics more comprehensively and accurately, so the advertisement delivered at the second advertisement terminal according to the user portrait data better matches the user's needs.
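A hedged sketch of this delivery flow: the user portrait data looked up by the face image drives a simple tag-overlap screening over the advertisement database. The scoring rule is an assumption, not part of the claims:

```python
from typing import Any, Dict, List

def screen_ads_to_deliver(portrait: Dict[str, Any],
                          ad_database: List[Dict[str, Any]],
                          top_n: int = 1) -> List[Dict[str, Any]]:
    # Rank candidate advertisements by how many of their tags also appear in the user
    # portrait data; any other matching strategy could be substituted here.
    portrait_tags = set(portrait.get("tags", []))
    ranked = sorted(ad_database,
                    key=lambda ad: len(portrait_tags & set(ad.get("tags", []))),
                    reverse=True)
    return ranked[:top_n]

# The selected advertisement(s) are then issued to the second advertisement terminal
# for delivery; the product-recommendation flow described next works analogously.
```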
Secondly, after S203, the method further includes:
first, the server 10 receives a product browsing request and user identification information transmitted from the user terminal 20. When a user opens a product browsing interface of a target application associated with a target advertisement at the user terminal 20, the server 10 receives a product browsing request and user identity information transmitted from the user terminal 20.
The server 10 then queries the user portrait data associated with the user identity information. Because the user identity information has been associated with the user portrait data in advance, the server 10 can retrieve the user portrait data from the user identity information.
Finally, screening out information of a product to be recommended from a product database according to the user portrait data; and sending the information of the product to be recommended to a display interface of the user terminal 20 for display. Because the user portrait data contains richer data content, the description of the user characteristics is more comprehensive and accurate. Furthermore, the information of the product to be recommended displayed on the display interface of the user terminal 20 according to the user portrait data is more suitable for the user's requirement.
Referring to fig. 3, an embodiment of the present application further provides a user portrait creating apparatus applied to a server 10. As shown in FIG. 1, the server 10 may be communicatively coupled to the user terminal 20 and the first advertising terminal 30, respectively, for data interaction. It should be noted that the basic principle and technical effects of the user portrait creating apparatus provided in this embodiment are the same as those of the above embodiment; for brevity, reference may be made to the corresponding contents above for anything not mentioned here. The apparatus comprises an information receiving unit 301, a data extraction unit 302 and a data fusion unit 303. Wherein,
an information receiving unit 301, configured to receive a first environment image captured by the first advertisement terminal 30 when the target advertisement is played, and user identity information associated with the user terminal 20 in the vicinity of the first advertisement terminal 30.
Alternatively, the information receiving unit 301 may directly receive offline user data extracted by the first advertisement terminal 30 from the first environment image.
And the data extraction unit 302 is used for extracting offline user data from the first environment image and extracting online user data associated with the user identity information, wherein the online user data is generated based on the operation record associated with the user identity information in the target application program.
And the data fusion unit 303 is used for fusing the offline user data and the online user data to create user portrait data.
In a possible embodiment, the apparatus may further include:
and the information matching unit is used for determining whether the face image of the user is matched with the identity information of the user.
And the incidence relation establishing unit is used for establishing the incidence relation between the first environment image and the user identity information if the user face image is matched with the user identity information.
In a possible implementation manner, the information receiving unit 301 may be further configured to receive an identifier of the first advertisement terminal 30 sent by the first advertisement terminal 30.
Wherein the first advertising terminal 30 identification is associated with the first environmental image.
The information receiving unit 301 may be further configured to receive advertisement identification information and a second advertisement terminal identification sent by the user terminal 20, where the advertisement identification information and the second advertisement terminal identification are obtained from the target advertisement.
And the information matching unit is specifically used for determining that the facial image of the user is matched with the identity information of the user if the first advertisement terminal 30 identifier is consistent with the second advertisement terminal identifier and the advertisement identifier information is associated with the target advertisement.
In another possible embodiment, the apparatus may further include:
and the image recognition unit is used for recognizing a target face image which has the action of acquiring the advertisement identification information in the target advertisement in the face images of the users.
An information obtaining unit for obtaining advertisement identification information and user identity information of a target terminal among the plurality of user terminals 20.
And the information matching unit is specifically used for determining that the target face image is matched with the user identity information if the acquired advertisement identification information is associated with the target advertisement.
In yet another possible embodiment, the information receiving unit 301 is specifically configured to receive advertisement identification information and a second advertisement terminal identification from the user terminal 20. Wherein the user terminal 20 is located in the vicinity of the first advertising terminal 30.
And an information sending unit for informing the first advertising terminal 30 to collect the first environment image according to the second advertising terminal identification.
An information receiving unit 301, further configured to receive a first environment image sent by the first advertisement terminal 30;
and the information sending unit is further configured to notify the first advertisement terminal 30 to collect user identity information associated with the user terminal 20 if it is detected that the first environment image includes a user face image.
The information receiving unit 301 is further configured to receive user identity information sent by the first advertisement terminal 30.
Optionally, the information receiving unit 301 may be further configured to receive a second environment image sent by a second advertisement terminal, where the second environment image includes a face image of the user. The apparatus may further include:
and the data query unit is used for querying the user portrait data associated with the user face image.
And the data screening unit is used for screening advertisements to be delivered from the advertisement database according to the user portrait data.
And the advertisement putting unit is used for issuing the advertisement to be put to the second advertisement terminal for putting.
Or, optionally, the information receiving unit 301 may be further configured to receive a product browsing request and user identity information sent by the user terminal 20. The apparatus may further include:
and the data query unit is used for querying the user portrait data associated with the user identity information.
And the information screening unit is used for screening the information of the product to be recommended from the product database according to the user portrait data.
And the information sending unit is used for sending the information of the product to be recommended to a display interface of the user terminal 20 for display.
The shortcomings of the above prior art solutions were identified by the inventor through practical and careful study; therefore, both the discovery of the above problems and the solutions proposed for them in the following embodiments should be regarded as the inventor's contribution to the present application.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device for executing a user portrait creation method according to an embodiment of the present disclosure. The electronic device may be the server in the above embodiments. The electronic device may include: at least one processor 110, such as a CPU, at least one communication interface 120, at least one memory 130, and at least one communication bus 140. Wherein the communication bus 140 is used for realizing direct connection communication of these components. The communication interface 120 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 130 may be a high-speed RAM memory or a non-volatile memory (e.g., at least one disk memory). Memory 130 may optionally be at least one memory device located remotely from the aforementioned processor. The memory 130 stores computer readable instructions, and when the computer readable instructions are executed by the processor 110, the electronic device executes the method process shown in fig. 2.
It will be appreciated that the configuration shown in fig. 4 is merely illustrative, and the electronic device may include more or fewer components than shown in fig. 4, or have a different configuration. The components shown in fig. 4 may be implemented in hardware, software, or a combination thereof.
The apparatus may be a module, a program segment, or code on an electronic device. It should be understood that the apparatus corresponds to the method embodiment of fig. 2 and can perform the steps related to that embodiment; for its specific functions, reference may be made to the description above, and a detailed description is omitted here to avoid redundancy.
It should be noted that, for the convenience and conciseness of description, the specific working processes of the system and the device described above may refer to the corresponding processes in the foregoing method embodiments, and the description is not repeated here.
Embodiments of the present application provide a readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the method processes performed by the electronic device in the method embodiment shown in fig. 2.
This embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium. The computer program includes program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example: receiving a first environment image collected by a first advertisement terminal while a target advertisement is played, and user identity information associated with a user terminal within a preset distance from the first advertisement terminal, the first environment image including a user face image; extracting offline user data from the first environment image and extracting online user data associated with the user identity information, the online user data being generated based on an operation record associated with the user identity information in a target application program; and fusing the offline user data and the online user data to establish user portrait data.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A user portrait creation method, applied to a server, the method comprising:
receiving a first environment image acquired by a first advertisement terminal when a target advertisement is played and user identity information related to a user terminal near the first advertisement terminal, wherein the first environment image comprises a user face image;
extracting offline user data according to the first environment image and extracting online user data associated with the user identity information, wherein the online user data is generated based on an operation record associated with the user identity information in a target application program;
and fusing the offline user data and the online user data to establish user portrait data.
2. The method of claim 1, wherein prior to said extracting offline user data from the first environmental image and extracting online user data associated with the user identity information, the method further comprises:
determining whether the face image of the user is matched with the identity information of the user;
and if the user face image is matched with the user identity information, establishing an association relationship between the first environment image and the user identity information.
3. The method of claim 2, wherein prior to said extracting offline user data from the first environmental image and extracting online user data from a targeted application associated with the targeted advertisement based on the user identity information, the method further comprises:
receiving a first advertisement terminal identification sent by the first advertisement terminal, wherein the first advertisement terminal identification is associated with the first environment image;
receiving advertisement identification information and a second advertisement terminal identification sent by the user terminal, wherein the advertisement identification information and the second advertisement terminal identification are obtained from the target advertisement;
the determining whether the facial image of the user and the identity information of the user are matched comprises: and if the first advertisement terminal identification is consistent with the second advertisement terminal identification and the advertisement identification information is associated with the target advertisement, determining that the user face image is matched with the user identity information.
4. The method of claim 2, wherein the first environment image comprises a plurality of user face images, and wherein the determining whether the user face images match the user identity information comprises:
identifying a target face image with a behavior of acquiring advertisement identification information in the target advertisement in the plurality of user face images, wherein the number of the target face images is one;
acquiring advertisement identification information and user identity information of a target terminal in a plurality of user terminals;
and if the acquired advertisement identification information is associated with the target advertisement, determining that the target face image is matched with the user identity information.
5. The method of claim 1, wherein after said creating user representation data, said method further comprises:
receiving a second environment image sent by a second advertisement terminal, wherein the second environment image comprises a face image of the user;
querying user portrait data associated with the user face image;
screening advertisements to be launched from an advertisement database according to the user portrait data;
and issuing the advertisement to be delivered to the second advertisement terminal for delivery.
6. The method of claim 1, wherein after said creating user representation data, said method further comprises:
receiving a product browsing request and user identity information sent by the user terminal;
querying user representation data associated with the user identity information;
screening out information of a product to be recommended from a product database according to the user portrait data;
and sending the information of the product to be recommended to a display interface of the user terminal for display.
7. The method of claim 1, wherein receiving the first environment image collected by the first advertisement terminal while playing the target advertisement and the user identity information associated with the user terminal in the vicinity of the first advertisement terminal comprises:
receiving advertisement identification information and a second advertisement terminal identification from a user terminal; wherein the user terminal is located in proximity to the first advertising terminal;
informing a first advertisement terminal to collect a first environment image or actively uploading the first environment image according to the second advertisement terminal identification, and receiving the first environment image sent by the first advertisement terminal;
and if the first environment image is detected to comprise a user face image, informing the first advertisement terminal to collect user identity information associated with the user terminal, and receiving the user identity information sent by the first advertisement terminal.
8. A user profile creation apparatus, for use with a server, the apparatus comprising:
the system comprises an information receiving unit, a first advertisement terminal and a second advertisement terminal, wherein the information receiving unit is used for receiving a first environment image acquired by the first advertisement terminal when a target advertisement is played and user identity information related to a user terminal near the first advertisement terminal, and the first environment image comprises a user face image;
the data extraction unit is used for extracting offline user data according to the first environment image and extracting online user data associated with the user identity information, wherein the online user data is generated based on an operation record associated with the user identity information in a target application program;
and the data fusion unit is used for fusing the offline user data and the online user data to establish user portrait data.
9. An electronic device comprising a processor and a memory, the memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-7.
10. A storage medium on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110397855.2A CN113034198B (en) | 2021-04-13 | 2021-04-13 | User portrait data establishing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110397855.2A CN113034198B (en) | 2021-04-13 | 2021-04-13 | User portrait data establishing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113034198A true CN113034198A (en) | 2021-06-25 |
CN113034198B CN113034198B (en) | 2024-09-27 |
Family
ID=76456592
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110397855.2A Active CN113034198B (en) | 2021-04-13 | 2021-04-13 | User portrait data establishing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113034198B (en) |
- 2021-04-13: CN application CN202110397855.2A (patent CN113034198B), status: Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014205262A2 (en) * | 2013-06-20 | 2014-12-24 | Aol Advertising Inc. | Systems and methods for cross-browser advertising id synchronization |
US20160148259A1 (en) * | 2014-11-25 | 2016-05-26 | Mezzomedia Co., Ltd. | Method of managing cookie information for target advertisement and application for managing cookie information |
CN108280368A (en) * | 2018-01-22 | 2018-07-13 | 北京腾云天下科技有限公司 | On a kind of line under data and line data correlating method and computing device |
CN110533440A (en) * | 2018-05-24 | 2019-12-03 | 北京智慧图科技有限责任公司 | A kind of internet advertisement information dispensing application method based on video camera |
CN110009401A (en) * | 2019-03-18 | 2019-07-12 | 康美药业股份有限公司 | Advertisement placement method, device and storage medium based on user's portrait |
CN110795584A (en) * | 2019-09-19 | 2020-02-14 | 深圳云天励飞技术有限公司 | User identifier generation method and device and terminal equipment |
CN111784396A (en) * | 2020-06-30 | 2020-10-16 | 广东奥园奥买家电子商务有限公司 | Double-line shopping tracking system and method based on user image |
CN112669095A (en) * | 2021-03-15 | 2021-04-16 | 北京焦点新干线信息技术有限公司 | Client portrait construction method and device, electronic equipment and computer storage medium |
Non-Patent Citations (1)
Title |
---|
- 崔萃 (CUI Cui): "基于大数据的电子商务用户画像构建研究" [Research on E-commerce User Portrait Construction Based on Big Data], 信息与电脑(理论版) [Information & Computer (Theory Edition)], no. 03 *
Also Published As
Publication number | Publication date |
---|---|
CN113034198B (en) | 2024-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10380170B2 (en) | Integrated image searching system and service method thereof | |
CN108234591B (en) | Content data recommendation method and device based on identity authentication device and storage medium | |
US20160239892A1 (en) | User interest-based product information recommendation system | |
US20120259741A1 (en) | Information provision device, information provision method, information provision program, and computer-readable storage medium for storing said program | |
CN105450778B (en) | Information transmission system | |
CN104021398A (en) | Wearable intelligent device and method for assisting identity recognition | |
US20130238467A1 (en) | Object display server, object display method, object display program, and computer-readable recording medium for storing the program | |
CN102947850A (en) | Content output device, content output method, content output program, and recording medium with content output program thereupon | |
CN108960892B (en) | Information processing method and device, electronic device and storage medium | |
CN103942705A (en) | Advertisement classified match pushing method and system based on human face recognition | |
CN103488528A (en) | QR code processing method and device based on mobile terminals | |
CN211180928U (en) | Information processing apparatus, express delivery management system, and information processing system | |
CN105631461A (en) | Image recognition system and method | |
CN111753608A (en) | Information processing method and device, electronic device and storage medium | |
CN106203050A (en) | The exchange method of intelligent robot and device | |
CN103237165A (en) | Method and electronic equipment for checking extended name card information in real time | |
JP5851560B2 (en) | Information processing system | |
JP2014092934A (en) | Information communication device and information communication method, information communication system, and computer program | |
CN112418994B (en) | Commodity shopping guide method and device, electronic equipment and storage medium | |
CN113901244A (en) | Label construction method and device for multimedia resource, electronic equipment and storage medium | |
CN110348925A (en) | Shops's system, article matching method, device and electronic equipment | |
WO2016033033A1 (en) | Method and system for presenting information | |
KR102278693B1 (en) | Signage integrated management system providing Online to Offline user interaction based on Artificial Intelligence and method thereof | |
CN113034198A (en) | User portrait data establishing method and device | |
CN107209907A (en) | Utilize the order system of personal information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||