CN111626817A - User portrait analysis method based on electronic commerce big data and artificial intelligence platform

User portrait analysis method based on electronic commerce big data and artificial intelligence platform

Info

Publication number: CN111626817A (granted as CN111626817B)
Application number: CN202010389256.1A
Authority: CN (China)
Prior art keywords: browsing, information, vector, portrait, feature
Legal status: Granted, currently active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111626817B (en)
Inventor: 石伟
Original Assignee: 石伟 (application filed by 石伟)
Current Assignee: CHANGSHA JUMEI NETWORK TECHNOLOGY Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Related applications: CN202011341364.8A (published as CN112465590A), CN202011341365.2A (published as CN112465591A)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the disclosure provides a user portrait analysis method based on e-commerce big data and an artificial intelligence platform. Browsing behavior information generated by the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of that account is obtained; deep analysis then yields a current positive browsing relation feature and a current negative browsing relation feature, which are compared to obtain browsing tendency information; browsing behavior tag information of the anchor audience account is determined based on the browsing tendency information; and the browsing behavior tag information is analyzed by a preset artificial intelligence model to generate a user portrait of the anchor audience account. In this way, the viewer's further browsing behavior on the live broadcast commodity recommendation information can be mined in depth, the viewer's browsing tendency can be mined effectively, and the depth dimension of the user portrait analysis process for the viewer is expanded, which facilitates subsequent information push and viewer experience optimization.

Description

User portrait analysis method based on electronic commerce big data and artificial intelligence platform
Technical Field
The disclosure relates to the technical field of electronic commerce and big data, in particular to a user portrait analysis method and an artificial intelligence platform based on electronic commerce big data.
Background
With the rapid development of the internet and mobile communication technology, internet live video broadcasting makes it possible to publish content such as e-commerce commodities online in real time, and the interactive effect of e-commerce commodities is enhanced by the internet's intuitiveness, speed, expressive formats, rich content, strong interactivity, absence of regional limits, segmentable audiences, and other characteristics.
At present, during live broadcasts of online interactive live broadcast commodities, the interactive behavior characteristics of viewers can be effectively mined from the viewers' graphic interactions during the broadcast, so that live broadcast commodity recommendation information of interest can be recommended to them. However, deep mining of the viewers' further browsing behavior on that recommendation information is still lacking, and the viewers' browsing tendencies are difficult to mine effectively; as a result, the depth dimension of the user portrait analysis process for viewers is too narrow, which in turn affects subsequent information push and viewer experience optimization.
Disclosure of Invention
To overcome at least the above disadvantages in the prior art, an object of the present disclosure is to provide a user portrait parsing method and an artificial intelligence platform based on e-commerce big data, which enable deep mining of the viewer's further browsing behavior on live broadcast commodity recommendation information, thereby effectively mining the viewer's browsing tendency and expanding the depth dimension of the user portrait analysis process for the viewer, so as to facilitate subsequent information push and viewer experience optimization.
In a first aspect, the present disclosure provides a user portrait parsing method based on e-commerce big data, applied to an artificial intelligence platform communicatively connected with a plurality of video live broadcast terminals, the method including:
acquiring browsing behavior information of the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of the live broadcast audience account, and determining target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information;
acquiring at least one piece of browsing access object information corresponding to an associated portrait feature target from a comparative browsing node of the browsing node set, wherein the comparative browsing node is any browsing node in the browsing node set other than the initial browsing node, and the associated portrait feature target is a portrait target of the anchor audience account corresponding to the portrait feature target in the associated browsing nodes of the comparative browsing node;
performing feature extraction on the current browsing access object information and the target browsing object information to obtain a current browsing feature vector and a target browsing feature vector that respectively correspond to the current browsing access object information and the target browsing object information; the current browsing access object information is any piece of the at least one piece of browsing access object information;
extracting browsing relation features of the current browsing feature vector and the target browsing feature vector to obtain a current positive browsing relation feature and a current negative browsing relation feature; the current positive browsing relation feature and the current negative browsing relation feature are both browsing relation features;
and comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
In a possible implementation manner of the first aspect, the step of determining, from an initial browsing node of a browsing node set corresponding to the browsing behavior information, target browsing object information including a portrait feature target includes:
acquiring a browsing object area corresponding to the portrait feature target from the comparative browsing node of the browsing node set corresponding to the browsing behavior information to obtain the target browsing object information;
the step of obtaining at least one browsing access object information corresponding to the associated portrait feature target from the comparative browsing nodes of the browsing node set includes:
in the comparative browsing node, determining target browsing behavior tag information corresponding to the associated portrait feature target;
and acquiring at least one search interval according to a preset search range by taking the target browsing behavior tag information as a search target, and searching browsing access object information of the at least one search interval to obtain the at least one browsing access object information corresponding to the associated portrait feature target, wherein the preset search range is a preset search interval for determining search areas of different types of browsing behavior tags.
In a possible implementation manner of the first aspect, the step of performing feature extraction on both the current browsing access object information and the target browsing object information to obtain the corresponding current browsing feature vector and target browsing feature vector includes:
performing naive Bayes feature extraction on the current browsing access object information and the target browsing object information to obtain a corresponding current naive Bayes feature and target naive Bayes feature, respectively;
and performing feature vector extraction on both the current naive Bayes feature and the target naive Bayes feature to obtain the corresponding current browsing feature vector and target browsing feature vector.
In a possible implementation manner of the first aspect, the extracting browsing relation features from the current browsing feature vector and the target browsing feature vector to obtain a current positive browsing relation feature and a current negative browsing relation feature includes:
extracting browsing relation features of each browsing feature interval related to the current browsing feature vector and the target browsing feature vector, and determining a browsing relation description vector corresponding to the live audience account;
determining an associated browsing vector segment sequence according to the browsing relation description vector, extracting a browsing frequent vector segment of the browsing relation description vector, and extracting a concentrated associated browsing vector segment of the browsing frequent vector segment associated with the associated browsing vector segment sequence by taking a set threshold value as a browsing vector segment interval, wherein the browsing frequent vector segment is used for representing a vector segment formed by the fact that the number of associable vector points in a browsing relation description axis in the browsing relation description vector is greater than a set number;
for at least two concentrated associated browsing vector points associated in the concentrated associated browsing vector segment, generating a plurality of vector tendency units according to the vector tendency direction of the axis interval corresponding to each concentrated associated browsing vector point, calculating the overlapping interval between all axis intervals of the next concentrated associated browsing vector point and all axis intervals of the previous concentrated associated browsing vector point, and obtaining a corresponding vector tendency direction table from the obtained overlapping intervals;
according to the vector tendency direction table, acquiring a vector tendency unit of which the vector tendency directions are matched and the overlapping section between the axis sections of the two vector tendency units is smaller than the maximum continuous overlapping section of the browsing relation description vector in the overlapping section to form a concentrated associated browsing vector point space;
matching the vector point spaces in each concentrated associated browsing vector point space to obtain a matching interval of each matched concentrated associated browsing vector point space, and determining the browsing relation description axis direction of the matching interval of each concentrated associated browsing vector point space;
and determining a positive direction and a negative direction in the direction of the browsing relation description axis according to the matching interval of each concentrated associated browsing vector point space, and taking the characteristic information of the matching interval of the concentrated associated browsing vector point space corresponding to the determined positive direction and negative direction as the current positive browsing relation characteristic and the current negative browsing relation characteristic.
In a possible implementation manner of the first aspect, the step of comparing the current positive browsing relation feature with the current negative browsing relation feature to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, and analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate a user portrait of the anchor audience account includes:
comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information;
selecting browsing tendency information with the highest tendency degree from the browsing tendency information to obtain target browsing tendency information;
determining browsing access object information matched with the target browsing tendency information from the at least one piece of browsing access object information to obtain target browsing access object information;
calculating browsing behavior parameters of the target browsing access object information and the associated portrait feature target to obtain current browsing behavior parameters and obtain historical browsing behavior parameters; the historical browsing behavior parameters are browsing behavior parameters corresponding to historical browsing nodes before the comparison browsing node;
determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information according to the current browsing behavior parameters and the historical browsing behavior parameters;
and analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate a user portrait of the anchor audience account.
In a possible implementation manner of the first aspect, the step of comparing the current positive browsing relationship feature with the current negative browsing relationship feature to obtain browsing tendency information includes:
comparing the browsing relation feature difference between the current positive browsing relation feature and the current negative browsing relation feature;
and obtaining the browsing tendency information according to the browsing tendency relationship corresponding to the browsing relation feature difference.
In a possible implementation manner of the first aspect, the step of determining browsing behavior tag information of the anchor viewer account from the at least one piece of browsing access object information according to the current browsing behavior parameter and the historical browsing behavior parameter includes:
acquiring a coincident browsing behavior parameter between the current browsing behavior parameter and the historical browsing behavior parameter;
and determining browsing behavior tag information of the associated anchor audience account from the at least one piece of browsing access object information according to the coincident browsing behavior parameter.
In a possible implementation manner of the first aspect, the step of analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate a user portrait of the anchor audience account includes:
analyzing and identifying each browsing behavior tag item in the browsing behavior tag information of the anchor audience account according to the preset artificial intelligence model, determining candidate portrait characteristics of each browsing behavior tag item, determining main portrait distribution information of the browsing behavior tag item according to the candidate portrait characteristics, searching out slave portrait distribution information corresponding to the browsing behavior tag item based on the main portrait distribution information, and merging each browsing behavior tag item into at least one browsing behavior tag item display bar according to the slave portrait distribution information;
acquiring an image analysis parameter which corresponds to each browsing behavior tag item display bar and is used for representing image analysis of each browsing behavior tag item display bar based on each browsing behavior tag item display bar;
determining mapping information of each browsing behavior tag item display bar when the browsing behavior tag item is mapped according to the portrait analysis parameters, and fusing each browsing behavior tag item display bar according to the mapping logic relation of each mapping information to obtain a fused browsing behavior tag item display bar;
extracting corresponding fusion label project node image information according to fusion label project nodes on the fusion browsing behavior label project display strip, grouping the fusion label project node image information according to different object display labels, calculating image adjusting information of each object display label, and selecting a fusion label project node image interval according to the image adjusting information;
when an updating flow for updating a user portrait is generated in the fusion label project node portrait information according to the fusion label project node portrait interval, a portrait mapping label code corresponding to the fusion label project node portrait interval is obtained according to the updating flow;
generating a mapping bit space for recording the portrait mapping label codes, mapping the portrait mapping label codes to the mapping bit space, and setting mapping associated information of the portrait mapping label codes according to the object display labels of the fusion label item node portrait information;
and judging whether the fused label item node portrait information is matched with the user portrait according to the mapping association information, and determining at least one updating parameter for updating the user portrait according to the updating process when the fused label item node portrait information is matched with the user portrait so as to update the user portrait according to the at least one updating parameter.
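For illustration only, the following Python sketch shows one possible reading of the portrait update flow described in this implementation (portrait mapping label codes, the mapping bit space, mapping association information, the match check, and the update parameters). All names, data shapes, and the dictionary-based "bit space" are assumptions not specified by the disclosure.

```python
# Minimal sketch of the portrait-update flow described above. All names and
# data structures here are illustrative assumptions; the disclosure does not
# fix concrete representations.

from typing import Dict, List

def update_user_portrait(user_portrait: Dict[str, float],
                         node_portrait_info: Dict[str, List[str]],
                         update_flow: List[dict]) -> Dict[str, float]:
    """node_portrait_info: object display label -> portrait mapping label codes."""
    # Generate a mapping bit space and map each label code into it, keeping
    # the object display label as the mapping association information.
    mapping_bit_space: Dict[str, str] = {}
    for display_label, codes in node_portrait_info.items():
        for code in codes:
            mapping_bit_space[code] = display_label  # association info

    # Judge whether the fused node portrait information matches the portrait;
    # here "matches" is assumed to mean that some mapped code already appears
    # as a portrait dimension.
    matches = any(code in user_portrait for code in mapping_bit_space)
    if not matches:
        return user_portrait

    # Apply at least one update parameter derived from the update flow.
    for step in update_flow:            # e.g. {"code": "...", "delta": 0.1}
        code, delta = step["code"], step.get("delta", 0.0)
        user_portrait[code] = user_portrait.get(code, 0.0) + delta
    return user_portrait

portrait = {"tech_affinity": 0.4}
info = {"electronics": ["tech_affinity", "price_sensitive"]}
flow = [{"code": "tech_affinity", "delta": 0.1}]
print(update_user_portrait(portrait, info, flow))   # {'tech_affinity': 0.5}
```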
In a possible implementation manner of the first aspect, the method further includes:
and distributing corresponding live broadcast commodity scene information to the anchor audience account according to the user portrait of the anchor audience account.
In a second aspect, an embodiment of the present disclosure further provides a user portrait parsing apparatus based on e-commerce big data, where the apparatus is applied to an artificial intelligence platform communicatively connected with a plurality of video live broadcast terminals, and the apparatus includes:
a first acquisition module, configured to acquire browsing behavior information of the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of the live broadcast audience account, and to determine target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information;
a second obtaining module, configured to obtain at least one piece of browsing access object information corresponding to an associated portrait feature target from comparison browsing nodes of the browsing node set, where the comparison browsing node is any browsing node in the browsing node set except the initial browsing node, and the associated portrait feature target is a portrait target of an anchor audience account corresponding to the portrait feature target in the associated browsing nodes of the comparison browsing nodes;
the first extraction module is used for extracting the characteristics of the current browsing access object information and the target browsing object information to obtain a current browsing characteristic vector and a target browsing characteristic vector which respectively correspond to the current browsing access object information and the target browsing object information; the current browsing access object information is any browsing access object information in the at least one browsing access object information;
the second extraction module is used for extracting the browsing relation characteristics of the current browsing characteristic vector and the target browsing characteristic vector to obtain a current positive browsing relation characteristic and a current negative browsing relation characteristic; the current positive browsing relation feature and the current negative browsing relation feature are both browsing relation features;
and the portrait analysis module is used for comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
In a third aspect, an embodiment of the present disclosure further provides a user portrait parsing system based on e-commerce big data, where the user portrait parsing system based on e-commerce big data includes an artificial intelligence platform and a plurality of video live broadcast terminals in communication connection with the artificial intelligence platform;
the artificial intelligence platform is used for acquiring browsing behavior information of the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of the live broadcast audience account, and determining target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information;
the artificial intelligence platform is used for acquiring at least one piece of browsing access object information corresponding to an associated portrait feature target from comparative browsing nodes of the browsing node set, wherein the comparative browsing nodes are any browsing nodes except the initial browsing node in the browsing node set, and the associated portrait feature target is a portrait target of an anchor audience account corresponding to the portrait feature target in the associated browsing nodes of the comparative browsing nodes;
the artificial intelligence platform is used for extracting the characteristics of the current browsing access object information and the target browsing object information to obtain a current browsing characteristic vector and a target browsing characteristic vector which respectively correspond to the current browsing access object information and the target browsing object information; the current browsing access object information is any browsing access object information in the at least one browsing access object information;
the artificial intelligence platform is used for extracting browsing relation characteristics of the current browsing characteristic vector and the target browsing characteristic vector to obtain a current positive browsing relation characteristic and a current negative browsing relation characteristic; the current positive browsing relation feature and the current negative browsing relation feature are both browsing relation features;
the artificial intelligence platform is used for comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
In a fourth aspect, an embodiment of the present disclosure further provides an artificial intelligence platform, where the artificial intelligence platform includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is used for being communicatively connected to at least one live video terminal, the machine-readable storage medium is used for storing a program, an instruction, or a code, and the processor is used for executing the program, the instruction, or the code in the machine-readable storage medium to execute the method for parsing a user portrait based on e-commerce big data in any one of the first aspect or possible designs of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer-readable storage medium, where instructions are stored, and when executed, cause a computer to perform a method for user representation parsing based on e-commerce big data in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, browsing behavior information generated by the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of that account is obtained; deep analysis then yields a current positive browsing relation feature and a current negative browsing relation feature, which are compared to obtain browsing tendency information; browsing behavior tag information of the anchor audience account is determined based on the browsing tendency information; and the browsing behavior tag information is analyzed according to a preset artificial intelligence model to generate a user portrait of the anchor audience account. In this way, the viewer's further browsing behavior on the live broadcast commodity recommendation information can be mined in depth, the viewer's browsing tendency can be mined effectively, and the depth dimension of the user portrait analysis process for the viewer is expanded, which facilitates subsequent information push and viewer experience optimization.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present disclosure and should therefore not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of a user portrait parsing system based on e-commerce big data according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating a method for parsing a user portrait based on e-commerce big data according to an embodiment of the present disclosure;
FIG. 3 is a functional block diagram of a user portrait parsing apparatus based on e-commerce big data according to an embodiment of the present disclosure;
fig. 4 is a block diagram illustrating a structure of an artificial intelligence platform for implementing the above-described user portrait parsing method based on e-commerce big data according to an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in detail below with reference to the drawings, and the specific operation methods in the method embodiments can also be applied to the device embodiments or the system embodiments.
FIG. 1 is a schematic interaction diagram of a user portrait parsing system 10 based on e-commerce big data according to an embodiment of the present disclosure. The user portrait parsing system 10 based on e-commerce big data may include an artificial intelligence platform 100 and a video live broadcast terminal 200 communicatively connected with the artificial intelligence platform 100. The user portrait parsing system 10 shown in FIG. 1 is only one possible example; in other possible embodiments, the system 10 may include only some of the components shown in FIG. 1 or may include other components.
In this embodiment, the video live broadcast terminal 200 may include a mobile device, a tablet computer, a laptop computer, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a control device of a smart electrical device, a smart monitoring device, a smart television, a smart camera, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include various virtual reality products and the like.
In this embodiment, the artificial intelligence platform 100 and the live video terminal 200 in the system 10 for analyzing user portrait based on e-commerce big data may cooperatively perform the method for analyzing user portrait based on e-commerce big data described in the following method embodiments, and the following detailed description of the method embodiments may be referred to for the specific steps performed by the artificial intelligence platform 100 and the live video terminal 200.
To solve the technical problem in the background art, fig. 2 is a schematic flow chart of a user portrait parsing method based on e-commerce big data according to an embodiment of the present disclosure, which can be executed by the artificial intelligence platform 100 shown in fig. 1, and the user portrait parsing method based on e-commerce big data is described in detail below.
Step S110, acquiring browsing behavior information of the live broadcast commodity recommendation information of the live broadcast audience account by the video live broadcast terminal 200 of the live broadcast audience account, and determining target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information.
And step S120, acquiring at least one piece of browsing access object information corresponding to the associated portrait feature target from the comparative browsing nodes of the browsing node set.
Step S130, performing feature extraction on both the current browsing access object information and the target browsing object information to obtain a current browsing feature vector and a target browsing feature vector respectively corresponding to the current browsing access object information and the target browsing object information.
Step S140, extracting the browsing relation characteristics of the current browsing characteristic vector and the target browsing characteristic vector to obtain the current positive browsing relation characteristics and the current negative browsing relation characteristics.
And step S150, comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from at least one piece of browsing access object information based on the browsing tendency information, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
In this embodiment, the online interactive live broadcast commodity may be any commodity used for commodity promotion and display, such as, but not limited to, electronic products, agricultural products, infant products, maternity products, and the like.
In this embodiment, the browsing behavior information may be used to indicate a series of information generated by the anchor viewer account in the process of browsing the live broadcast commodity recommendation information, such as link click information, commodity staying information, commodity ordering information, and the like, which is not limited specifically herein.
In this embodiment, the comparison browsing node may be any browsing node in the browsing node set except the initial browsing node, and the associated portrait feature target may be a portrait target of an anchor audience account corresponding to the portrait feature target in the associated browsing node of the comparison browsing node. Each browsing node may refer to a respective node (e.g., a time node, etc.) in a browsing behavior process. The portrait target may be a target object to be acquired in the portrait analysis.
In this embodiment, the current browsing access object information may be any browsing access object information in the at least one browsing access object information. The current positive browsing relation characteristic and the current negative browsing relation characteristic are both browsing relation characteristics. It should be noted that the current positive browsing relationship characteristic and the current negative browsing relationship characteristic may refer to a tendency behavior characteristic of a browsing behavior generated by the anchor viewer account in a browsing tendency dimension.
In this embodiment, the preset artificial intelligence model may be trained in advance, for example, the preset artificial intelligence model may be obtained by inputting the sample browsing behavior tag information and the corresponding user portrait category into the initial artificial intelligence model for training in advance.
Based on the above steps, this embodiment obtains the browsing behavior information of the video live broadcast terminal of a live broadcast audience account on the live broadcast commodity recommendation information of that account, performs deep analysis to obtain the current positive browsing relation feature and the current negative browsing relation feature, compares them to obtain browsing tendency information, determines the browsing behavior tag information of the anchor audience account based on the browsing tendency information, and then analyzes the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate the user portrait of the anchor audience account. In this way, the viewer's further browsing behavior on the live broadcast commodity recommendation information can be mined in depth, the viewer's browsing tendency can be mined effectively, and the depth dimension of the user portrait analysis process for the viewer is expanded, facilitating subsequent information push and viewer experience optimization.
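For orientation only, the following Python sketch strings steps S110 to S150 together in a heavily simplified form. The bag-of-words featurization, the cosine similarity standing in for the positive/negative browsing relation comparison of step S140, the training data, and the scikit-learn classifier standing in for the preset artificial intelligence model are all assumptions made for illustration; they are not the claimed implementation.

```python
# Illustrative end-to-end sketch of steps S110-S150 (assumptions throughout).

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

vectorizer = CountVectorizer()
# Hypothetical training pairs: browsing behaviour tag text -> portrait class.
train_tags = ["click stay order electronics", "click leave agriculture"]
train_portraits = ["tech_shopper", "casual_browser"]
model = LogisticRegression().fit(vectorizer.fit_transform(train_tags), train_portraits)

def analyse_user_portrait(target_text: str, access_texts: list) -> str:
    tgt = vectorizer.transform([target_text]).toarray()[0]          # S110 / S130
    best_text, best_tendency = access_texts[0], -np.inf
    for text in access_texts:                                        # S120
        cur = vectorizer.transform([text]).toarray()[0]              # S130
        denom = (np.linalg.norm(cur) * np.linalg.norm(tgt)) or 1.0
        tendency = float(cur @ tgt) / denom                          # S140 / S150, collapsed
        if tendency > best_tendency:
            best_tendency, best_text = tendency, text
    # The best-matching access object stands in for the browsing behaviour
    # tag information; the classifier yields the user portrait class (S150).
    return model.predict(vectorizer.transform([best_text]))[0]

print(analyse_user_portrait("click stay order electronics",
                            ["click stay electronics", "click leave agriculture"]))
```

A more detailed sketch of the positive/negative browsing relation features appears later, alongside sub-steps S141 to S146.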
In a possible implementation manner, for step S110, in the process of acquiring the browsing behavior information of the live video broadcast terminal 200 of the live viewer account on the live commodity recommendation information of the live viewer account, the following sub-steps may be specifically implemented, and are described in detail below.
And a substep S111, acquiring an online interactive live broadcast commodity for which the video live broadcast terminal 200 initiates an interaction, determining an image interaction area according to a live broadcast graphic interaction tag of the online interactive live broadcast commodity, and acquiring an interaction behavior sequence and interaction type conversion information of each live broadcast audience account corresponding to the image interaction area.
And a substep S112, respectively inputting the interaction behavior sequence and the interaction type conversion information into the trained information push network, extracting a first interaction feature vector of each live audience account through a first feature vector extraction layer of the information push network, and extracting a second interaction feature vector of each live audience account through a second feature vector extraction layer of the information push network.
And a substep S113, fusing the first interaction feature vector and the second interaction feature vector through a feature vector fusion layer of the information push network to obtain a target interaction feature vector.
And a substep S114, determining live broadcast interaction hotspot nodes of online interactive live broadcast commodities corresponding to the live broadcast audience accounts according to the target interaction feature vectors, respectively generating live broadcast commodity recommendation information of the corresponding live broadcast audience accounts according to the live broadcast interaction hotspot nodes, and sending the live broadcast commodity recommendation information to the corresponding video live broadcast terminal 200.
And a substep S115, obtaining browsing behavior information of the video live broadcast terminal 200 for live broadcast commodity recommendation information of the live broadcast audience account.
In this embodiment, the live broadcast graphic interaction tag may refer to the interaction control type generated when a viewer initiates a graphic interaction during the live broadcast, for example, the interaction control type in a certain interaction area of a certain online interactive live broadcast commodity, or the interaction control type at a certain interaction time node of a certain online interactive live broadcast commodity, where the interaction control type may represent a control type used in the interaction process, such as a "like" control.
In this embodiment, the image interaction area may be specifically determined according to a node where the live graphic interaction tag is located, for example, the node where the live graphic interaction tag is located is an interaction area B of the online interactive live commodity a in the live broadcasting process, and then the image interaction area is the interaction area B.
In this embodiment, the interaction behavior sequence may be used to represent a specifically generated interaction behavior (for example, a bullet screen behavior, a gift behavior, and the like), and the interaction type conversion information may be used to represent a forward-backward conversion process of a specifically generated interaction behavior type, for example, information in a process of switching from a bullet screen behavior to a gift behavior.
In this embodiment, the live broadcast interactive hotspot node may be configured to represent point of interest information corresponding to each live broadcast audience account, for example, a related commodity C in a live broadcast process of a certain online interactive live broadcast commodity a, or a temporarily mentioned online interactive live broadcast commodity D released next time, and the like, which is not specifically limited herein.
Based on the above steps, the image interaction area is determined according to the live broadcast graphic interaction tag of the online interactive live broadcast commodity; the interaction behavior sequence and the interaction type conversion information of each live broadcast audience account corresponding to the image interaction area are then obtained; the first interaction feature vector and the second interaction feature vector of each live broadcast audience account are extracted and fused to obtain the target interaction feature vector; and the live broadcast interaction hotspot node of the online interactive live broadcast commodity corresponding to each live broadcast audience account is determined, so that the live broadcast commodity recommendation information is pushed. In this way, the interactive behavior characteristics of viewers can be effectively mined from their graphic interactions during the live broadcast of online interactive live broadcast commodities, so that live broadcast commodity recommendation information that the viewers may be interested in is pushed to them, reducing their search cost.
In one possible implementation manner, for the sub-step S111, the interaction type conversion information may specifically include an interaction node, a conversion type, and an interaction image position.
The interaction node may be the time node or area node at which the interaction type conversion occurs; the conversion type may be the interaction types before and after the conversion; and the interaction image position may be the position of the interaction image at the time of the conversion.
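A minimal data-structure sketch of this interaction type conversion information is shown below; the field names, types, and example values are assumptions introduced only to make the three components concrete.

```python
# Hypothetical container for the interaction type conversion information
# (interaction node, conversion type, interaction image position).

from dataclasses import dataclass
from typing import Tuple

@dataclass
class InteractionTypeConversion:
    interaction_node: str             # time node or area node of the conversion
    conversion_type: Tuple[str, str]  # (type before conversion, type after conversion)
    image_position: Tuple[int, int]   # position of the interaction image

conversion = InteractionTypeConversion(
    interaction_node="t=00:12:35",
    conversion_type=("bullet_screen", "gift"),
    image_position=(320, 180),
)
```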
On this basis, step S112 may be embodied by the following exemplary sub-steps, which are described in detail below.
And a substep S1121, inputting the interaction behavior sequence into a first feature vector extraction layer, and performing feature extraction on the interaction behavior in the interaction behavior sequence to obtain corresponding interaction behavior features.
In the substep S1122, feature migration processing is performed on the interactive behavior feature by using the interaction deviation parameters corresponding to the first feature vector extraction layer and the live broadcast graphic interactive label, so as to obtain the interactive behavior feature after the feature migration processing.
And a substep S1123 of extracting a first interaction feature vector of each live audience account according to the interaction behavior feature after the feature migration processing.
And a substep S1124 of inputting the interaction type conversion information into the second feature vector extraction layer, and performing feature extraction on the interaction type conversion information to obtain an interaction node feature, an interaction image position feature and a conversion type feature.
And a substep S1125 of performing feature migration processing on the interaction node features, the interaction image position features, and the conversion type features by using the interaction deviation parameters corresponding to the second feature vector extraction layer and the live broadcast graphic interaction tag, to obtain an interaction type conversion information array.
And a substep S1126 of obtaining the interactive behavior characteristics corresponding to the interactive behavior sequence, inputting the interactive behavior characteristics into the interactive type conversion information array for characteristic fusion to obtain a fused characteristic vector array, and extracting a second interactive characteristic vector of each live audience account according to the characteristic vector array.
In a possible implementation manner, for step S113, in order to improve fusion efficiency, in this embodiment the feature vector nodes of the first interaction feature vector and the second interaction feature vector that correspond one to one may be fused through the feature vector fusion layer of the information push network to obtain the target interaction feature vector.
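A simplified numpy sketch of sub-steps S1121 to S1126 and step S113 follows. The linear projections standing in for the two feature vector extraction layers, the additive form of the "feature migration" driven by an interaction deviation parameter, the averaging used as node-wise fusion, and all dimensions are assumptions for illustration, not the structure of the actual information push network.

```python
# Simplified sketch of the two extraction branches, feature migration, and
# node-wise fusion (all parameter choices are assumptions).

import numpy as np

rng = np.random.default_rng(0)
DIM = 8
W_first = rng.normal(size=(DIM, DIM))        # first feature vector extraction layer
W_second = rng.normal(size=(DIM, DIM))       # second feature vector extraction layer
deviation = {"like": 0.5, "favorite": -0.2}  # interaction deviation parameters per tag

def extract_first_vector(behavior_seq: np.ndarray, tag: str) -> np.ndarray:
    features = behavior_seq @ W_first                 # S1121: interaction behaviour features
    migrated = features + deviation.get(tag, 0.0)     # S1122: feature migration
    return migrated.mean(axis=0)                      # S1123: first interaction feature vector

def extract_second_vector(conversion_info: np.ndarray, behavior_seq: np.ndarray,
                          tag: str) -> np.ndarray:
    node_feats = conversion_info @ W_second           # S1124: node/position/type features
    migrated = node_feats + deviation.get(tag, 0.0)   # S1125: conversion information array
    fused = migrated + behavior_seq @ W_first         # S1126: fuse with behaviour features
    return fused.mean(axis=0)

def fuse_node_wise(first_vec: np.ndarray, second_vec: np.ndarray) -> np.ndarray:
    # Step S113: fuse feature vector nodes that correspond one to one.
    return (first_vec + second_vec) / 2.0

behavior_seq = rng.normal(size=(5, DIM))      # 5 interaction behaviours (synthetic)
conversion_info = rng.normal(size=(5, DIM))   # matching conversion records (synthetic)
first_vec = extract_first_vector(behavior_seq, "like")
second_vec = extract_second_vector(conversion_info, behavior_seq, "like")
target_interaction_vector = fuse_node_wise(first_vec, second_vec)
```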
In a possible implementation manner, for step S114, in order to accurately determine the live broadcast interaction hotspot node of the online interaction live broadcast commodity corresponding to each live broadcast audience account, the following exemplary sub-steps may be specifically implemented, which are described in detail below.
And a substep S1141 of obtaining interactive item feature data corresponding to interactive items participating in the interaction of the online interactive live broadcast commodities from the target interactive feature vector, wherein the interactive item feature data is obtained by performing feature embedded expression on interactive item process vectors in the target interactive feature vector by adopting a feature expression form matched with interactive item associated commodities of corresponding interactive items.
And a substep S1142 of performing feature mapping on the interactive item feature data sent by the corresponding interactive items according to the feature mapping modes respectively matched with the feature expression forms to obtain corresponding interactive item process vectors.
And a substep S1143 of analyzing the interaction rule of each interaction project process vector respectively and determining the interest degree of the interaction rule corresponding to each interaction project. The interest degree of the interaction rule is used for reflecting the interest degree of the interaction items participating in the online interactive live broadcast commodities.
And a substep S1144 of screening out the highest interaction rule interest degree from the interaction rule interest degrees corresponding to the interaction items, and determining the interaction rule comparison degree corresponding to each interaction item according to the comparison value between the interaction rule interest degree of that interaction item and the highest interaction rule interest degree, wherein the interaction rule comparison degree corresponding to an interaction item is positively correlated with the corresponding comparison value.
And a substep S1145 of performing node source tracing on the interactive item process vector of the interactive item with the interactive rule comparison degree greater than the set interactive rule comparison degree to obtain live broadcast interactive hotspot nodes of the online interactive live broadcast commodities corresponding to the live broadcast audience accounts, wherein the live broadcast interactive hotspot nodes are used for representing interactive objects or interactive results in the interactive process.
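The following short sketch illustrates sub-steps S1141 to S1145: score each interaction item, normalize each score against the highest one, and keep only the items whose comparison degree exceeds a set threshold for node tracing. The vector-norm scoring function, the threshold value, and the item names are assumptions, not the disclosed computation.

```python
# Sketch of interest degree, comparison degree, and hotspot-item selection.

import numpy as np

def select_hotspot_items(item_vectors: dict, threshold: float = 0.8) -> list:
    """item_vectors: interaction item -> interaction item process vector."""
    # S1143: interaction rule interest degree per item (assumed: vector norm)
    interest = {item: float(np.linalg.norm(vec)) for item, vec in item_vectors.items()}
    # S1144: comparison degree = ratio against the highest interest degree
    highest = max(interest.values()) or 1.0
    comparison = {item: score / highest for item, score in interest.items()}
    # S1145: keep items above the set comparison degree for node tracing
    return [item for item, degree in comparison.items() if degree > threshold]

items = {"related_commodity_C": np.array([0.9, 0.8]),
         "next_commodity_D": np.array([0.2, 0.1])}
print(select_hotspot_items(items))   # -> ['related_commodity_C']
```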
In a possible implementation manner, in step S110, in the process of determining target browsing object information including the portrait feature target from the initial browsing node of the browsing node set corresponding to the browsing behavior information, a browsing object area corresponding to the portrait feature target may be obtained from the comparative browsing node of the browsing node set corresponding to the browsing behavior information, so as to obtain the target browsing object information.
Meanwhile, regarding step S120, in the process of acquiring at least one browsing access object information corresponding to the associated portrait feature target from the comparison browsing nodes in the browsing node set, the following exemplary sub-steps may be specifically implemented, and are described in detail below.
In the substep S121, in the comparison browsing node, target browsing behavior tag information corresponding to the associated portrait feature target is determined.
And the substep S122 is to take the target browsing behavior tag information as a search target, obtain at least one search interval according to a preset search range, search browsing access object information of the at least one search interval, and obtain at least one browsing access object information corresponding to the associated portrait feature target.
In this embodiment, the preset search range may be a preset search interval for determining search areas of different types of browsing behavior tags.
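As a concrete illustration of sub-steps S121 and S122, the sketch below treats the browsing access object information as an ordered list of nodes and collects the entries inside a search interval whose radius comes from a preset range keyed by tag type. The list layout and the per-type radii are assumed for illustration only.

```python
# Sketch of searching browsing access object information around a target tag.

def search_access_objects(node_objects: list, target_index: int,
                          preset_range: dict, tag_type: str) -> list:
    """node_objects: browsing access object information ordered by browsing node."""
    radius = preset_range.get(tag_type, 1)                   # preset search range per tag type
    lo = max(0, target_index - radius)                       # one search interval around the
    hi = min(len(node_objects), target_index + radius + 1)   # target tag information
    return node_objects[lo:hi]

objects = ["obj_a", "obj_b", "obj_c", "obj_d", "obj_e"]
print(search_access_objects(objects, target_index=2,
                            preset_range={"click": 1, "order": 2}, tag_type="order"))
# -> ['obj_a', 'obj_b', 'obj_c', 'obj_d', 'obj_e']
```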
In one possible implementation, step S130 may be implemented by the following exemplary sub-steps, which are described in detail below.
And a substep S131, extracting naive Bayes characteristics from the current browsing access object information and the target browsing object information to obtain corresponding current naive Bayes characteristics and target naive Bayes characteristics respectively.
And a substep S132 of extracting feature vectors of both the current naive Bayes feature and the target naive Bayes feature to obtain a current browsing feature vector and a target browsing feature vector which respectively correspond to each other.
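One way to picture sub-steps S131 and S132 is shown below, assuming the "naive Bayes features" are the per-class log probabilities of a scikit-learn MultinomialNB model fitted on historical browsing texts, which then serve directly as the browsing feature vectors. The training texts, class labels, and text representation are hypothetical.

```python
# Sketch of naive Bayes feature extraction feeding the browsing feature vectors.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

history_texts = ["click stay order phone", "click leave farm goods",
                 "stay order phone case", "leave quickly"]
history_labels = ["interested", "not_interested", "interested", "not_interested"]

vectorizer = CountVectorizer()
nb = MultinomialNB().fit(vectorizer.fit_transform(history_texts), history_labels)

def browsing_feature_vector(text: str) -> np.ndarray:
    """S131: naive Bayes features; S132: use them as the browsing feature vector."""
    return nb.predict_log_proba(vectorizer.transform([text]))[0]

current_vec = browsing_feature_vector("click stay order phone")   # current access object info
target_vec = browsing_feature_vector("stay order phone case")     # target browsing object info
```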
In one possible implementation, step S140 may be implemented by the following exemplary sub-steps, which are described in detail below.
And a substep S141, extracting browsing relation characteristics from each browsing characteristic interval related to the current browsing characteristic vector and the target browsing characteristic vector, and determining a browsing relation description vector corresponding to the live audience account.
And a substep S142, determining an associated browsing vector segment sequence according to the browsing relationship description vector, extracting a browsing frequent vector segment of the browsing relationship description vector, and extracting a concentrated associated browsing vector segment of the browsing frequent vector segment associated with the associated browsing vector segment sequence by taking a set threshold as a browsing vector segment interval, wherein the browsing frequent vector segment is used for indicating that the number of associable vector points in a browsing relationship description axis in the browsing relationship description vector is greater than a set number of formed vector segments.
And a substep S143, for at least two concentrated associated browsing vector points associated in the concentrated associated browsing vector segment, generating a plurality of vector tendency units according to the vector tendency direction of the axis segment corresponding to each concentrated associated browsing vector point, calculating the overlapping segments between all the axis segments of the next concentrated associated browsing vector point and all the axis segments of the previous concentrated associated browsing vector point, and obtaining a corresponding vector tendency direction table from the obtained overlapping segments.
And a substep S144 of obtaining the vector tendency units with the matched vector tendency directions and the overlapping section between the axis sections of the two vector tendency units being smaller than the maximum continuous overlapping section of the browsing relationship description vector in the overlapping section according to the vector tendency direction table to form a concentrated associated browsing vector point space.
And a substep S145, matching the vector point spaces in each concentrated associated browsing vector point space to obtain a matching interval of each matched concentrated associated browsing vector point space, and determining the browsing relation description axis direction of the matching interval of each concentrated associated browsing vector point space.
And a substep S146, determining a positive direction and a negative direction in the direction of the browsing relationship description axis according to the matching interval of each concentrated associated browsing vector point space, and taking the feature information of the matching interval of the concentrated associated browsing vector point space corresponding to the determined positive direction and negative direction as the current positive browsing relationship feature and the current negative browsing relationship feature.
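The following is a heavily reduced sketch of sub-steps S141 to S146. It models the browsing relation description vector as the element-wise product of the two mean-centred feature vectors, treats windows with enough non-negligible entries as "frequent" segments, and lets their positive and negative parts stand in for the current positive and current negative browsing relation features. The window size, thresholds, and this whole reduction are assumptions, not the claimed segment-matching procedure.

```python
# Reduced sketch of positive/negative browsing relation feature extraction.

import numpy as np

def browsing_relation_features(cur_vec: np.ndarray, tgt_vec: np.ndarray,
                               window: int = 4, min_points: int = 2,
                               eps: float = 1e-3):
    relation = (cur_vec - cur_vec.mean()) * (tgt_vec - tgt_vec.mean())   # S141
    pos_feat, neg_feat = 0.0, 0.0
    for start in range(0, len(relation) - window + 1, window):           # S142: segments
        segment = relation[start:start + window]
        if np.count_nonzero(np.abs(segment) > eps) < min_points:         # not "frequent"
            continue
        pos_feat += segment[segment > 0].sum()                           # S143-S146: positive
        neg_feat += -segment[segment < 0].sum()                          # and negative parts
    return pos_feat, neg_feat

cur = np.array([0.2, 0.9, 0.1, 0.7, 0.3, 0.8, 0.0, 0.1])
tgt = np.array([0.1, 0.8, 0.2, 0.9, 0.9, 0.1, 0.0, 0.2])
print(browsing_relation_features(cur, tgt))
```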
In one possible implementation, step S150 may be implemented by the following exemplary sub-steps, which are described in detail below.
And a substep S151, comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information.
In the substep S152, browsing tendency information with the highest tendency is selected from the browsing tendency information, and target browsing tendency information is obtained.
In the substep S153, browsing access object information matched with the target browsing tendency information is determined from at least one piece of browsing access object information, and target browsing access object information is obtained.
And a substep S154, calculating browsing behavior parameters of the target browsing access object information and the associated portrait feature target to obtain current browsing behavior parameters, and obtaining historical browsing behavior parameters, where the historical browsing behavior parameters are browsing behavior parameters corresponding to historical browsing nodes before the comparative browsing node.
And a substep S155, determining browsing behavior tag information of the anchor viewer account from the at least one browsing access object information according to the current browsing behavior parameter and the historical browsing behavior parameter.
And a substep S156, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
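As a rough illustration of substeps S152 to S154, the Python sketch below assumes that each piece of browsing tendency information carries a numeric tendency degree tied to one browsing access object, and that a browsing behavior parameter can be reduced to an attribute-overlap score against the associated portrait feature target. The data structures and the behavior_score helper are hypothetical, not the platform's actual representation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Tendency:
    object_id: str   # which browsing access object this tendency points at
    degree: float    # tendency degree; higher means a stronger tendency

def pick_target(tendencies: List[Tendency], access_objects: Dict[str, dict]) -> dict:
    """S152 + S153: take the highest-degree tendency and return the matching
    browsing access object information."""
    target = max(tendencies, key=lambda t: t.degree)
    return access_objects[target.object_id]

def behavior_score(access_object: dict, portrait_target: dict) -> float:
    """S154 (current parameters): attribute overlap between the target browsing
    access object and the associated portrait feature target."""
    shared = set(access_object["attributes"]) & set(portrait_target["attributes"])
    return len(shared) / max(len(portrait_target["attributes"]), 1)

# Historical parameters would be the same score computed on browsing nodes that
# precede the comparative browsing node, e.g. history = [0.4, 0.55, 0.5].
```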
In one possible implementation, sub-step S151 can be specifically implemented through the following exemplary embodiments.
(1) Comparing the current positive browsing relationship feature with the current negative browsing relationship feature to obtain a browsing relationship feature difference.
(2) Obtaining the browsing tendency information according to the browsing tendency relationship corresponding to the browsing relationship feature difference.
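A minimal sketch of the two items above, under the assumption that the positive and negative browsing relationship features are equal-length numeric vectors and that the browsing tendency relationship is a simple sign-with-margin rule; both assumptions are illustrative only.

```python
def browsing_tendency(positive: list[float], negative: list[float], margin: float = 0.05) -> str:
    """Compare the positive/negative feature difference and map it to coarse tendency information."""
    diff = sum(p - n for p, n in zip(positive, negative)) / max(len(positive), 1)
    if diff > margin:
        return "leaning_positive"   # stronger positive browsing relationship
    if diff < -margin:
        return "leaning_negative"   # stronger negative browsing relationship
    return "neutral"

print(browsing_tendency([0.73, 0.8], [0.7, 0.6]))   # -> leaning_positive
```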
In one possible implementation, sub-step S155 can be specifically implemented through the following exemplary embodiments.
(1) Acquiring coincident browsing behavior parameters between the current browsing behavior parameters and the historical browsing behavior parameters.
(2) Determining the browsing behavior tag information of the associated anchor audience account from the at least one piece of browsing access object information according to the coincident browsing behavior parameters.
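The following rough sketch illustrates the coincident-parameter idea: the current and historical browsing behavior parameters are treated as keyed scores, only the keys present in both are kept, and candidate browsing access objects are ranked by how well they cover the coincident parameters. The keys and the ranking rule are assumed for illustration.

```python
def coincident_params(current: dict, history: dict) -> dict:
    """Keep only browsing behavior parameters present in both the current and
    the historical node, taking the smaller score as the coincident value."""
    return {k: min(current[k], history[k]) for k in current.keys() & history.keys()}

def pick_tag_info(candidates: dict, coincident: dict) -> list:
    """Rank candidate browsing access objects by how well their parameter keys
    cover the coincident browsing behavior parameters."""
    def coverage(keys):
        return sum(coincident.get(k, 0.0) for k in keys)
    return sorted(candidates, key=lambda name: coverage(candidates[name]), reverse=True)

current = {"beauty": 0.8, "outdoor": 0.3, "digital": 0.6}
history = {"beauty": 0.7, "digital": 0.2}
print(pick_tag_info({"obj_a": ["beauty"], "obj_b": ["outdoor"]}, coincident_params(current, history)))
# -> ['obj_a', 'obj_b']: obj_a covers the coincident "beauty" parameter best
```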
In one possible implementation, sub-step S156 can be specifically implemented through the following exemplary embodiments.
(1) Analyzing and identifying each browsing behavior tag item in the browsing behavior tag information of the anchor audience account according to the preset artificial intelligence model, determining the candidate portrait features of each browsing behavior tag item, determining the main portrait distribution information of the browsing behavior tag item according to the candidate portrait features, finding the slave portrait distribution information corresponding to the browsing behavior tag item based on the main portrait distribution information, and merging the browsing behavior tag items into at least one browsing behavior tag item display bar according to the slave portrait distribution information.
(2) Acquiring, based on each browsing behavior tag item display bar, the portrait analysis parameter that corresponds to the display bar and is used for representing the portrait analysis of the display bar.
(3) Determining, according to the portrait analysis parameters, the mapping information of each browsing behavior tag item display bar when the browsing behavior tag item is mapped, and fusing the browsing behavior tag item display bars according to the mapping logical relationship of each piece of mapping information to obtain a fused browsing behavior tag item display bar.
(4) Extracting the corresponding fused tag item node portrait information according to the fused tag item nodes on the fused browsing behavior tag item display bar, grouping the fused tag item node portrait information according to different object display tags, calculating the portrait adjustment information of each object display tag, and selecting a fused tag item node portrait interval according to the portrait adjustment information.
(5) When an update process for updating the user portrait is generated in the fused tag item node portrait information according to the fused tag item node portrait interval, obtaining the portrait mapping tag code corresponding to the fused tag item node portrait interval according to the update process.
(6) Generating a mapping bit space for recording the portrait mapping tag code, mapping the portrait mapping tag code into the mapping bit space, and setting the mapping association information of the portrait mapping tag code according to the object display tags of the fused tag item node portrait information.
(7) Judging, according to the mapping association information, whether the fused tag item node portrait information matches the user portrait, and, when it matches the user portrait, determining at least one update parameter for updating the user portrait according to the update process, so as to update the user portrait according to the at least one update parameter.
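A heavily simplified Python sketch of items (1) to (7) follows: browsing behavior tag items are grouped into display bars, each bar is fused into a single node, and the user portrait is updated only where the fused information matches the existing portrait. The grouping key, the fusion rule and the match test are illustrative assumptions rather than the disclosed model.

```python
from collections import defaultdict

def build_display_bars(tag_items: list[dict]) -> dict:
    """Item (1): merge browsing behavior tag items into display bars keyed by
    an assumed distribution group."""
    bars = defaultdict(list)
    for item in tag_items:
        bars[item["group"]].append(item)
    return bars

def fuse_bars(bars: dict) -> dict:
    """Stand-in for the display-bar fusion of items (2)-(4): collapse each bar
    into one fused node carrying an averaged weight."""
    return {group: sum(i["weight"] for i in items) / len(items) for group, items in bars.items()}

def update_portrait(portrait: dict, fused: dict, threshold: float = 0.5) -> dict:
    """Stand-in for items (5)-(7): apply an update only where the fused node
    matches an existing portrait dimension (a crude 'matches the user portrait' test)."""
    for group, weight in fused.items():
        if group in portrait and weight >= threshold:
            portrait[group] = 0.7 * portrait[group] + 0.3 * weight   # assumed update parameter
    return portrait

items = [{"group": "beauty", "weight": 0.9}, {"group": "beauty", "weight": 0.7}, {"group": "outdoor", "weight": 0.2}]
print(update_portrait({"beauty": 0.5, "digital": 0.4}, fuse_bars(build_display_bars(items))))
# -> beauty updated to roughly 0.59, digital unchanged
```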
Therefore, after the user portrait of each anchor audience account is determined, corresponding live broadcast commodity scene information can be allocated to the anchor audience account according to its user portrait, so that live broadcast commodities of interest are recommended to the anchor audience account in greater depth, and the anchor audience account is guided, in combination with the specific live broadcast commodity scene, to pay attention to more other commodities similar to the live broadcast commodities of interest.
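As an illustration only, the allocation of live broadcast commodity scene information by user portrait could look like the following sketch, where the scene whose tags best overlap the portrait's strongest dimensions is selected; the scene catalogue and the scoring rule are made-up examples.

```python
def allocate_scene(portrait: dict, scenes: dict) -> str:
    """Pick the live commodity scene whose tags best overlap the user portrait."""
    def score(tags):
        return sum(portrait.get(t, 0.0) for t in tags)
    return max(scenes, key=lambda name: score(scenes[name]))

scenes = {"makeup_tutorial": ["beauty", "skincare"], "camping_gear": ["outdoor", "sports"]}
print(allocate_scene({"beauty": 0.59, "digital": 0.4}, scenes))   # -> makeup_tutorial
```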
Fig. 3 is a schematic diagram of the functional modules of a user portrait analysis device 300 based on e-commerce big data according to an embodiment of the present disclosure. In this embodiment, the user portrait analysis device 300 based on e-commerce big data may be divided into functional modules according to the method embodiments executed by the artificial intelligence platform 100; that is, the following functional modules of the device 300 may be used to execute the method embodiments executed by the artificial intelligence platform 100. The user portrait analysis device 300 based on e-commerce big data may include a first obtaining module 310, a second obtaining module 320, a first extraction module 330, a second extraction module 340, and a portrait analysis module 350. The functions of each of these modules are described in detail below.
The first obtaining module 310 is configured to obtain the browsing behavior information of the video live broadcast terminal 200 of the live broadcast audience account on the live broadcast commodity recommendation information of the live broadcast audience account, and determine target browsing object information including a portrait feature target from the initial browsing node of the browsing node set corresponding to the browsing behavior information. The first obtaining module 310 may be configured to perform the above step S110, and the detailed implementation of the first obtaining module 310 may refer to the detailed description of step S110.
The second obtaining module 320 is configured to obtain at least one piece of browsing access object information corresponding to an associated portrait feature target from a comparative browsing node in the browsing node set, where the comparative browsing node is any browsing node in the browsing node set other than the initial browsing node, and the associated portrait feature target is the portrait target of the anchor audience account corresponding to the portrait feature target in the associated browsing nodes of the comparative browsing node. The second obtaining module 320 may be configured to perform the above step S120, and the detailed implementation of the second obtaining module 320 may refer to the detailed description of step S120.
The first extraction module 330 is configured to perform feature extraction on current browsing access object information and the target browsing object information to obtain a current browsing feature vector and a target browsing feature vector that respectively correspond to them, where the current browsing access object information is any piece of the at least one piece of browsing access object information. The first extraction module 330 may be configured to perform the above step S130, and the detailed implementation of the first extraction module 330 may refer to the detailed description of step S130.
The second extraction module 340 is configured to extract browsing relationship features from the current browsing feature vector and the target browsing feature vector to obtain a current positive browsing relationship feature and a current negative browsing relationship feature, both of which are browsing relationship features. The second extraction module 340 may be configured to perform the above step S140, and the detailed implementation of the second extraction module 340 may refer to the detailed description of step S140.
The portrait analysis module 350 is configured to compare the current positive browsing relationship feature with the current negative browsing relationship feature to obtain browsing tendency information, determine the browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, and analyze the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate the user portrait of the anchor audience account. The portrait analysis module 350 may be configured to perform the above step S150, and the detailed implementation of the portrait analysis module 350 may refer to the detailed description of step S150.
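The following structural sketch shows one way the five functional modules of the device 300 could be composed in software; the class, the callables and their signatures are placeholders for the behavior described above, not the actual implementation.

```python
class UserPortraitAnalysisDevice:
    """Placeholder composition of modules 310-350; each module is passed in as a callable."""

    def __init__(self, first_obtain, second_obtain, first_extract, second_extract, portrait_analysis):
        self.first_obtain = first_obtain            # module 310
        self.second_obtain = second_obtain          # module 320
        self.first_extract = first_extract          # module 330
        self.second_extract = second_extract        # module 340
        self.portrait_analysis = portrait_analysis  # module 350

    def run(self, browsing_behavior_info):
        target_obj, nodes = self.first_obtain(browsing_behavior_info)    # step S110
        access_objs = self.second_obtain(nodes, target_obj)              # step S120
        cur_vec, tgt_vec = self.first_extract(access_objs, target_obj)   # step S130
        pos_feat, neg_feat = self.second_extract(cur_vec, tgt_vec)       # step S140
        return self.portrait_analysis(pos_feat, neg_feat, access_objs)   # step S150
```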
It should be noted that the division of the modules of the above device is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented in the form of software called by a processing element, may all be implemented in the form of hardware, or some modules may be implemented in the form of software called by a processing element while the others are implemented in the form of hardware. For example, the first obtaining module 310 may be a separately arranged processing element, may be integrated into a chip of the device, or may be stored in a memory of the device in the form of program code that a processing element of the device calls in order to execute the functions of the first obtaining module 310. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in a processor element or by an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-chip (SoC).
Fig. 4 is a schematic diagram of the hardware structure of the artificial intelligence platform 100 for implementing the above device according to an embodiment of the present disclosure. As shown in Fig. 4, the artificial intelligence platform 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a transceiver 140.
In a specific implementation process, at least one processor 110 executes computer-executable instructions stored in the machine-readable storage medium 120 (for example, the first obtaining module 310, the second obtaining module 320, the first extraction module 330, the second extraction module 340, and the portrait analysis module 350 included in the user portrait analysis device 300 based on e-commerce big data shown in Fig. 3), so that the processor 110 may execute the user portrait analysis method based on e-commerce big data of the above method embodiments. The processor 110, the machine-readable storage medium 120, and the transceiver 140 are connected through the bus 130, and the processor 110 may be configured to control the transceiving actions of the transceiver 140 so as to transceive data with the video live broadcast terminal 200.
For the specific implementation process of the processor 110, reference may be made to the above method embodiments executed by the artificial intelligence platform 100; the implementation principles and technical effects are similar and are not described herein again.
In the embodiment shown in Fig. 4, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present disclosure may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
The machine-readable storage medium 120 may comprise high-speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus 130 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 130 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus in the figures of the present application is not limited to only one bus or only one type of bus.
In addition, an embodiment of the present disclosure further provides a readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the above user portrait analysis method based on e-commerce big data is implemented.
The readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A user portrait analysis method based on electronic commerce big data is characterized by being applied to an artificial intelligence platform in communication connection with a plurality of live video terminals, and the method comprises the following steps:
acquiring browsing behavior information of a video live broadcast terminal of the live broadcast audience account on live broadcast commodity recommendation information of the live broadcast audience account, and determining target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information;
acquiring at least one piece of browsing access object information corresponding to an associated portrait feature target from comparative browsing nodes of the browsing node set, wherein the comparative browsing nodes are any browsing nodes in the browsing node set except the initial browsing node, and the associated portrait feature target is a portrait target of an anchor audience account corresponding to the portrait feature target in the associated browsing nodes of the comparative browsing nodes;
extracting features of the current browsing access object information and the target browsing object information to obtain a current browsing feature vector and a target browsing feature vector which respectively correspond to the current browsing access object information and the target browsing object information; the current browsing access object information is any piece of browsing access object information in the at least one piece of browsing access object information;
extracting browsing relation features of the current browsing feature vector and the target browsing feature vector to obtain a current positive browsing relation feature and a current negative browsing relation feature; the current positive browsing relation feature and the current negative browsing relation feature are both browsing relation features;
and comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information based on the browsing tendency information, analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model, and generating a user portrait of the anchor audience account.
2. The method for parsing a user portrait based on e-commerce big data as claimed in claim 1, wherein the step of determining target browsing object information including a portrait feature target from an initial browsing node of a browsing node set corresponding to the browsing behavior information comprises:
acquiring a browsing object area corresponding to the portrait feature target from the comparative browsing node of the browsing node set corresponding to the browsing behavior information to obtain the target browsing object information;
the step of obtaining at least one browsing access object information corresponding to the associated portrait feature target from the comparative browsing nodes of the browsing node set includes:
in the comparative browsing node, determining target browsing behavior tag information corresponding to the associated portrait feature target;
and acquiring at least one search interval according to a preset search range by taking the target browsing behavior tag information as a search target, and searching browsing access object information of the at least one search interval to obtain the at least one browsing access object information corresponding to the associated portrait feature target, wherein the preset search range is a preset search interval for determining search areas of different types of browsing behavior tags.
3. The method as claimed in claim 1, wherein the step of extracting features of the current browsing access object information and the target browsing object information to obtain corresponding current browsing feature vector and target browsing feature vector includes:
carrying out naive Bayes feature extraction on the current browsing access object information and the target browsing object information to obtain corresponding current naive Bayes feature and target naive Bayes feature respectively;
and extracting feature vectors of the current naive Bayes feature and the target naive Bayes feature to obtain the current browsing feature vector and the target browsing feature vector which respectively correspond to each other.
4. The method for parsing a user portrait based on e-commerce big data as recited in claim 1, wherein the extracting browsing relation features from the current browsing feature vector and the target browsing feature vector to obtain a current positive browsing relation feature and a current negative browsing relation feature comprises:
extracting browsing relation features of each browsing feature interval related to the current browsing feature vector and the target browsing feature vector, and determining a browsing relation description vector corresponding to the live broadcast audience account;
determining an associated browsing vector segment sequence according to the browsing relation description vector, extracting browsing frequent vector segments of the browsing relation description vector, and, taking a set threshold as the browsing vector segment interval, extracting the concentrated associated browsing vector segments of the browsing frequent vector segments associated with the associated browsing vector segment sequence, wherein a browsing frequent vector segment is a vector segment formed when the number of associable vector points on the browsing relation description axis of the browsing relation description vector is greater than a set number;
for at least two associated concentrated associated browsing vector points in the concentrated associated browsing vector segments, generating a plurality of vector tendency units according to the vector tendency directions of the axis intervals corresponding to the concentrated associated browsing vector points, calculating the overlapping interval between every axis interval of the next concentrated associated browsing vector point and every axis interval of the previous concentrated associated browsing vector point, and obtaining a corresponding vector tendency direction table from the obtained overlapping intervals;
according to the vector tendency direction table, acquiring the vector tendency units whose vector tendency directions match and whose overlapping interval between the axis intervals of the two vector tendency units is smaller than the maximum continuous overlapping interval of the browsing relation description vector, to form a concentrated associated browsing vector point space;
matching the vector point spaces in each concentrated associated browsing vector point space to obtain the matching interval of each matched concentrated associated browsing vector point space, and determining the direction of the browsing relation description axis according to the matching interval of each concentrated associated browsing vector point space;
and determining a positive direction and a negative direction of the browsing relation description axis according to the matching interval of each concentrated associated browsing vector point space, and taking the feature information of the matching intervals of the concentrated associated browsing vector point spaces corresponding to the determined positive and negative directions as the current positive browsing relation feature and the current negative browsing relation feature, respectively.
5. The method as claimed in claim 1, wherein the step of comparing the current positive browsing relationship characteristic with the current negative browsing relationship characteristic to obtain browsing tendency information, determining browsing behavior tag information of the anchor viewer account from the at least one browsing access object information based on the browsing tendency information, and analyzing the browsing behavior tag information of the anchor viewer account according to a preset artificial intelligence model to generate the user portrait of the anchor viewer account comprises:
comparing the current positive browsing relation characteristic with the current negative browsing relation characteristic to obtain browsing tendency information;
selecting browsing tendency information with the highest tendency degree from the browsing tendency information to obtain target browsing tendency information;
determining browsing access object information matched with the target browsing tendency information from the at least one piece of browsing access object information to obtain target browsing access object information;
calculating browsing behavior parameters of the target browsing access object information and the associated portrait feature target to obtain current browsing behavior parameters and obtain historical browsing behavior parameters; the historical browsing behavior parameters are browsing behavior parameters corresponding to historical browsing nodes before the comparison browsing node;
determining browsing behavior tag information of the anchor audience account from the at least one piece of browsing access object information according to the current browsing behavior parameters and the historical browsing behavior parameters;
and analyzing the browsing behavior tag information of the anchor audience account according to a preset artificial intelligence model to generate a user portrait of the anchor audience account.
6. The method as claimed in claim 5, wherein the step of comparing the current positive browsing relation feature with the current negative browsing relation feature to obtain browsing tendency information comprises:
comparing the browsing relation feature difference between the current positive browsing relation feature and the current negative browsing relation feature;
and obtaining the browsing tendency information according to the browsing tendency relationship corresponding to the browsing relation feature difference.
7. The method of claim 5, wherein the step of determining the browsing behavior tag information of the anchor viewer account from the at least one browsing access object information according to the current browsing behavior parameter and the historical browsing behavior parameter comprises:
acquiring a coincident browsing behavior parameter between the current browsing behavior parameter and the historical browsing behavior parameter;
and determining browsing behavior tag information of the associated anchor audience account from the at least one piece of browsing access object information according to the coincident browsing behavior parameter.
8. The method as claimed in claim 5, wherein the step of generating the user representation of the anchor viewer account by analyzing the browsing behavior tag information of the anchor viewer account according to a preset artificial intelligence model comprises:
analyzing and identifying each browsing behavior tag item in the browsing behavior tag information of the anchor audience account according to the preset artificial intelligence model, determining candidate portrait features of each browsing behavior tag item, determining main portrait distribution information of the browsing behavior tag item according to the candidate portrait features, finding out slave portrait distribution information corresponding to the browsing behavior tag item based on the main portrait distribution information, and merging each browsing behavior tag item into at least one browsing behavior tag item display bar according to the slave portrait distribution information;
acquiring, based on each browsing behavior tag item display bar, a portrait analysis parameter which corresponds to the browsing behavior tag item display bar and is used for representing portrait analysis of the browsing behavior tag item display bar;
determining mapping information of each browsing behavior tag item display bar when the browsing behavior tag item is mapped according to the portrait analysis parameters, and fusing the browsing behavior tag item display bars according to the mapping logical relationship of each piece of mapping information to obtain a fused browsing behavior tag item display bar;
extracting corresponding fused tag item node portrait information according to the fused tag item nodes on the fused browsing behavior tag item display bar, grouping the fused tag item node portrait information according to different object display tags, calculating portrait adjustment information of each object display tag, and selecting a fused tag item node portrait interval according to the portrait adjustment information;
when an update process for updating a user portrait is generated in the fused tag item node portrait information according to the fused tag item node portrait interval, obtaining a portrait mapping tag code corresponding to the fused tag item node portrait interval according to the update process;
generating a mapping bit space for recording the portrait mapping tag code, mapping the portrait mapping tag code into the mapping bit space, and setting mapping association information of the portrait mapping tag code according to the object display tags of the fused tag item node portrait information;
and judging whether the fused tag item node portrait information matches the user portrait according to the mapping association information, and, when the fused tag item node portrait information matches the user portrait, determining at least one update parameter for updating the user portrait according to the update process, so as to update the user portrait according to the at least one update parameter.
9. The method for parsing a user portrait based on e-commerce big data as recited in any one of claims 1-8, further comprising:
and distributing corresponding live broadcast commodity scene information to the anchor audience account according to the user portrait of the anchor audience account.
10. An artificial intelligence platform, comprising a processor, a machine-readable storage medium, and a network interface, wherein the machine-readable storage medium, the network interface, and the processor are connected via a bus system, the network interface is configured to be communicatively connected to at least one live video terminal, the machine-readable storage medium is configured to store a program, instructions, or codes, and the processor is configured to execute the program, instructions, or codes in the machine-readable storage medium to perform the method for parsing a user portrait based on e-commerce big data according to any one of claims 1 to 9.
CN202010389256.1A 2020-05-10 2020-05-10 User portrait analysis method based on electronic commerce big data and artificial intelligence platform Active CN111626817B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010389256.1A CN111626817B (en) 2020-05-10 2020-05-10 User portrait analysis method based on electronic commerce big data and artificial intelligence platform
CN202011341364.8A CN112465590A (en) 2020-05-10 2020-05-10 User portrait analysis method and system based on electronic commerce big data
CN202011341365.2A CN112465591A (en) 2020-05-10 2020-05-10 User portrait analysis method, system and platform based on electronic commerce big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010389256.1A CN111626817B (en) 2020-05-10 2020-05-10 User portrait analysis method based on electronic commerce big data and artificial intelligence platform

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202011341365.2A Division CN112465591A (en) 2020-05-10 2020-05-10 User portrait analysis method, system and platform based on electronic commerce big data
CN202011341364.8A Division CN112465590A (en) 2020-05-10 2020-05-10 User portrait analysis method and system based on electronic commerce big data

Publications (2)

Publication Number Publication Date
CN111626817A true CN111626817A (en) 2020-09-04
CN111626817B CN111626817B (en) 2020-12-22

Family

ID=72272806

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011341364.8A Withdrawn CN112465590A (en) 2020-05-10 2020-05-10 User portrait analysis method and system based on electronic commerce big data
CN202010389256.1A Active CN111626817B (en) 2020-05-10 2020-05-10 User portrait analysis method based on electronic commerce big data and artificial intelligence platform
CN202011341365.2A Withdrawn CN112465591A (en) 2020-05-10 2020-05-10 User portrait analysis method, system and platform based on electronic commerce big data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011341364.8A Withdrawn CN112465590A (en) 2020-05-10 2020-05-10 User portrait analysis method and system based on electronic commerce big data

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202011341365.2A Withdrawn CN112465591A (en) 2020-05-10 2020-05-10 User portrait analysis method, system and platform based on electronic commerce big data

Country Status (1)

Country Link
CN (3) CN112465590A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641893A (en) * 2021-07-15 2021-11-12 红毛猩猩(深圳)科技有限公司 Preference recommendation method based on user portrait deep analysis technology
CN114245165B (en) * 2021-12-16 2023-04-25 北京达佳互联信息技术有限公司 Live broadcast recommendation method and device, electronic equipment and storage medium
CN114329213B (en) * 2021-12-30 2022-08-30 深圳前海浩方科技有限公司 E-commerce platform optimization method, device, equipment and medium based on user behaviors
CN115689648B (en) * 2022-10-28 2023-07-28 广东柏烨互动网络科技有限公司 User information processing method and system applied to directional delivery
CN117135379B (en) * 2023-10-26 2023-12-22 武汉耳东信息科技有限公司 Live broadcast platform data analysis management system based on big data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394126A (en) * 2014-11-07 2015-03-04 广州酷狗计算机科技有限公司 Information recommendation method, server, client side and system
CN107608990A (en) * 2016-07-12 2018-01-19 上海视畅信息科技有限公司 A kind of live personalized recommendation method
CN108664654A (en) * 2018-05-18 2018-10-16 北京奇艺世纪科技有限公司 A kind of main broadcaster's recommendation method and device based on user's similarity
CN110896488A (en) * 2018-08-23 2020-03-20 武汉斗鱼网络科技有限公司 Recommendation method for live broadcast room and related equipment
CN109040297A (en) * 2018-08-30 2018-12-18 广州酷狗计算机科技有限公司 User's portrait generation method and device
CN110225388A (en) * 2019-06-06 2019-09-10 腾讯科技(深圳)有限公司 Processing method, device, electronic equipment and the storage medium of living broadcast interactive
CN110781321A (en) * 2019-08-28 2020-02-11 腾讯科技(深圳)有限公司 Multimedia content recommendation method and device
CN110798692A (en) * 2019-09-27 2020-02-14 咪咕视讯科技有限公司 Video live broadcast method, server and storage medium
CN110769270A (en) * 2019-11-08 2020-02-07 网易(杭州)网络有限公司 Live broadcast interaction method and device, electronic equipment and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112216359A (en) * 2020-09-29 2021-01-12 百度国际科技(深圳)有限公司 Medical data verification method and device and electronic equipment
CN112216359B (en) * 2020-09-29 2024-03-26 百度国际科技(深圳)有限公司 Medical data verification method and device and electronic equipment
CN112015998A (en) * 2020-11-02 2020-12-01 北京值得买科技股份有限公司 Commodity recommendation method based on user portrait
CN112671886A (en) * 2020-12-21 2021-04-16 刚倩 Information pushing method based on edge calculation and artificial intelligence and big data server
CN112671885A (en) * 2020-12-21 2021-04-16 刚倩 Information analysis method based on cloud computing and big data and digital financial service platform
CN112671886B (en) * 2020-12-21 2021-09-03 曙光星云信息技术(北京)有限公司 Information pushing method based on edge calculation and artificial intelligence and big data server
CN112990323A (en) * 2021-03-23 2021-06-18 李光伟 User portrait mining method based on big data online mode and machine learning system
CN114117089A (en) * 2022-01-25 2022-03-01 广州拟实网络科技有限公司 Intelligent information pushing management system based on image data processing and analyzing
CN114117089B (en) * 2022-01-25 2022-04-08 广州拟实网络科技有限公司 Intelligent information pushing management system based on image data processing and analyzing

Also Published As

Publication number Publication date
CN112465591A (en) 2021-03-09
CN112465590A (en) 2021-03-09
CN111626817B (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN111626817B (en) User portrait analysis method based on electronic commerce big data and artificial intelligence platform
CN111540466B (en) Big data based intelligent medical information pushing method and big data medical cloud platform
CN111626816B (en) Image interaction information processing method based on e-commerce live broadcast and cloud computing platform
CN111598164B (en) Method, device, electronic equipment and storage medium for identifying attribute of target object
US8983179B1 (en) System and method for performing supervised object segmentation on images
CN111444422B (en) Logistics information recommendation method and device, logistics distribution system and server
CN111611581B (en) Internet of things-based network big data information anti-disclosure method and cloud communication server
CN111556330B (en) Electronic commerce information pushing method based on artificial intelligence and artificial intelligence cloud platform
CN112116434B (en) Commodity searching method and system based on big data and electronic mall and cloud service platform
CN112069325B (en) Big data processing method based on block chain offline payment and cloud service pushing platform
CN113657087A (en) Information matching method and device
CN111723227B (en) Data analysis method based on artificial intelligence and Internet and cloud computing service platform
CN109816429B (en) Information popularization method and device
CN110413869B (en) Method and device for pushing information
CN110781388A (en) Information recommendation method and device for image information
CN111787081B (en) Information processing method based on Internet of things interaction and intelligent communication and cloud computing platform
CN111460419A (en) Internet of things artificial intelligence face verification method and Internet of things cloud server
CN114332477A (en) Feature recognition model training method, article feature recognition method and article feature recognition device
CN112784061A (en) Knowledge graph construction method and device, computing equipment and storage medium
CN112579451A (en) Test method, device and equipment based on test element sequencing
CN111767925A (en) Method, device, equipment and storage medium for extracting and processing features of article picture
CN111787080B (en) Data processing method based on artificial intelligence and Internet of things interaction and cloud computing platform
CN114816147B (en) Object attribute processing method and device, electronic equipment and storage medium
CN113282826A (en) Game recommendation method and device and cloud server
CN111813968A (en) New media data management method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 603, building B, liufangyuan Road, Dongyi Industrial Park, Donghu New Technology Development Zone, Wuhan, Hubei Province

Applicant after: Shi Wei

Address before: 215123, No. 99, benevolence Road, Lake Villa, Suzhou Industrial Park, Jiangsu, China

Applicant before: Shi Wei

CB02 Change of applicant information
TA01 Transfer of patent application right

Effective date of registration: 20201207

Address after: Room 507, 5 / F, building B3, B4, B5, lingzhi Industrial Park, 128 Jinhai Road, Yuhua District, Changsha City, Hunan Province

Applicant after: CHANGSHA JUMEI NETWORK TECHNOLOGY Co.,Ltd.

Address before: Room 603, building B, liufangyuan Road, Dongyi Industrial Park, Donghu New Technology Development Zone, Wuhan, Hubei Province

Applicant before: Shi Wei

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant