CN113536117A - Product pushing method, device, equipment and medium - Google Patents

Product pushing method, device, equipment and medium

Info

Publication number
CN113536117A
CN113536117A
Authority
CN
China
Prior art keywords
user
label
data
chat
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110729349.9A
Other languages
Chinese (zh)
Inventor
张映
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weikun Shanghai Technology Service Co Ltd
Original Assignee
Weikun Shanghai Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weikun Shanghai Technology Service Co Ltd filed Critical Weikun Shanghai Technology Service Co Ltd
Priority to CN202110729349.9A
Publication of CN113536117A
Legal status: Pending

Classifications

    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/958 Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F40/216 Parsing using statistical methods
    • G06Q30/0631 Item recommendations (electronic shopping)
    • H04L51/52 User-to-user messaging in packet-switching networks for supporting social networking services
    • H04L67/55 Push-based network services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Computing Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application relates to the technical field of big data and discloses a product pushing method comprising the following steps: obtaining chat data of a user, where the chat data comprises text data, voice data, expression data or image data; determining whether the user has a user portrait; if not, generating a plurality of labels according to the chat data and forming a user portrait from the labels and the user's information, where the labels comprise a text label, a voice label, a chat character label and an active label indicating that the user chats actively; if so, updating the user portrait according to the chat data; and generating a link for pushing a product according to the user portrait and sending the link to the terminal chatting with the user. The product pushing method, device, equipment and medium address the technical problems that SMS push and telephone promotion in the prior art are untargeted, inefficient and costly in labor and money.

Description

Product pushing method, device, equipment and medium
Technical Field
The present application relates to the field of big data technologies, and in particular, to a method, an apparatus, a device, and a medium for pushing a product.
Background
Currently, most marketing systems still rely on the original methods of telephone marketing and SMS (Short Message Service) push. Such marketing struggles to maintain client stickiness and leaves client potential untapped: most salespeople promote products at random and have no access to users' behavior data, so they cannot market to clients in a targeted way. As a result, product recommendations are inaccurate, marketing efficiency is low, labor and financial expenditure is high, and products cannot be accurately marketed to users.
Disclosure of Invention
The application mainly aims to provide a product pushing method, a product pushing device, product pushing equipment and a product pushing medium, and aims to solve the technical problems that product data recommendation is not accurate, promotion efficiency is low, and manpower and financial expenditure are high in the prior art.
The application provides a product pushing method, which comprises the following steps:
obtaining chat data of a user; the chat data comprises text data, voice data, expression data or image data;
searching a user portrait database using the user's information to determine whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label indicating that the user chats actively;
if the user has a user portrait, judging whether the chat data is text data or voice data;
if the chat data is neither text data nor voice data, increasing the weight of the chat character label by a set value;
if the chat data is text data or voice data, counting the occurrences of the text labels or voice labels in the chat data;
judging whether the occurrences of a text label or voice label exceed a second set number of times;
if the occurrences of the text label or voice label are less than or equal to the second set number of times, keeping the weight of that label unchanged;
if the occurrences of the text label or voice label exceed the second set number of times, increasing the weight of that label by a set value according to its number of occurrences, and reducing by a set value the weight of text labels or voice labels that do not appear, thereby completing the update of the user portrait;
and generating a link for pushing a product according to the updated user portrait, and sending the link to the terminal chatting with the user.
Further, after the step of searching the user portrait database using the user's information to determine whether the user has a user portrait, the method further includes:
if the user does not have a user portrait, generating a plurality of labels according to the chat data and forming a user portrait from the labels and the user's information; wherein the labels comprise a text label, a voice label, a chat character label and an active label indicating that the user chats actively.
Further, when the chat data is text data, the step of generating a plurality of tags according to the chat data includes:
skimming and matching the chat data with preset sensitive words in a database;
if the matching is successful, counting the occurrence times of the preset sensitive words in the chat data;
judging whether the occurrence frequency of the preset sensitive words is greater than a first set frequency;
and if so, using the preset sensitive word as a text label.
Further, when the chat data is voice data, the step of generating a plurality of tags according to the chat data includes:
converting an analog sound signal in the voice data into a digital signal;
matching the digital signals with preset digital signals one by one; the preset digital signal is a digital signal of a specified vocabulary;
if the matching is successful, extracting the voice data corresponding to the successfully matched digital signal as target voice data;
and taking the target voice data as a voice tag.
Further, when the chat data includes image data, the step of generating a plurality of tags from the chat data includes:
sending the image data to an auditing end for review by an auditor;
receiving the audit result sent by the auditing end;
and if the audit result is a pass, generating a chat character label representing the user's chat character.
Further, the step of generating a link to push a product from the user representation includes:
matching the products to push according to each label in the user portrait;
adding "like" or "dislike" options to the product to form a link of the product;
and generating an instruction for pushing the link of the product, and sending the instruction to a terminal for chatting with the user.
Further, after the step of generating a link to push a product according to the user representation, the method further comprises:
judging whether a user selects options in the product or not;
if the user selects the option in the product, judging whether the user has a feedback label;
if the user has a feedback label, increasing the weight of the set numerical value of the feedback label, and if the user does not have the feedback label, adding the feedback label;
if the user does not select the option in the product, judging whether the user opens the product;
if the user opens the product, increasing the weight of the label setting value corresponding to the product;
and if the user does not open the product, the weight of the label corresponding to the product is kept unchanged.
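The feedback steps above can be sketched as follows; the function and field names (`apply_feedback`, `feedback`, the increment `SET_VALUE`) are illustrative assumptions, not identifiers from the patent:

```python
SET_VALUE = 1  # hypothetical weight increment; the patent leaves the value configurable

def apply_feedback(portrait, product_labels, selected_option, opened):
    """Adjust label weights after a product link is pushed.

    portrait: mapping of label text -> weight (illustrative structure).
    selected_option: the like/dislike option the user clicked, or None.
    opened: whether the user opened the product without clicking an option.
    """
    if selected_option is not None:
        # User clicked a like/dislike option: strengthen (or create) a feedback label.
        if "feedback" in portrait:
            portrait["feedback"] += SET_VALUE
        else:
            portrait["feedback"] = SET_VALUE
    elif opened:
        # No option selected but the product was opened: boost its matching labels.
        for label in product_labels:
            portrait[label] = portrait.get(label, 0) + SET_VALUE
    # If the product was neither opened nor reacted to, weights stay unchanged.
    return portrait
```

This keeps the three branches of the description distinct: option selected, product merely opened, and product ignored.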
The application also provides a product pushing device, comprising:
the obtaining module, used for obtaining the chat data of the user; the chat data comprises text data, voice data, expression data or image data;
the first judgment module, used for searching the user portrait database using the user's information to determine whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label indicating that the user chats actively;
the second judging module is used for judging whether the chatting data is text data or voice data when the user has the user portrait;
the increasing module is used for increasing the weight of the set numerical value of the chat character label when the chat data is not the text data or the voice data;
the counting module is used for counting the occurrence frequency of the text labels or the voice labels in the chat data when the chat data is text data or voice data;
the third judging module is used for judging whether the occurrence frequency of the text label or the voice label is greater than a second set frequency;
the holding module is used for keeping the weight of the text label or the voice label unchanged when the occurrence frequency of the text label or the voice label is less than or equal to a second set frequency;
the updating module is used for increasing the weight of the set numerical value of the text label or the voice label according to the frequency of the text label or the voice label when the frequency of the text label or the voice label is greater than a second set frequency, and reducing the weight of the set numerical value of the text label or the voice label which does not appear to finish the updating of the portrait of the user;
and the link module is used for generating a link for pushing a product according to the updated user portrait and sending the link to a terminal for chatting with the user.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
The beneficial effects of this application are as follows: through the chat data between the user and the salesperson or robot, the weights of the labels in the user portrait are dynamically increased or decreased, so that recent changes in the user's preferences are captured and products can be pushed according to the adjusted label weights. The number and frequency of pushed products can also be determined from the label weights. The user's preferences are thus grasped quickly, products are promoted in a targeted way, the promoted products are those the user actually needs, the user's potential is deeply mined, client stickiness is maintained, promotion efficiency is high, labor and financial expenditure is saved, and suitable promotion targets are located accurately.
Drawings
Fig. 1 is a schematic flow chart of a product pushing method according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a product pushing device according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in fig. 1, the present application provides a product pushing method, including:
s1, obtaining chat data of the user; the chat data comprises text data, voice data, expression data or image data;
s2, searching a user portrait database using the user's information to determine whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label indicating that the user chats actively;
s3, if the user has a user portrait, judging whether the chat data is text data or voice data;
s4, if the chat data is neither text data nor voice data, increasing the weight of the chat character label by a set value;
s5, if the chat data is text data or voice data, counting the occurrences of the text labels or voice labels in the chat data;
s6, judging whether the occurrences of a text label or voice label exceed a second set number of times;
s7, if the occurrences are less than or equal to the second set number of times, keeping the weight of that label unchanged;
s8, if the occurrences exceed the second set number of times, increasing the weight of that label by a set value according to its number of occurrences, and reducing by a set value the weight of text labels or voice labels that do not appear, thereby completing the update of the user portrait;
s9, generating a link for pushing a product according to the updated user portrait and sending the link to the terminal chatting with the user.
As described in step S1 above, the chat data input by the user is obtained. In practice the user may chat with a salesperson or a robot to learn about products, and the chat data may cover a completed chat session or the content sent during one real-time chat between the user and the salesperson or robot. The chat content comprises voice data, text data, expression data or image data; different types of chat data indicate different states, characters or interests of the user, so a different analysis method is adopted for each type. For example, if the chat data is text data, whether preset sensitive words appear in it and how often they appear is checked, so that the preset sensitive words can be used directly as text labels. If the chat data is voice data, whether the digital signal converted from it matches a preset digital signal is judged, so that the specified vocabulary can be used directly as a voice label. If the chat data is image data or expression data, a chat character label indicating that the user shows character in chat is generated. In this way the user's preferences and needs can be grasped quickly and products can be pushed accurately.
As described in step S2 above, after the user's chat data is acquired, a user portrait needs to be generated or updated according to the chat data and its type. If the user is chatting for the first time, no user portrait exists yet and one must be created; if the user is not chatting for the first time, a user portrait already exists and is updated according to the chat data.
As described in steps S3-S4 above, when the user has a user portrait, the portrait is updated from the user's chat data after each further chat; every label in the portrait carries a weight value expressing how strongly it reflects the user's preferences and character. It is first judged whether the chat data is text data or voice data. If it is neither, the data is expression data, image data or the like; it is then judged whether the data is image data or expression data, and in either case the weight of the corresponding chat character label is increased by a set value. The set value can be configured according to need and changed in real time, and is not limited here.
As described in steps S5-S8 above, if the user's chat data is judged to be voice data or text data, the label-generation steps for voice and text data are performed. It is first checked whether the new chat data repeatedly contains preset sensitive words that have not yet become text labels or voice labels, and any new labels are recorded in the user portrait (specifically: step 1, judge whether a text label or voice label has already been formed for the preset sensitive word or for the voice data corresponding to the preset digital signal; step 2, if not, judge whether the preset sensitive word appears in the chat data more than a first set number of times, or whether the preset digital signal matches the digital signal converted from the user's voice data; step 3, if so, use the preset sensitive word as a text label, or extract the voice data corresponding to the matched digital signal as a voice label). Then the occurrences of the existing text labels or voice labels in the chat data are counted, and it is judged whether they exceed a second set number of times (which may be 2 or 3, set according to the specific situation). When a label's occurrences exceed the second set number of times, the weight of that label is increased by a set value; the weights of labels that do not appear are reduced by a set value, because their absence indicates the user may no longer be interested.
When a text label or voice label occurs no more than the second set number of times, the user may simply have been unable to avoid using the word in chat, so the weight of that label is kept unchanged.
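A minimal sketch of the weight update in steps S5-S8, assuming the portrait is a plain mapping from label text to weight and treating the "second set number of times" and the "set value" as configurable parameters (all names here are illustrative, not from the patent):

```python
def update_text_labels(portrait, chat_tokens, second_set_times=2, set_value=1):
    """Update weights of existing text/voice labels after a new chat.

    portrait: mapping of label text -> weight.
    chat_tokens: the words recognized in the new chat data.
    """
    for label in list(portrait):
        count = chat_tokens.count(label)       # S5: occurrences in the chat
        if count > second_set_times:           # S6/S8: frequent label -> raise weight
            portrait[label] += set_value * count
        elif count == 0:                       # S8: absent label -> lower weight, floor at 0
            portrait[label] = max(portrait[label] - set_value, 0)
        # S7: labels present but at or below the threshold keep their weight
    return portrait
```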
As described in step S9 above, when the user already has a user portrait, their preferences and needs may change over subsequent chats. The portrait therefore needs to be updated from the chat data of each later chat, so that changes in the user's preferences, needs, character and so on are known in real time. The pushed products are matched against the updated portrait, products can be pushed more accurately, and client stickiness is maintained.
In one embodiment, after the step of searching the user portrait database using the user's information to determine whether the user has a user portrait, the method further comprises:
s21, if the user does not have a user portrait, generating a plurality of labels according to the chat data and forming a user portrait from the labels and the user's information; wherein the labels comprise a text label, a voice label, a chat character label and an active label indicating that the user chats actively.
As described in step S21 above, when the user has no user portrait, a plurality of labels are generated from the user's chat data. The labels represent the user's preferences, character, needs and so on, and include text labels, voice labels, chat character labels and, in some embodiments, an active label indicating that the user actively chats with the salesperson or robot. The text labels and voice labels express the user's preferences and needs, so products can be selected and pushed in a targeted way and the probability of acceptance is improved. The chat character label and active label reflect the user's character, mood and interest; the frequency of product pushing can be raised or lowered accordingly, which improves pushing efficiency and avoids annoying the user.
The label types comprise text labels, voice labels, chat character labels and active labels; there may be several text labels, voice labels and chat character labels, reflecting the user's different preferences and needs. The user's information generally comprises the name, gender, account number and other information that can identify the user. The labels and the user's information are combined into the user portrait, so that the portrait can be updated later and products can be pushed in a targeted way according to it.
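The assembly of labels and user information into a portrait might look like the following sketch; the dictionary layout and field names are assumptions for illustration only:

```python
def build_portrait(user_info, labels):
    """Assemble a first-time user portrait from basic user info and labels
    mined from chat data.

    labels: iterable of (kind, text) pairs, kind in {"text", "voice", "character"}.
    """
    portrait = {
        "name": user_info.get("name"),
        "account": user_info.get("account"),
        "labels": {},          # "kind:text" -> weight
        "active": False,       # set later if the user initiates chats
    }
    for kind, text in labels:
        key = f"{kind}:{text}"
        # Repeated labels start with a higher weight.
        portrait["labels"][key] = portrait["labels"].get(key, 0) + 1
    return portrait
```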
In one embodiment, before the step of acquiring the chat data input by the user, the method further includes:
s01, acquiring target chat software;
s02, acquiring a configuration list corresponding to the target chat software according to the target chat software;
and S03, adding the configuration list and accessing the target chat software.
As described in steps S01-S03 above, the software through which the user chats with the salesperson or chat robot is generally mainstream chat software, such as WeChat or QQ. The target chat software is the software from which the user's chat data is acquired, and it must be accessed before chat data can be obtained. Before access, the interfaces required by mainstream chat software are prepared and the code they need is built into the system; their common attributes are extracted and configured as configuration items, so that target chat software can be accessed without customizing those attributes. When docking different chat software, only the configuration list specific to the current software needs to be added for the target chat software to be docked successfully.
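The configuration-list mechanism described above can be sketched as follows, with common attributes built in and only software-specific items supplied per target; every field name and URL here is a placeholder, not a real chat-software API:

```python
# Hypothetical per-software configuration lists; only software-specific
# items are stored here, while shared attributes are built into the system.
CONFIG_LISTS = {
    "wechat": {"api_base": "https://example.invalid/wechat", "auth": "token"},
    "qq": {"api_base": "https://example.invalid/qq", "auth": "key"},
}

def access_chat_software(name):
    """Return the merged configuration needed to dock target chat software
    (sketch of steps S01-S03)."""
    common = {"timeout_s": 30, "retries": 3}   # pre-built common attributes
    specific = CONFIG_LISTS.get(name)
    if specific is None:
        raise KeyError(f"no configuration list registered for {name!r}")
    # Software-specific items override the common defaults.
    return {**common, **specific}
```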
In one embodiment, when the chat data is text data, the step of generating a plurality of tags according to the chat data includes:
s211, matching the chat data with preset sensitive words in a database;
s212, if the matching is successful, counting the occurrence frequency of the preset sensitive words in the chat data;
s213, judging whether the occurrence frequency of the preset sensitive words is greater than a first set frequency;
and S214, if yes, using the preset sensitive word as a text label.
As described in step S211 above, the administrator adds a plurality of preset sensitive words in advance. The preset sensitive words may be any words capable of representing products, such as "fund" or "trust", and the administrator can add and delete them in real time to ensure that they comprehensively cover both existing and new products.
As described in steps S212-S214 above, when the user's chat data is text data, it is searched for any preset sensitive word added in advance by the administrator. When such a word is found, its occurrences are counted, and when they exceed a first set number of times (which may be 3, 4 and so on, set according to the specific situation), the user is considered to have a certain interest and the preset sensitive word is used directly as a text label. For example, if "fund" appears 3 times in the chat data, indicating that the user is interested in funds, "fund" is used directly as a text label in the user portrait, so that fund-related products are pushed to the user according to that label.
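Steps S211-S214 amount to counting sensitive-word occurrences against a threshold. A crude sketch (substring counting stands in for proper tokenization, and the threshold default is illustrative, chosen so that the 3-occurrence "fund" example above qualifies):

```python
def text_labels_from_chat(chat_text, sensitive_words, first_set_times=2):
    """Return the preset sensitive words that recur often enough in the
    chat text to become text labels (sketch of steps S211-S214)."""
    labels = []
    for word in sensitive_words:
        # S212-S213: count occurrences and compare with the first set number of times.
        if chat_text.count(word) > first_set_times:
            labels.append(word)   # S214: the word becomes a text label
    return labels
```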
In one embodiment, when the chat data is voice data, the step of generating a plurality of tags according to the chat data includes:
s221, converting the analog sound signals in the voice data into digital signals;
s222, matching the digital signals with preset digital signals one by one; the preset digital signal is a digital signal of a specified vocabulary;
s223, if the matching is successful, extracting the voice data corresponding to the successfully matched digital signal as target voice data;
and S224, using the target voice data as a voice tag.
As described in steps S221-S224 above, when the user's chat data is voice data, its analog sound signal is converted into a digital signal. The administrator adds the digital signals of specified vocabulary in advance to form the preset digital signals; the specified vocabulary may be the preset sensitive words or other words that characterize products. The converted digital signal is matched against the preset digital signals: the match succeeds if the signals are the same and fails otherwise. After a successful match, the segment of the user's voice data corresponding to the matched digital signal is extracted as the target voice data, which serves as a voice label in the user portrait.
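Steps S221-S224 can be sketched with a toy quantizer standing in for the A/D conversion and an exact-match rule standing in for signal matching; a real system would use proper speech processing, so everything here is illustrative:

```python
def digitize(samples, levels=256):
    """Crudely quantize an analog sample sequence into integer levels
    (stand-in for the A/D conversion of step S221)."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    return tuple(int((s - lo) / span * (levels - 1)) for s in samples)

def voice_label(samples, preset_signals):
    """Match the digitized signal against preset digital signals of the
    specified vocabulary (steps S222-S224); return the matched word, which
    would serve as the voice label, or None if nothing matches."""
    digital = digitize(samples)
    for word, preset in preset_signals.items():
        if digital == preset:      # S222: exact match against the preset signal
            return word            # S223-S224: matched segment becomes the voice label
    return None
```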
In one embodiment, when the chat data includes image data, the step of generating a plurality of tags from the chat data includes:
s231, sending the image data to an auditing end so that an auditor can audit the image data conveniently;
s232, receiving an audit result sent by an audit end;
and S233, if the verification result is passed, generating a chat character label representing the character of the user.
As described in steps S231-S233 above, when the user's chat data includes expression data or image data, the user's chat character can be inferred from it: a user who likes sending expressions or images frequently is likely cheerful and less likely to feel pestered or bored by pushes. It can therefore first be judged whether the chat data contains expression data; if so, a chat character label is generated directly. The label can be expressed with words such as "expressive" or "expression enthusiast", or simply with a description stating that the user chats with character; it only records whether the user shows character in chat and has no further classification. When the chat data includes image data, the content is uncertain: it may be expression-like content reflecting chat character, but may also be illegal or malicious. The image data must therefore be sent to an auditing end for manual review by an auditor.
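The routing described above (expression data labels directly, image data only after manual audit) can be sketched as follows; the `audit` callback stands in for the human review step and all names are assumptions:

```python
def character_label(chat_item, audit):
    """Decide whether a chat item yields a chat character label
    (sketch of steps S231-S233).

    chat_item: dict with a "type" field ("expression", "image", ...).
    audit: callable standing in for the manual review at the auditing end.
    """
    if chat_item["type"] == "expression":
        # Expression data implies chat character directly, no review needed.
        return "has_character"
    if chat_item["type"] == "image":
        # Image content is uncertain, so it must pass manual audit first.
        return "has_character" if audit(chat_item["data"]) else None
    return None   # text/voice data does not produce a chat character label
```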
In one embodiment, before the step of generating a plurality of labels according to the chat data, the method further includes:
S021, judging whether the user actively chats;
S022, if the user actively chats, generating an active label.
As described in steps S021-S022 above, besides the chat data reflecting the user's tastes and interests, whether the user actively seeks out a salesperson or robot to chat also strongly expresses the user's interest. According to common personality habits, people initiate a chat only when they are quite interested; therefore, when the user actively finds the salesperson or robot to chat, the user can be taken to have strong interest. At that point an active tag (which may be represented by the word "active") is generated and added to the user portrait, and the frequency of pushing products to the user is determined according to whether the portrait contains the active tag, so that products can be pushed more efficiently.
In one embodiment, before the step of determining whether the chat data is text data or voice data, the method further includes:
S031, judging whether the user actively chats;
S032, if the user actively chats, increasing the weight of the active tag by a set numerical value;
S033, if the user passively chats, counting the number of passive chats of the user, and judging whether the number of passive chats reaches a third set number of times;
S034, if the number of passive chats of the user reaches the third set number of times, decreasing the weight of the active tag by the set numerical value.
As described in steps S031-S034 above, whether the user actively chats is also an important factor in judging the user's interest; according to common personality habits, a user initiates a chat only when quite interested. It is therefore judged whether the user actively finds the salesperson or robot to chat; if so, this indicates strong interest, and the weight of the active tag is increased by a set numerical value (the value can be set as needed and is not limited here). However, the user may take the initiative only once, being interested at that moment and then losing interest, and continued high-frequency pushing would then become annoying. Therefore, when the user chats passively (i.e., the salesperson or robot initiates the chat), the number of passive chats is counted, and once it reaches a third set number of times, the weight of the active tag is decreased, until the weight reaches zero and the active tag is deleted. The user's initiative is thus kept up to date, and the push frequency of pushed products is determined accordingly.
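The active/passive bookkeeping of steps S031-S034 might look like the following sketch; the threshold and step values are assumptions, since the patent leaves the "set numerical value" and the "third set number of times" open:

```python
# Illustrative sketch of steps S031-S034; THIRD_SET_COUNT and STEP are
# assumed values (the patent does not fix either).
THIRD_SET_COUNT = 3   # passive chats tolerated before the weight is lowered
STEP = 1.0            # the "set numerical value" added to / subtracted from the weight

def update_active_tag(portrait, user_initiated):
    """Raise the active tag's weight on a user-initiated chat; after enough
    passive chats lower it, and delete the tag once the weight reaches zero."""
    if user_initiated:
        portrait["active_weight"] = portrait.get("active_weight", 0.0) + STEP
        portrait["passive_count"] = 0
    else:
        portrait["passive_count"] = portrait.get("passive_count", 0) + 1
        if portrait["passive_count"] >= THIRD_SET_COUNT:
            portrait["passive_count"] = 0
            new_w = portrait.get("active_weight", 0.0) - STEP
            if new_w <= 0:
                portrait.pop("active_weight", None)   # active tag deleted
            else:
                portrait["active_weight"] = new_w
    return portrait
```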
In one embodiment, after the step of increasing the weight of the set numerical value of the text label or the voice label according to the number of times of occurrence of the text label or the voice label and decreasing the weight of the set numerical value of the text label or the voice label that does not occur, the method further includes:
S81, judging whether the weight of a text label or voice label that no longer appears is zero or negative;
S82, if so, deleting the text label or the voice label.
As described in steps S81-S82 above, if a text label or voice label is no longer mentioned by the user, its weight is decreased continuously until it reaches zero or a negative number. When the weight of a text label or voice label drops to zero or below, it indicates that the user no longer prefers that label; the label is therefore deleted, avoiding pushing products the user is not interested in.
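Steps S81-S82 amount to pruning any label whose weight has decayed to zero or below; a short sketch (the label names and weights are illustrative):

```python
# Steps S81-S82 as a pruning pass over the portrait's label weights.
def prune_labels(weights):
    """Keep only labels with positive weight; a weight of zero or below means
    the user no longer prefers the label, so it is deleted."""
    return {label: w for label, w in weights.items() if w > 0}
```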
In one embodiment, before the step of counting the number of times of occurrence of the text tag or the voice tag in the chat data, the method further includes:
S051, judging whether the chat character label has increased within a set time period;
S052, if not, decreasing the weight of the chat character label by a set numerical value;
S053, if so, performing the step of counting the number of occurrences of the text label or the voice label in the chat data.
As described in steps S051-S053 above, the chat character label grows whenever expression data or image data appears; however, the user does not send expression or image data in every message, and may send it only for a period of time. It is therefore necessary to judge whether the chat character label in the user portrait has increased within a set time period, which may be 3 days, 5 days, or a week, set as the case requires. If the chat character label has not increased within the set time period, which indicates that the user has recently stopped favoring expression or image data, the weight of the chat character label is decreased by the set numerical value, until the weight drops to zero or below and the chat character label is deleted.
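The time-window decay of the chat character label could be sketched as below; the window and decay step are assumed values (the patent suggests 3 days, 5 days, or a week for the window and leaves the step open):

```python
# Illustrative decay of the chat character label (steps S051-S052).
SET_PERIOD_DAYS = 7   # assumed window; patent suggests 3 days, 5 days, or a week
DECAY = 0.5           # assumed "set numerical value" subtracted from the weight

def decay_chat_character(weight, days_since_last_increase):
    """If the label has not grown within the window, shrink its weight;
    returns None once the weight reaches zero or below, meaning: delete it."""
    if days_since_last_increase >= SET_PERIOD_DAYS:
        weight -= DECAY
    return weight if weight > 0 else None
```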
In one embodiment, the step of generating a link to push a product from the user representation includes:
S91, matching the corresponding pushed products according to each label in the user portrait;
S92, adding like or dislike options to the product to form a link of the product;
S93, generating an instruction for pushing the link of the product, and sending the instruction to a terminal chatting with the user.
As described in step S91 above, the user portrait includes text labels, voice labels, the chat character label, and the active label. Products are matched mainly against the text labels and voice labels: for example, if the text labels include a "fund" label, fund-related products are matched and pushed to the user. The chat character label and the active label are mainly used to determine the push frequency of the products.
As described in steps S92-S93 above, like/dislike options are added to the pushed product for the user to select; the user's selection further reveals the user's preference and facilitates subsequent pushes. Finally, an instruction for the link of the product carrying the options is generated. When the user chats with the robot, the instruction is sent to the robot, which pushes it to the user; when the user chats with the salesperson, the instruction is sent to the salesperson, who decides whether to push the product to the user, or selects a product to push according to the labels in the portrait.
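A sketch of steps S91-S93: building the optioned link and routing the push instruction to the robot or the salesperson. The URL shape and function names are invented for illustration:

```python
# Sketch of steps S92-S93; the URL shape and peer handling are invented.
def build_product_link(product_id):
    """Attach like/dislike options to the matched product's link (S92)."""
    return {"url": f"https://example.invalid/products/{product_id}",
            "options": ["like", "dislike"]}

def route_push(link, peer):
    """S93: a robot pushes the link immediately; a salesperson merely receives
    the instruction and decides whether (or what) to push."""
    if peer == "robot":
        return ("pushed", link)
    return ("offered_to_salesperson", link)
```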
In one embodiment, the step of generating a link to push a product from the user representation further comprises:
S10, judging whether the user selects an option in the product;
S11, if the user selects an option in the product, judging whether the user has a feedback label;
S12, if the user has a feedback label, increasing the weight of the feedback label by a set numerical value, and if the user does not have a feedback label, adding the feedback label;
S13, if the user does not select an option in the product, judging whether the user opens the product;
S14, if the user opens the product, increasing the weight of the label corresponding to the product by a set numerical value;
S15, if the user does not open the product, keeping the weight of the label corresponding to the product unchanged.
As described in steps S10-S12 above, a product carrying like/dislike options is pushed to the user, who may or may not make a selection. It is therefore judged whether the user selects an option in the pushed product. If so, the user has a habit of giving feedback: it is then judged whether the user already has a feedback label; if not, a feedback label is added to the user portrait, and if so, the weight of the feedback label is increased by a set numerical value to indicate that the feedback habit persists. Conversely, if no feedback is received within a preset time period after pushing, the weight of the feedback label is decreased, and the label is deleted once its weight reaches zero. The feedback label further reflects the user's degree of preference for products, helping the salesperson determine the user's preferences and push products accurately.
As described in steps S13-S15 above, if the user does not select an option in the product, the user may simply have forgotten to. The user's preference can then be judged by whether the user opens the pushed product. If the user opens it, the user may have some interest in it, so the weight of the text label or voice label corresponding to the product is increased, making it easier to push more products matching that label. If the user does not open it, the user may have forgotten, and interest cannot be determined; the weight of the label corresponding to the product is then kept unchanged.
In one embodiment, after the step of selecting the option in the product by the user, the method further comprises:
S111, judging whether the user selects "like" in the product;
S112, if so, increasing the weight of the label corresponding to the product by a set numerical value;
S113, if not, decreasing the weight of the label corresponding to the product by the set numerical value.
As described in steps S111-S113 above, if the user selects an option in the product, the user's preference is known unambiguously from the selected option: if the user selects "like", the user is interested in the product and the weight of the corresponding label is increased by a set numerical value; if the user selects "dislike", the user is not interested and the weight of the corresponding label is decreased. The set numerical value can be chosen according to the specific situation and is not limited here.
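Steps S10-S15 and S111-S113 together can be sketched as one response handler; the feedback and label step values are assumptions, and the portrait is modelled as a plain dict of weights:

```python
# Sketch of steps S10-S15 and S111-S113; FEEDBACK_STEP and LABEL_STEP are
# assumed "set numerical values".
FEEDBACK_STEP = 1.0
LABEL_STEP = 0.5

def process_push_response(portrait, selected_option, opened, product_label):
    """An option click maintains the feedback label and adjusts the matching
    label up (like) or down (dislike); with no click, merely opening the
    product still nudges the matching label upward."""
    if selected_option is not None:                       # S11-S12, S111-S113
        portrait["feedback"] = portrait.get("feedback", 0.0) + FEEDBACK_STEP
        delta = LABEL_STEP if selected_option == "like" else -LABEL_STEP
        portrait[product_label] = portrait.get(product_label, 0.0) + delta
    elif opened:                                          # S13-S14
        portrait[product_label] = portrait.get(product_label, 0.0) + LABEL_STEP
    # S15: no selection and not opened -> weights unchanged
    return portrait
```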
As shown in fig. 2, the present application also provides a product pushing device, including:
the acquisition module 1 is used for acquiring chat data of a user; the chat data comprises text data, voice data, expression data or image data;
the first judging module 2 is used for searching a user portrait database according to the information of the user to judge whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label representing that the user actively chats;
the second judging module 3 is used for judging whether the chatting data is character data or voice data when the user has the user portrait;
the increasing module 4 is used for increasing the weight of the chat character label by a set numerical value when the chat data is not text data or voice data;
the counting module 5 is used for counting the occurrence frequency of the text labels or the voice labels in the chat data when the chat data is text data or voice data;
the third judging module 6 is used for judging whether the occurrence frequency of the text label or the voice label is greater than a second set frequency;
the holding module 7 is used for holding the weight of the text label or the voice label unchanged when the occurrence frequency of the text label or the voice label is less than or equal to a second set frequency;
the updating module 8 is used for increasing the weight of the set numerical value of the character label or the voice label according to the frequency of the character label or the voice label when the frequency of the character label or the voice label is greater than a second set frequency, and reducing the weight of the set numerical value of the character label or the voice label which does not appear to finish the updating of the portrait of the user;
and the link module 9 is used for generating a link of a pushed product according to the updated user portrait and sending the link to a terminal for chatting with the user.
In one embodiment, further comprising:
the software acquisition module is used for acquiring the target chat software used by the user;
the configuration list module is used for acquiring a configuration list corresponding to the target chat software according to the target chat software;
and the adding module is used for adding the configuration list and accessing the target chat software.
In one embodiment, further comprising:
the generating module is used for generating a plurality of labels according to the chat data and generating the plurality of labels and the information of the user to form a user portrait when the user does not have the user portrait; wherein the labels comprise a text label, a voice label, a chat character label and an active label representing active chat of the user.
In one embodiment, when the chat data is text data, the generating module includes:
the sensitive word matching unit is used for matching the chatting data with preset sensitive words in a database;
a matching success unit, configured to count the occurrence times of the preset sensitive words in the chat data when matching is successful;
the first set frequency judging unit is used for judging whether the frequency of the preset sensitive words is greater than a first set frequency;
and the text label unit is used for taking the preset sensitive word as a text label when the occurrence frequency of the preset sensitive word is greater than a first set frequency.
In one embodiment, when the chat data is voice data, the generating module includes:
a digital signal unit for converting an analog sound signal in the voice data into a digital signal;
the matching unit is used for matching the digital signals with preset digital signals one by one; the preset digital signal is a digital signal of a specified vocabulary;
the target voice unit is used for extracting voice data corresponding to the successfully matched digital signals as target voice data when the matching is successful;
and the voice tag unit is used for taking the target voice data as a voice tag.
In one embodiment, when the chat data includes image data, the generating module includes:
the sending unit is used for sending the image data to an auditing end so that an auditor can audit the image data;
the receiving unit is used for receiving the auditing result sent by the auditing end;
and the chat character tag unit is used for generating a chat character tag which represents the character of the user when the auditing result is passed.
In one embodiment, further comprising:
the chat judging module is used for judging whether the user actively conducts chat;
and the active tag module is used for generating an active tag when a user actively chats.
In one embodiment, further comprising:
the active judgment module is used for judging whether the user actively conducts chat;
the active tag adding module is used for adding the weight of the set numerical value of the active tag when a user actively chats;
the passive times module is used for counting the passive times of the user when the user is in a passive chat and judging whether the passive times of the user reach a third set time;
and the active tag reduction module is used for reducing the weight of the set numerical value of the active tag when the passive times of the user reach a third set time.
In one embodiment, further comprising:
the weight judging module is used for judging whether the weight of the character label or the voice label which does not appear is zero or a negative number;
and the deleting module is used for deleting the character label or the voice label when the weight of the character label or the voice label which does not appear is zero or a negative number.
In one embodiment, further comprising:
the chat character label judging module is used for judging whether the chat character labels are increased in a set time period;
the chat character label reducing module is used for reducing the weight of the chat character label by a set numerical value when the chat character label has not increased within a set time period;
and the execution unit is used for counting the number of occurrences of the text label or the voice label in the chat data when the chat character label has increased within the set time period.
In one embodiment, the link module 9 includes:
the product matching unit is used for matching the corresponding pushed products according to each label in the user portrait;
the adding unit is used for adding favorite or disliked options into the product to form a link of the product;
and the link instruction unit is used for generating a pushed link instruction and sending the instruction to a terminal for chatting with the user.
In one embodiment, further comprising:
the selection module is used for judging whether the user selects the options in the product;
the feedback label module is used for judging whether the user has a feedback label or not when the user selects an option in the product;
the feedback label judging module is used for increasing the weight of the set numerical value of the feedback label when the user has the feedback label, and adding the feedback label when the user does not have the feedback label;
the opening module is used for judging whether the user opens the product or not when the user does not select the option in the product;
the product corresponding label module is used for increasing the weight of a label setting numerical value corresponding to the product when a user opens the product;
and the label weight keeping module is used for keeping the weight of the label corresponding to the product unchanged when the product is not opened by a user.
In one embodiment, further comprising:
the like module is used for judging whether the user selects like in the products;
the corresponding label weight increasing module is used for increasing the weight of the label setting value corresponding to the product when the user selects the favorite in the product;
and the corresponding label weight reducing module is used for reducing the weight of the label setting value corresponding to the product when the user selects dislike in the product.
The above units and modules are all used to correspondingly execute each step in the above product pushing method, and the specific implementation manner thereof is described with reference to the above method embodiment, and will not be described again here.
As shown in fig. 3, the present application also provides a computer device, which may be a server, and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is used to provide computation and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The database of the computer device is used for storing all data required by the product pushing method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a product pushing method.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is only a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects may be applied.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements any one of the above product pushing methods.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program, which may be stored on a nonvolatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium provided herein and used in the examples may include nonvolatile and/or volatile memory. Nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A product push method, comprising:
obtaining chat data of a user; the chat data comprises character data, voice data, expression data or image data;
searching a user portrait database according to the information of the user to judge whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label representing that the user actively chats;
if the user has the user portrait, judging whether the chatting data is character data or voice data;
if the chat data is not text data or voice data, increasing the weight of the chat character label set numerical value;
if the chat data is text data or voice data, counting the occurrence times of the text labels or the voice labels in the chat data;
judging whether the occurrence frequency of the text label or the voice label is greater than a second set frequency;
if the occurrence frequency of the text label or the voice label is less than or equal to a second set frequency, keeping the weight of the text label or the voice label unchanged;
if the number of times of the appearance of the character label or the voice label is larger than a second set number of times, increasing the weight of the set numerical value of the character label or the voice label according to the number of times of the appearance of the character label or the voice label, and reducing the weight of the set numerical value of the character label or the voice label which does not appear to finish the updating of the portrait of the user;
and generating a link for pushing a product according to the updated user portrait, and sending the link to a terminal chatting with the user.
2. The product pushing method according to claim 1, wherein after the step of searching the user portrait database according to the information of the user to judge whether the user has a user portrait, the method further comprises:
if the user does not have the user portrait, generating a plurality of labels according to the chat data, and generating the plurality of labels and the user information to form the user portrait; wherein the labels comprise a text label, a voice label, a chat character label and an active label representing active chat of the user.
3. The product pushing method according to claim 2, wherein when the chat data is text data, the step of generating a plurality of tags according to the chat data includes:
matching the chat data with preset sensitive words in a database;
if the matching is successful, counting the occurrence times of the preset sensitive words in the chat data;
judging whether the occurrence frequency of the preset sensitive words is greater than a first set frequency;
and if so, using the preset sensitive word as a character label.
4. The product pushing method according to claim 2, wherein when the chat data is voice data, the step of generating a plurality of tags according to the chat data includes:
converting an analog sound signal in the voice data into a digital signal;
matching the digital signals with preset digital signals one by one; the preset digital signal is a digital signal of a specified vocabulary;
if the matching is successful, extracting the voice data corresponding to the successfully matched digital signal as target voice data;
and taking the target voice data as a voice tag.
5. The product pushing method according to claim 2, wherein when image data is included in the chat data, the step of generating a plurality of tags from the chat data includes:
sending the image data to an auditing end so that an auditor can audit the image data conveniently;
receiving an auditing result sent by an auditing end;
and if the verification result is that the user passes, generating a chat character label representing the character of the user.
6. The product push method of claim 1, wherein the step of generating a link to push a product from the user representation includes:
matching the corresponding pushed products according to each label in the user portrait;
adding favorite or disliked options into the product to form a link of the product;
and generating an instruction for pushing the link of the product, and sending the instruction to a terminal for chatting with the user.
7. The product push method of claim 1, further comprising, after the step of generating an instruction to push a link to a product from the user representation:
judging whether a user selects options in the product or not;
if the user selects the option in the product, judging whether the user has a feedback label;
if the user has a feedback label, increasing the weight of the set numerical value of the feedback label, and if the user does not have the feedback label, adding the feedback label;
if the user does not select the option in the product, judging whether the user opens the product;
if the user opens the product, increasing the weight of the label setting value corresponding to the product;
and if the user does not open the product, the weight of the label corresponding to the product is kept unchanged.
8. A product pusher device, comprising:
the obtaining module is used for obtaining the chatting data of the user; the chat data comprises character data, voice data, expression data or image data;
the first judgment module is used for searching the user portrait database according to the information of the user to judge whether the user has a user portrait; wherein the user portrait comprises a text label, a voice label, a chat character label and an active label representing that the user actively chats;
the second judging module is used for judging whether the chatting data is text data or voice data when the user has the user portrait;
the increasing module is used for increasing the weight of the set numerical value of the chat character label when the chat data is not the text data or the voice data;
the counting module is used for counting the occurrence frequency of the text labels or the voice labels in the chat data when the chat data is text data or voice data;
the third judging module is used for judging whether the occurrence frequency of the text label or the voice label is greater than a second set frequency;
the holding module is used for keeping the weight of the text label or the voice label unchanged when the occurrence frequency of the text label or the voice label is less than or equal to a second set frequency;
the updating module is used for increasing the weight of the set numerical value of the text label or the voice label according to the frequency of the text label or the voice label when the frequency of the text label or the voice label is greater than a second set frequency, and reducing the weight of the set numerical value of the text label or the voice label which does not appear to finish the updating of the portrait of the user;
and the link module is used for generating a link for pushing a product according to the updated user portrait and sending the link to a terminal for chatting with the user.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110729349.9A 2021-06-29 2021-06-29 Product pushing method, device, equipment and medium Pending CN113536117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729349.9A CN113536117A (en) 2021-06-29 2021-06-29 Product pushing method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN113536117A true CN113536117A (en) 2021-10-22

Family

ID=78097280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729349.9A Pending CN113536117A (en) 2021-06-29 2021-06-29 Product pushing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113536117A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489447A (en) * 2022-03-28 2022-05-13 山东大学 Word processing control method and system based on user behavior and readable storage medium
CN117520591A (en) * 2024-01-04 2024-02-06 济南辰阳信息技术有限公司 Network information technology consultation and communication platform for synchronization based on image analysis

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126582A (en) * 2016-06-20 2016-11-16 乐视控股(北京)有限公司 Recommend method and device
CN108831456A (en) * 2018-05-25 2018-11-16 深圳警翼智能科技股份有限公司 It is a kind of by speech recognition to the method, apparatus and system of video marker
CN110633413A (en) * 2019-08-26 2019-12-31 浙江大搜车软件技术有限公司 Label recommendation method and device, computer equipment and storage medium
CN111797210A (en) * 2020-03-03 2020-10-20 中国平安人寿保险股份有限公司 Information recommendation method, device and equipment based on user portrait and storage medium
CN111583023A (en) * 2020-05-07 2020-08-25 中国工商银行股份有限公司 Service processing method, device and computer system
CN112328849A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 User portrait construction method, user portrait-based dialogue method and device
CN112948526A (en) * 2021-02-01 2021-06-11 大箴(杭州)科技有限公司 User portrait generation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Haoxuan: "If You Can't Chat, Don't Say You Understand Sales", Tianjin Science and Technology Press *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489447A (en) * 2022-03-28 2022-05-13 山东大学 Word processing control method and system based on user behavior and readable storage medium
CN114489447B (en) * 2022-03-28 2022-07-12 山东大学 Word processing control method and system based on user behavior and readable storage medium
CN117520591A (en) * 2024-01-04 2024-02-06 济南辰阳信息技术有限公司 Network information technology consultation and communication platform for synchronization based on image analysis

Similar Documents

Publication Publication Date Title
US9245225B2 (en) Prediction of user response actions to received data
CN112380331A (en) Information pushing method and device
CN108809809B (en) Message sending method, computer device and storage medium
CN109819015B (en) Information pushing method, device and equipment based on user portrait and storage medium
CN112671886B (en) Information pushing method based on edge calculation and artificial intelligence and big data server
CN113536117A (en) Product pushing method, device, equipment and medium
CN110709852A (en) Automatic upselling in customer conversations
US11436446B2 (en) Image analysis enhanced related item decision
CN110019703B (en) Data marking method and device and intelligent question-answering method and system
CN113360622A (en) User dialogue information processing method and device and computer equipment
CN110597951B (en) Text parsing method, text parsing device, computer equipment and storage medium
CN112532507B (en) Method and device for presenting an emoticon, and for transmitting an emoticon
CN116541504A (en) Dialog generation method, device, medium and computing equipment
CN112671885A (en) Information analysis method based on cloud computing and big data and digital financial service platform
CN112087473A (en) Document downloading method and device, computer readable storage medium and computer equipment
CN116431912A (en) User portrait pushing method and device
CN115577171A (en) Information pushing method and device, electronic equipment and storage medium
CN113111157B (en) Question-answer processing method, device, computer equipment and storage medium
JP2023018360A (en) Information processing system, chatbot system, information management method, and program
CN114238798A (en) Search ranking method, system, device and storage medium based on neural network
CN114003699A (en) Method and device for matching dialect, electronic equipment and storage medium
CN109325234B (en) Sentence processing method, sentence processing device and computer readable storage medium
CN111859191A (en) GIS service aggregation method, device, computer equipment and storage medium
EP3660726A1 (en) Method for determining a conversational agent on a terminal
CN109977176B (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-10-22