CN111062332A - Information pushing method and device - Google Patents
- Publication number: CN111062332A
- Application number: CN201911311707.3A
- Authority: CN (China)
- Prior art keywords: binding, emotion, face image, target, voice data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Abstract
The invention discloses an information pushing method and device. The method comprises: acquiring voice data and a face image of a current object; determining a target item from the voice data; determining the emotion type of the current object according to the face image; binding the emotion type with the target item to obtain a first binding result; and sending the first binding result to a first object, wherein the distance between the first object and the current object is smaller than a first threshold. The invention solves the technical problem, in the related art, that a customer's emotion toward a commodity cannot be determined accurately.
Description
Technical Field
The invention relates to the field of intelligent equipment, in particular to an information pushing method and device.
Background
In the related art, when a customer voices an opinion about goods during a sale, the customer's emotion can be analyzed only by staff. Because such opinions are uncertain and new staff have limited analysis skills, a customer's hidden emotion toward a commodity cannot be analyzed accurately, so the customer's emotion toward the commodity cannot be determined accurately and effectively during service.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an information pushing method and device, so as to at least solve the technical problem, in the related art, that the emotion of a customer toward a commodity cannot be obtained accurately.
According to an aspect of the embodiments of the present invention, there is provided an information pushing method, including: acquiring voice data and a face image of a current object; determining a target item from the voice data; determining the emotion type of the current object according to the face image; binding the emotion type with the target item to obtain a first binding result; and sending the first binding result to a first object, wherein the distance between the first object and the current object is smaller than a first threshold.
As an optional example, the determining the target item from the voice data includes: converting the voice data into text data; and performing word segmentation on the text data, and determining the target item from the word segmentation result.
As an alternative example, the determining the emotion type of the current object according to the face image includes: inputting the face image into a target recognition model, wherein the target recognition model is obtained by training an original recognition model with sample pictures, the recognition accuracy of the target recognition model is greater than a first threshold, and each sample picture is a face image annotated with an emotion type.
As an optional example, after the binding of the emotion type with the target item to obtain the first binding result, the method further includes: acquiring the identity of the current object; and storing the first binding result into a storage table corresponding to the identity, wherein the storage table stores a plurality of binding results, a current binding result in the plurality of binding results comprises a first article and a first emotion, and the first emotion is the emotion of the current object toward the first article.
As an optional example, after the storing the first binding result in the storage table corresponding to the identity, the method further includes: receiving a query instruction sent by the first object, wherein the query instruction comprises an identity of the current object; and returning the plurality of binding results in the storage table to the first object.
According to another aspect of the embodiments of the present invention, there is also provided an information pushing apparatus, including: a first acquisition unit, configured to acquire voice data and a face image of a current object; a first determining unit, configured to determine a target item from the voice data; a second determining unit, configured to determine an emotion type of the current object according to the face image; a binding unit, configured to bind the emotion type with the target item to obtain a first binding result; and a sending unit, configured to send the first binding result to a first object, where a distance between the first object and the current object is smaller than a first threshold.
As an alternative example, the first determining unit includes: a conversion module, configured to convert the voice data into text data; and a word segmentation module, configured to perform word segmentation on the text data and determine the target item from the word segmentation result.
As an optional example, the second determining unit includes: an input module, configured to input the face image into a target recognition model, wherein the target recognition model is obtained by training an original recognition model with sample pictures, the recognition accuracy of the target recognition model is greater than a first threshold, and each sample picture is a face image labeled with an emotion type.
As an optional example, the apparatus further includes: a second obtaining unit, configured to obtain an identity of the current object after the emotion type is bound with the target item to obtain a first binding result; and a storage unit, configured to store the first binding result in a storage table corresponding to the identity, where the storage table stores multiple binding results, a current binding result of the multiple binding results includes a first item and a first emotion, and the first emotion is the emotion of the current object toward the first item.
As an optional example, the apparatus further includes: a receiving unit, configured to receive an inquiry instruction sent by the first object after the first binding result is stored in a storage table corresponding to the identity, where the inquiry instruction includes the identity of the current object; a returning unit, configured to return the plurality of binding results in the storage table to the first object.
In the embodiment of the invention, voice data and a face image of a current object are acquired; a target item is determined from the voice data; the emotion type of the current object is determined according to the face image; the emotion type is bound with the target item to obtain a first binding result; and the first binding result is sent to a first object, where the distance between the first object and the current object is smaller than a first threshold. In this manner, a device can collect the face image and voice data of the current object, determine an item from the voice data, and analyze the customer's emotion toward that item from the face image, which improves the accuracy with which the emotion is determined. This in turn solves the technical problem, in the related art, that a customer's emotion toward a commodity cannot be obtained accurately.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic flow chart of an alternative information pushing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an alternative information pushing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, an information pushing method is provided. As an optional implementation, the information pushing method includes:
S102, acquiring voice data and a face image of a current object;
S104, determining a target item from the voice data;
S106, determining the emotion type of the current object according to the face image;
S108, binding the emotion type with the target item to obtain a first binding result;
S110, sending the first binding result to a first object, wherein the distance between the first object and the current object is smaller than a first threshold.
Optionally, the information pushing method may be applied to, but is not limited to, a terminal capable of computing data, for example, a mobile phone, a tablet computer, a notebook computer, or a PC, where the terminal may interact with a server through a network; it may also be applied to other mobile hardware, such as a smart badge, smart glasses, or a smart bracelet. The network may include, but is not limited to, a wireless network or a wired network, where the wireless network includes Wi-Fi and other networks enabling wireless communication, and the wired network may include, but is not limited to, wide area networks, metropolitan area networks, and local area networks. The server may include, but is not limited to, any hardware device capable of performing computation.
Alternatively, the present scheme may be applied, but is not limited, to a sales scenario. For example, while a customer communicates with staff, the customer's speech and face image are collected; an item is extracted from the speech, the emotion shown in the face image is analyzed, and the customer's emotional tendency toward that item is obtained. Machine analysis is more efficient and accurate than a worker's own perception, especially for less experienced workers, and can thus provide auxiliary help to staff.
Optionally, in the scheme, a worker may carry an intelligent terminal that collects the voice data and face image of the current object. After collection, the terminal may determine the first binding result itself and prompt it to the first object, or it may send the data to a server, which analyzes the data and returns the first binding result for the terminal to prompt to the first object.
Optionally, in the present solution, a microphone may be used to collect the voice data of the current object, where the distance between the first object and the current object is smaller than the first threshold, e.g., the first object is 1 m away from the current object. After the voice data is obtained, it can be converted into text data, word segmentation is performed on the text data, and the target item is determined from the segmentation result. For example, if the collected voice data of the current object is "How does this one-piece dress sell?", speech recognition can identify the target item as the one-piece dress.
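A minimal sketch of this step, assuming the voice data has already been transcribed to Chinese text by an ASR front end and using the open-source jieba segmenter; the product lexicon and function name are illustrative assumptions, not part of the disclosure:

```python
from typing import Optional

import jieba  # third-party Chinese word-segmentation library

# Illustrative product lexicon; a real deployment would use the store's catalog.
PRODUCT_KEYWORDS = {"连衣裙", "苹果", "梨"}  # one-piece dress, apple, pear

def extract_target_item(transcript: str) -> Optional[str]:
    """Segment the transcribed text and return the first known product, if any."""
    for token in jieba.lcut(transcript):
        if token in PRODUCT_KEYWORDS:
            return token
    return None

# "这条连衣裙怎么卖" -- "How does this one-piece dress sell?"
print(extract_target_item("这条连衣裙怎么卖"))  # -> 连衣裙
```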
Meanwhile, the face image of the current object can be collected, and the target recognition model is used to recognize the face image to obtain its emotion type. The target recognition model is obtained by training an original recognition model with sample pictures; its recognition accuracy is greater than a first threshold, and each sample picture is a face image annotated with an emotion type. When the face image is input into the target recognition model, the model outputs the emotion of the current object.
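A sketch of the inference step, assuming the target recognition model is a trained PyTorch classifier; the label set, tensor layout, and function name are illustrative assumptions rather than the disclosed model:

```python
import torch
import torch.nn.functional as F

EMOTION_LABELS = ["happy", "neutral", "dislike"]  # illustrative label set

def classify_emotion(model: torch.nn.Module, face: torch.Tensor) -> str:
    """Classify a preprocessed face image with the trained recognition model.

    `face` is assumed to be a (1, 3, H, W) tensor that has already been
    cropped to the face region and normalized.
    """
    model.eval()
    with torch.no_grad():
        logits = model(face)              # shape: (1, len(EMOTION_LABELS))
        probs = F.softmax(logits, dim=1)  # per-emotion probabilities
    return EMOTION_LABELS[int(probs.argmax(dim=1))]
```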
The item identified from the voice data (the one-piece dress) and the emotion type recognized from the face image (e.g., dislike) are bound to obtain a first binding result, which is sent to the first object, i.e., a worker; the worker then serves the current object according to the first binding result.
Optionally, in this scheme, after the first binding result is obtained, an identity of the current object may also be obtained. For example, each visiting customer is numbered (e.g., 001), and the first binding result is stored in a storage table together with the identity of the current object. The table may record the current object's preferences for multiple items, such as apples and pears. After a query instruction sent by the first object is received, the binding results in the storage table can be returned to the first object.
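A minimal sketch of the storage table and the query flow, assuming an in-memory table keyed by the identity; the names and example data are illustrative:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Storage table: identity -> list of (item, emotion) binding results.
storage_table: Dict[str, List[Tuple[str, str]]] = defaultdict(list)

def store_binding(identity: str, item: str, emotion: str) -> None:
    """Store a binding result under the current object's identity."""
    storage_table[identity].append((item, emotion))

def query_bindings(identity: str) -> List[Tuple[str, str]]:
    """Handle a query instruction carrying the identity of the current
    object: return all binding results recorded for that identity."""
    return list(storage_table[identity])

store_binding("001", "apple", "like")
store_binding("001", "pear", "dislike")
print(query_bindings("001"))  # -> [('apple', 'like'), ('pear', 'dislike')]
```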
Note that the prompt may be presented on a display, or through an earphone carried by the first object; only the first object can see or hear it, so the prompt is imperceptible to the current object. In addition, although identity tags of different objects are recorded in the scheme, identifying information of the current object is not stored.
This is described below with reference to a specific example.
The microphone of the smart badge collects the customer's audio track; an emotion algorithm is built into the badge; the badge's earphone informs the wearer of the customer's emotion; and the badge uploads the audio track and the emotion to a platform for storage over 4G/Wi-Fi.
In this scheme, a camera captures the face image from which the emotion data is analyzed, and a recording microphone captures the voice data; after the face image is acquired, face recognition is performed and an identity tag is set.
The platform processes the big data in real time (ASR, NLP, and a knowledge graph) and analyzes the customer's demand based on the emotion analysis result and the semantic text corresponding to the audio track, yielding a processing strategy for the current scene. In this scheme, deep learning is used for the analysis to obtain the customer's preference information as an emotional tendency value for a given commodity (a calculation strategy is preset that maps the value to a 1-10 scale; the higher the value, the more satisfied with, or fond of, the commodity the customer is). Product features can also be refined, e.g., dish A - spicy, car A - turbo (product keywords are constructed in advance, and when a product keyword appears in the voice data, its emotional tendency value is analyzed from the corresponding emotion data).
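A sketch of the preset calculation strategy, under the assumption that the recognized emotion is mapped to a discrete 1-10 value and paired with any pre-built product keyword found in the voice data; the mapping and keyword set are illustrative assumptions:

```python
from typing import List, Tuple

# Preset strategy: recognized emotion -> 1-10 emotional tendency value
# (higher = more satisfied/preferred). The values here are illustrative.
EMOTION_TO_TENDENCY = {"happy": 8, "neutral": 5, "dislike": 2}

# Product keywords constructed in advance, optionally feature-refined.
PRODUCT_KEYWORDS = {"dish A", "car A", "commodity A1", "commodity A2"}

def keyword_tendencies(tokens: List[str], emotion: str) -> List[Tuple[str, int]]:
    """Pair each product keyword appearing in the voice data with the
    tendency value derived from the simultaneous emotion data."""
    value = EMOTION_TO_TENDENCY.get(emotion, 5)  # default to neutral
    return [(tok, value) for tok in tokens if tok in PRODUCT_KEYWORDS]

print(keyword_tendencies(["commodity A1", "looks", "great"], "happy"))
# -> [('commodity A1', 8)]
```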
Constructing a customer preference model includes: recording the face recognition result, i.e., recording the identity tag and the time of each in-store customer whose face image is acquired;
and transmitting the preference information to the customer preference model in real time, so that its data is updated in real time. For example, after customer A enters the store at time T1, the voice data and emotion data are analyzed to give customer A an emotional tendency value of 8 for commodity A, and the customer preference model is updated in real time as:
at time T1, customer A (identified by the identity tag) has an emotional tendency value of 8 for commodity A.
If the features of commodity A were refined in advance into type A1 (spicy) and type A2 (not spicy), then when the voice data refers to type A1 the emotional tendency value analyzed at that moment is 8, and when the voice data refers to type A2 the value analyzed at that moment is 2; the customer preference model is then updated in real time as:
at time T1, customer A (identified by the identity tag) has an emotional tendency value of 8 for item A1 and an emotional tendency value of 2 for item A2 (the time here is a preset period, e.g., 1 min; i.e., while a conversation is occurring, the data of the customer preference model is automatically updated every 1 min).
After customer A enters the store at time T2, the voice data and emotion data are analyzed to give an emotional tendency value of 6 for commodity B, while the value for commodity A has dropped to 2; the customer preference model is then updated in real time as: at time T2, customer A (identified by the identity tag) has an emotional tendency value of 2 for commodity A and an emotional tendency value of 6 for commodity B.
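A minimal sketch of the real-time update, assuming the customer preference model is an in-memory mapping from (identity tag, item) to the latest timestamped tendency value:

```python
import time
from typing import Dict, Tuple

# Preference model: (identity tag, item) -> (timestamp, tendency value).
preference_model: Dict[Tuple[str, str], Tuple[float, int]] = {}

def update_preference(identity: str, item: str, value: int) -> None:
    """Real-time update: the newest tendency value overwrites the old one."""
    preference_model[(identity, item)] = (time.time(), value)

# T1: customer A values commodity A at 8; T2: A drops to 2 and B is 6.
update_preference("customer A", "commodity A", 8)
update_preference("customer A", "commodity A", 2)
update_preference("customer A", "commodity B", 6)
```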
The customer's demand is then analyzed based on the data stored in the customer preference model to obtain demand information. For example, in a restaurant, the customer's emotional tendency values for the various dishes are compared, and the dishes with the higher values are taken as the customer's demand to obtain the demand information (the tendency values are updated in real time).
Likewise, in an automobile sales scenario, the customer's emotional tendency values for the various car models are compared, and the models with high values are taken as the customer's demand to obtain the demand information (the values are updated in real time). The merchant or clerk obtains this demand information in real time as the processing strategy.
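Continuing the preference-model sketch above, the demand information can be obtained by comparing the stored tendency values and taking the highest-valued items:

```python
from typing import List, Tuple

def demand_info(identity: str, top_n: int = 1) -> List[Tuple[str, int]]:
    """Take the items with the highest current tendency values as the
    customer's demand (the comparison described in the text above)."""
    items = [(item, value)
             for (cid, item), (_, value) in preference_model.items()
             if cid == identity]
    items.sort(key=lambda pair: pair[1], reverse=True)
    return items[:top_n]

print(demand_info("customer A"))  # -> [('commodity B', 6)]
```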
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiment of the present invention, an information pushing apparatus for implementing the information pushing method is also provided. As shown in fig. 2, the apparatus includes:
(1) the first acquisition unit is used for acquiring voice data and a face image of a current object;
(2) a first determination unit for determining a target item from the voice data;
(3) the second determining unit is used for determining the emotion type of the current object according to the face image;
(4) the binding unit is used for binding the emotion type with the target object to obtain a first binding result;
(5) and the sending unit is used for sending the first binding result to the first object, wherein the distance between the first object and the current object is smaller than a first threshold value.
The optional examples, application scenario, and the specific example described above for the method embodiment apply equally to this apparatus embodiment, and are not repeated here.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements shall also fall within the protection scope of the present invention.
Claims (10)
1. An information pushing method, comprising:
acquiring voice data and a face image of a current object;
determining a target item from the voice data;
determining the emotion type of the current object according to the face image;
binding the emotion type with the target item to obtain a first binding result;
and sending the first binding result to a first object, wherein the distance between the first object and the current object is smaller than a first threshold value.
2. The method of claim 1, wherein said determining a target item from said voice data comprises:
converting the voice data into text data;
and performing word segmentation on the text data, and determining the target item from the word segmentation result.
3. The method of claim 1, wherein determining the emotion type of the current object from the facial image comprises:
inputting the face image into a target recognition model, wherein the target recognition model is obtained by training an original recognition model by using a sample picture, the recognition accuracy of the target recognition model is greater than a first threshold value, and the sample picture comprises the face image with the labeled emotion type.
4. The method according to any one of claims 1 to 3, wherein after said binding said emotion type with said target item yields a first binding result, said method further comprises:
acquiring the identity of the current object;
and storing the first binding result into a storage table corresponding to the identity, wherein the storage table stores a plurality of binding results, a current binding result in the plurality of binding results comprises a first article and a first emotion, and the first emotion is the emotion of the current object toward the first article.
5. The method of claim 4, wherein after storing the first binding result in a storage table corresponding to the identity, the method further comprises:
receiving a query instruction sent by the first object, wherein the query instruction comprises the identity of the current object;
returning the plurality of binding results in the storage table to the first object.
6. An information pushing apparatus, comprising:
the first acquisition unit is used for acquiring voice data and a face image of a current object;
a first determining unit, configured to determine a target item from the voice data;
the second determining unit is used for determining the emotion type of the current object according to the face image;
the binding unit is used for binding the emotion type with the target item to obtain a first binding result;
a sending unit, configured to send the first binding result to a first object, where a distance between the first object and the current object is smaller than a first threshold.
7. The apparatus according to claim 6, wherein the first determining unit comprises:
the conversion module is used for converting the voice data into text data;
and the word segmentation module is used for performing word segmentation on the text data and determining the target item from the word segmentation result.
8. The apparatus according to claim 6, wherein the second determining unit comprises:
the input module is used for inputting the face image into a target recognition model, wherein the target recognition model is obtained by training an original recognition model by using a sample picture, the recognition accuracy of the target recognition model is greater than a first threshold value, and the sample picture comprises a face image with an annotated emotion type.
9. The apparatus of any one of claims 6 to 8, further comprising:
the second obtaining unit is used for obtaining the identity of the current object after the emotion type and the target object are bound to obtain a first binding result;
and the storage unit is used for storing the first binding result into a storage table corresponding to the identity, wherein the storage table stores a plurality of binding results, a current binding result in the plurality of binding results comprises a first article and a first emotion, and the first emotion is the emotion of the current object toward the first article.
10. The apparatus of claim 9, further comprising:
a receiving unit, configured to receive a query instruction sent by the first object after the first binding result is stored in a storage table corresponding to the identity, where the query instruction includes the identity of the current object;
a returning unit, configured to return the plurality of binding results in the storage table to the first object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911311707.3A CN111062332A (en) | 2019-12-18 | 2019-12-18 | Information pushing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111062332A true CN111062332A (en) | 2020-04-24 |
Family
ID=70301015
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911311707.3A Pending CN111062332A (en) | 2019-12-18 | 2019-12-18 | Information pushing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111062332A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858958A (en) * | 2019-01-17 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Aim client orientation method, apparatus, equipment and storage medium based on micro- expression |
CN109949071A (en) * | 2019-01-31 | 2019-06-28 | 平安科技(深圳)有限公司 | Products Show method, apparatus, equipment and medium based on voice mood analysis |
CN110223134A (en) * | 2019-04-28 | 2019-09-10 | 平安科技(深圳)有限公司 | Products Show method and relevant device based on speech recognition |
CN110379445A (en) * | 2019-06-20 | 2019-10-25 | 深圳壹账通智能科技有限公司 | Method for processing business, device, equipment and storage medium based on mood analysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112017015B (en) | Commodity information recommendation method, related device, equipment and storage medium | |
CN107563343B (en) | FaceID database self-improvement method based on face recognition technology | |
CN106445905B (en) | Question and answer data processing, automatic question-answering method and device | |
CN109558535A (en) | The method and system of personalized push article based on recognition of face | |
CN111027838A (en) | Crowdsourcing task pushing method, device, equipment and storage medium thereof | |
CN111209368A (en) | Information prompting method and device, computer readable storage medium and electronic device | |
CN112925973B (en) | Data processing method and device | |
US20200402076A1 (en) | Data processing method and apparatus, and storage medium | |
CN111784405A (en) | Off-line store intelligent shopping guide method based on face intelligent recognition KNN algorithm | |
JP2023507043A (en) | DATA PROCESSING METHOD, DEVICE, DEVICE, STORAGE MEDIUM AND COMPUTER PROGRAM | |
CN111539787A (en) | Information recommendation method, intelligent glasses, storage medium and electronic device | |
US20220108357A1 (en) | Retail intelligent display system and electronic displays thereof | |
CN108230033A (en) | For the method and apparatus of output information | |
CN109522947B (en) | Identification method and device | |
CN116739836A (en) | Restaurant data analysis method and system based on knowledge graph | |
WO2021129531A1 (en) | Resource allocation method, apparatus, device, storage medium and computer program | |
CN112418994B (en) | Commodity shopping guide method and device, electronic equipment and storage medium | |
CN111178923A (en) | Offline shopping guide method and device and electronic equipment | |
CN112712393A (en) | Method and device for adjusting house source price | |
KR101899193B1 (en) | Device and System for providing phone number service by providing advertisements using emotion analysis of customer and method thereof | |
CN111062332A (en) | Information pushing method and device | |
US20230107269A1 (en) | Recommender system using edge computing platform for voice processing | |
CN110942358A (en) | Information interaction method, device, equipment and medium | |
CN115187277A (en) | Intelligent marketing system based on electronic business card | |
CN109222510B (en) | Intelligent jewelry looks at pallet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2020-04-24