CN111652620A - Intelligent terminal interaction system - Google Patents
- Publication number
- CN111652620A (application number CN202010296805.0A)
- Authority
- CN
- China
- Prior art keywords
- user
- information
- shopping
- commodity
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/18—Payment architectures involving self-service terminals [SST], vending machines, kiosks or multimedia terminals
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0207—Discounts or incentives, e.g. coupons or rebates
- G06Q30/0224—Discounts or incentives, e.g. coupons or rebates based on user history
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
Abstract
The invention discloses an intelligent terminal interaction system comprising a payment identification module, a voice interaction module, an intelligent shopping guide module and a user side. The payment identification module identifies user information through the payment for commodities purchased by the user. The voice interaction module carries out a voice conversation with the user; after the user issues a voice instruction, it converts the voice instruction into a text instruction and sends the text instruction and the user information to the intelligent shopping guide module. The intelligent shopping guide module obtains the user's shopping information according to the user information, predicts the user's future shopping list through comparison and intelligent analysis of the shopping information, matches coupons, and recommends shopping information to the user according to the text instruction. The user side displays the recommended shopping information. Through face recognition and the language module, the intelligent terminal interaction system gives the machine a "listening, speaking and understanding" style of intelligent human-machine interaction and greatly facilitates users' shopping.
Description
Technical Field
The invention relates to the technical field of unmanned vending machines, in particular to an intelligent terminal interaction system.
Background
As consumption patterns change, unmanned vending machines appear more and more often in daily life. A vending machine can make up for shortages of human labor and adapt to changes in the consumption environment and consumption patterns. It can operate around the clock, requires little capital investment, occupies a small footprint, and offers a novel, convenient and fast shopping mode that attracts many curious young consumers.
However, existing unmanned vending machines lack intelligent modes of interaction.
Disclosure of Invention
In view of the above, in order to solve the above problems in the prior art, the present invention provides an intelligent terminal interaction system.
The invention solves the problems through the following technical means:
an intelligent terminal interaction system, comprising:
the payment identification module is used for identifying user information through payment of commodities purchased by a user;
the voice interaction module is used for carrying out a voice conversation with the user, converting the user's voice instruction, once issued, into a text instruction, and sending the text instruction and the user information to the intelligent shopping guide module;
the intelligent shopping guide module is used for acquiring shopping information of the user according to the user information, predicting a future shopping list of the user through comparison of the shopping information and intelligent analysis, matching coupons and recommending the shopping information to the user according to a text instruction;
and the user side is used for displaying the recommended shopping information.
Further, the payment identification module includes:
the camera is used for acquiring an image of the user and sending the image information to the WeChat face-scanning payment unit;
and the WeChat face-scanning payment unit is used for receiving the image information, performing face-scanning payment for the commodities purchased by the user through the image information, and identifying the user information.
Further, the voice interaction module comprises:
a voice device unit for providing input and output of sound;
the voice recognition unit is used for performing real-time voice recognition by using intelligent voice interaction;
the problem recording unit is used for recording the user's dialogue information, performing semantic analysis, and providing a hotword model for the self-learning platform to perform machine learning, improving the accuracy of speech recognition;
the self-learning platform unit is used for performing machine learning by using a deep learning algorithm and improving the recognition rate;
and the E-commerce unit is used for sending the text information converted from the voice, together with the user information, to the intelligent shopping guide module.
Further, the voice recognition unit performs signal noise reduction using an LMS adaptive filtering noise reduction algorithm, and the LMS adaptive filtering noise reduction algorithm specifically comprises:
1) initialization: given W(0), and 0 < μ < 1/λmax;
2) calculating the output value: y(k) = w(k)^T x(k);
3) calculating the estimation error: e(k) = d(k) − y(k);
4) updating the weights: w(k+1) = w(k) + μ e(k) x(k);
wherein w is the array of adaptive filter weight coefficients, updated once with each update of the estimation error e(k); y(k) is the actual output signal, d(k) is the ideal (desired) output signal, x(k) is the input signal, k is the sample index, μ is the convergence factor (step size), and λmax is the largest eigenvalue of the autocorrelation matrix of the input signal.
Further, the voice recognition unit performs signal noise reduction using a wiener filtering method, and the wiener filtering method specifically comprises:
first, for the image degradation process, the restored image f̂ is chosen to minimize the mean square error:
e² = E{(f − f̂)²}
wherein f̂ is the wiener-filtered image, E is the expected-value operator, f is the undegraded image, and min e² is the minimum mean square error; in the frequency domain the expression is:
F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] · G(u, v)
wherein:
H(u, v) represents the degradation function;
|H(u, v)|² = H*(u, v) H(u, v);
H*(u, v) represents the complex conjugate of H(u, v);
Sη(u, v) = |N(u, v)|² represents the power spectrum of the noise;
Sf(u, v) = |F(u, v)|² represents the power spectrum of the undegraded image;
N(u, v) is the noise function, G(u, v) is the sampled (degraded) image, and (u, v) are the points of the acquisition matrix.
Further, the self-learning platform unit improves the recognition rate by using a deep learning algorithm, specifically comprising:
if the function f(x, y) has first-order continuous partial derivatives, then for any point p(x0, y0) there exists a vector fx(x0, y0) i + fy(x0, y0) j, which is called the gradient of f(x, y) at p and is denoted grad f(x0, y0); accordingly, the directional derivative along a direction l with unit vector e_l = (cos α, cos β) is
∂f/∂l = fx(x0, y0) cos α + fy(x0, y0) cos β = grad f(x0, y0) · e_l;
the directional derivative is the slope of the function in a given direction, and the gradient points in the direction of greatest slope, its magnitude being the largest value of the directional derivative; therefore, descending in the direction opposite to the gradient decreases the function value fastest and reaches the minimum, so that the system becomes stable and the efficiency of deep learning is improved.
Further, the intelligent shopping guide module comprises:
the information acquisition module is used for acquiring, according to the user information, the user's shopping records and consumption list for each purchase on the shopping platform of the shopping mall;
the information analysis module is used for analyzing the user's shopping habits, counting the user's commodity consumption period and the time from the current purchase of a commodity to the next purchase of a similar commodity; building a consumption portrait of the user according to the collected user information; and acquiring shopping discount information of the shopping mall or a physical store and automatically judging the matching degree between the discount information and the user's shopping list;
the commodity recommending module is used for generating a list of commodities the user needs and recommending it to the user when the corresponding time point is reached, according to the user's past commodity usage durations and commodity consumption periods; recommending to the user commodities purchased by users with similar portraits, according to the user's consumption portrait; and generating a recommended shopping list and discount information and recommending them to the user when the user's shopping habits match the discounts, according to the matching degree between the discount information and the user's shopping list.
Further, the information analysis module includes:
the shopping period analysis unit is used for analyzing the shopping habits of the users, counting the commodity consumption period of the users and the time length of the current purchased commodity to the next purchased similar commodity;
the user portrait analyzing unit is used for making a consumption portrait of the user according to the collected user information;
and the discount information analysis unit is used for acquiring shopping discount information of the shopping mall or a physical store and automatically judging the matching degree between the discount information and the user's shopping list.
Further, the commodity recommending module comprises:
the periodic commodity recommending unit is used for generating a commodity list required by the user and recommending the commodity list to the user when the corresponding time point is reached according to the past commodity use duration and the commodity consumption period of the user;
the similar commodity recommending unit is used for recommending commodities purchased by the user with the similar portrait to the user according to the consumption portrait of the user;
and the preferential commodity recommendation unit is used for generating a recommended shopping list and recommending the discount information to the user, according to the matching degree between the discount information and the user's shopping list, when the user's shopping habits match the discounts.
Further, the recommendation method of the commodity recommendation module specifically comprises the following steps:
performing collaborative filtering recommendation on the basis of content recommendation: first, the user's preference for commodities is calculated to form a U-V matrix, wherein U is the user matrix and V is the commodity matrix; then the U-U and V-V similarities are calculated from the user attributes, and the U-V similarity from the user's preference for commodities; the similarities are calculated using the Manhattan distance and the Pearson correlation coefficient respectively;
the Manhattan distance is the sum of the absolute coordinate differences of two n-dimensional vectors a(x11, x12, … x1n) and b(x21, x22, … x2n) in a standard coordinate system:
d(a, b) = Σk |x1k − x2k|, k = 1, …, n
wherein k is the dimension index; after the values for all vector pairs are obtained, the smallest value indicates the highest similarity;
the Pearson correlation coefficient measures the linear correlation between two variables X and Y and ranges from −1 to 1, where −1 is complete negative correlation, 1 is complete positive correlation and 0 is no correlation; its formula is:
r = Σi (Xi − X̄)(Yi − Ȳ) / √( Σi (Xi − X̄)² · Σi (Yi − Ȳ)² ), i = 1, …, N
wherein X̄ and Ȳ are the mathematical expectations (means) of X and Y, and N is the number of values of each variable.
Compared with the prior art, the invention has at least the following beneficial effects:
through face recognition and the language module, the intelligent terminal interaction system gives the machine a "listening, speaking and understanding" style of intelligent human-machine interaction and greatly facilitates users' shopping.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an intelligent terminal interaction system of the present invention;
FIG. 2 is a timing diagram of the payment identification module of the present invention;
FIG. 3 is a schematic diagram of the structure of the voice interaction module of the present invention;
FIG. 4 is a schematic structural diagram of the intelligent shopping guide module according to the present invention;
FIG. 5 is a flow chart of the intelligent shopping guide module shopping guide method of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It should be noted that the described embodiments are only some of the embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art without inventive work on the basis of these embodiments fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, the present invention provides an intelligent terminal interaction system, which includes a payment recognition module, a voice interaction module, an intelligent shopping guide module, and a user side;
the payment identification module is used for identifying user information through payment of commodities purchased by a user; the user information comprises user personal information and a user commodity purchasing record;
the voice interaction module is used for carrying out a voice conversation with the user; after the user issues a voice instruction, it converts the voice instruction into a text instruction and sends the text instruction and the user information to the intelligent shopping guide module;
the intelligent shopping guide module is used for acquiring shopping information of a user according to the user information, predicting a future shopping list of the user through comparison of the shopping information and intelligent analysis, matching coupons and recommending the shopping information to the user according to a text instruction;
the user side is used for displaying recommended shopping information.
As shown in fig. 2, the payment recognition module includes a camera and a WeChat face-scanning payment unit;
the camera is used for acquiring an image of the user and sending the image information to the WeChat face-scanning payment unit;
the WeChat face-scanning payment unit is used for receiving the image information, performing face-scanning payment for the commodities purchased by the user through the image information, and identifying the user information.
Example 2
As shown in fig. 3, in this embodiment, based on embodiment 1, the voice interaction module includes a voice device unit, a voice recognition unit, a problem recording unit, a self-learning platform unit, and an e-commerce unit;
the voice equipment unit is used for providing input and output of sound;
the voice recognition unit is used for performing real-time voice recognition by using intelligent voice interaction;
the problem recording unit is used for recording the user's dialogue information, performing semantic analysis, and providing a hotword model for the self-learning platform to perform machine learning, improving the accuracy of speech recognition;
the self-learning platform unit is used for performing machine learning by using a deep learning algorithm so as to improve the recognition rate;
and the E-commerce unit is used for converting the voice into the text information and sending the text information and the user information to the intelligent shopping guide module.
The invention first converts spoken words into text through automatic speech recognition (ASR); natural language understanding (NLU) then learns the user's intention, and multi-round dialogue management (DM) asks follow-up questions to clarify it further. Once determined, the intention is converted into text information, which is processed with a forward-iteration finest-granularity word-segmentation algorithm to obtain the query; the e-commerce data is then queried, and finally the answer is spoken through text-to-speech synthesis (TTS).
Specifically, the voice recognition unit performs signal noise reduction using a noise reduction algorithm; two noise reduction algorithms are provided, LMS adaptive filtering and wiener filtering.
LMS adaptive filtering
1) initialization: given W(0), and 0 < μ < 1/λmax;
2) calculating the output value: y(k) = w(k)^T x(k);
3) calculating the estimation error: e(k) = d(k) − y(k);
4) updating the weights: w(k+1) = w(k) + μ e(k) x(k);
wherein w is the array of adaptive filter weight coefficients, updated once with each update of the estimation error e(k); y(k) is the actual output signal, d(k) is the ideal (desired) output signal, x(k) is the input signal, k is the sample index, μ is the convergence factor (learning rate), and λmax is the largest eigenvalue of the autocorrelation matrix of the input signal.
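As a minimal sketch of the update equations above (the test signal, filter order and step size are illustrative choices, not taken from the patent):

```python
import numpy as np

def lms_filter(x, d, order=8, mu=0.01):
    """LMS adaptive filter: w(k+1) = w(k) + mu * e(k) * x(k)."""
    w = np.zeros(order)               # W(0) = 0
    y = np.zeros(len(x))              # actual output y(k)
    e = np.zeros(len(x))              # estimation error e(k)
    for k in range(order, len(x)):
        xk = x[k - order:k][::-1]     # most recent `order` input samples
        y[k] = w @ xk                 # y(k) = w(k)^T x(k)
        e[k] = d[k] - y[k]            # e(k) = d(k) - y(k)
        w = w + mu * e[k] * xk        # weight update
    return y, e, w

# Toy demonstration: recover a sinusoid buried in white noise.
rng = np.random.default_rng(0)
t = np.arange(4000)
clean = np.sin(0.05 * t)                            # "ideal" signal d(k)
noisy = clean + 0.3 * rng.standard_normal(len(t))   # observed input x(k)
y, e, w = lms_filter(noisy, clean)
```

After adaptation, the mean squared error of the late samples should fall well below that of the early, unconverged samples, and the filter output should track the clean signal better than the raw noisy input does.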
Wiener filtering method
First, for the image degradation process, the restored image f̂ is chosen to minimize the mean square error:
e² = E{(f − f̂)²}
wherein f̂ is the wiener-filtered image, E is the expected-value operator, f is the undegraded image, and min e² is the minimum mean square error; in the frequency domain the expression is:
F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] · G(u, v)
wherein:
H(u, v) represents the degradation function;
|H(u, v)|² = H*(u, v) H(u, v);
H*(u, v) represents the complex conjugate of H(u, v);
Sη(u, v) = |N(u, v)|² represents the power spectrum of the noise;
Sf(u, v) = |F(u, v)|² represents the power spectrum of the undegraded image;
N(u, v) is the noise function, G(u, v) is the sampled (degraded) image, and (u, v) are the points of the acquisition matrix.
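A compact sketch of the frequency-domain Wiener filter above. It replaces the noise-to-signal power ratio Sη/Sf with a constant K, a common simplification when the spectra are unknown; the image, blur kernel and K are illustrative assumptions:

```python
import numpy as np

def wiener_deconvolve(g, h, K=1e-3):
    """F_hat = [H* / (|H|^2 + K)] * G, evaluated with 2-D FFTs."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)    # degradation function H(u, v)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))

# Toy demonstration: blur a white square, add noise, then restore it.
rng = np.random.default_rng(1)
f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0        # undegraded image
h = np.zeros((32, 32)); h[:5, :5] = 1.0 / 25.0     # 5x5 box-blur kernel
g = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))  # circular blur
g = g + 0.01 * rng.standard_normal(f.shape)        # degraded, noisy result
f_hat = wiener_deconvolve(g, h)
```

The restored image should be closer to the original than the degraded one; K trades off noise amplification against sharpness at frequencies where |H| is small.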
Specifically, the self-learning platform unit improves the recognition rate by using a deep learning algorithm, specifically:
if the function f(x, y) has first-order continuous partial derivatives, then for any point p(x0, y0) there exists a vector fx(x0, y0) i + fy(x0, y0) j, which is called the gradient of f(x, y) at p and is denoted grad f(x0, y0); accordingly, the directional derivative along a direction l with unit vector e_l = (cos α, cos β) is
∂f/∂l = fx(x0, y0) cos α + fy(x0, y0) cos β = grad f(x0, y0) · e_l.
The directional derivative is the slope of the function in a given direction, and the gradient points in the direction of greatest slope; its magnitude is the largest value of the directional derivative. Therefore, descending in the direction opposite to the gradient decreases the function value fastest and reaches the minimum, so that the system becomes stable and the efficiency of deep learning is improved.
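The steepest-descent idea described above (step opposite the gradient to decrease the function fastest) can be sketched on a toy function; the quadratic, learning rate and step count are illustrative, not from the patent:

```python
def gradient_descent(grad, x0, y0, lr=0.1, steps=200):
    """Repeatedly step opposite the gradient, the direction of steepest descent."""
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad(x, y)          # grad f = (f_x, f_y) at the current point
        x, y = x - lr * gx, y - lr * gy
    return x, y

# f(x, y) = (x - 3)^2 + (y + 1)^2 has its minimum at (3, -1);
# its gradient is (2(x - 3), 2(y + 1)).
grad_f = lambda x, y: (2 * (x - 3), 2 * (y + 1))
x_min, y_min = gradient_descent(grad_f, 0.0, 0.0)
```

With lr = 0.1 each coordinate error shrinks by a factor 0.8 per step, so 200 steps converge to the minimizer to machine precision.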
Example 3
As shown in fig. 4 and 5, in this embodiment, on the basis of embodiment 1, the intelligent shopping guide module includes an information acquisition module, an information analysis module, and a commodity recommendation module;
the information acquisition module is used for acquiring, according to the user information, the user's shopping records and consumption list for each purchase on the shopping platform of the shopping mall;
the information analysis module is used for analyzing the user's shopping habits, counting the user's commodity consumption period and the time from the current purchase of a commodity to the next purchase of a similar commodity; building a consumption portrait of the user according to the collected user information; and acquiring shopping discount information of the shopping mall or a physical store and automatically judging the matching degree between the discount information and the user's shopping list;
the commodity recommending module is used for generating a list of commodities the user needs and recommending it to the user when the corresponding time point is reached, according to the user's past commodity usage durations and commodity consumption periods; recommending to the user commodities purchased by users with similar portraits, according to the user's consumption portrait; and generating a recommended shopping list and discount information and recommending them to the user when the user's shopping habits match the discounts, according to the matching degree between the discount information and the user's shopping list.
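The periodic recommendation described above can be sketched as follows; estimating the consumption period as the mean gap between purchases, and the sample dates, are illustrative assumptions rather than the patent's exact method:

```python
from datetime import date, timedelta

def next_purchase_estimate(purchase_dates):
    """Estimate the commodity consumption period as the mean gap between
    past purchases and predict when the commodity will next be needed."""
    dates = sorted(purchase_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    cycle = sum(gaps) / len(gaps)            # consumption period, in days
    return dates[-1] + timedelta(days=round(cycle))

# A user who buys the same commodity roughly every 30 days.
history = [date(2020, 1, 1), date(2020, 1, 31), date(2020, 3, 1)]
due = next_purchase_estimate(history)        # when to trigger the recommendation
```

When the predicted date arrives, the commodity would be placed on the recommended list for that user.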
Specifically, the information analysis module comprises a shopping period analysis unit, a user portrait analysis unit and a discount information analysis unit;
the shopping period analysis unit is used for analyzing the shopping habits of the user, counting the commodity consumption period of the user, and counting the time length of the current purchased commodity from the next purchased similar commodity;
the user portrait analyzing unit is used for making a consumption portrait of the user according to the collected user information;
the discount information analysis unit is used for acquiring shopping discount information of a shopping mall or an entity store and automatically judging the matching degree of the discount information and a user shopping list.
Specifically, the commodity recommending module comprises a periodic commodity recommending unit, a similar commodity recommending unit and a preferential commodity recommending unit;
the periodic commodity recommending unit is used for generating a commodity list required by the user and recommending the commodity list to the user when the corresponding time point is reached according to the previous commodity use duration and the commodity consumption period of the user;
the similar commodity recommending unit is used for recommending commodities purchased by the similar portrait user to the user according to the consumption portrait of the user;
and the preferential commodity recommending unit is used for generating a recommended shopping list and recommending the discount information to the user, according to the matching degree between the discount information and the user's shopping list, when the user's shopping habits match the discounts.
Specifically, the recommendation method of the commodity recommendation module comprises the following steps:
performing collaborative filtering recommendation on the basis of content recommendation: first, the user's preference for commodities is calculated to form a U-V matrix, wherein U is the user matrix and V is the commodity matrix; then the U-U and V-V similarities are calculated from the user attributes, and the U-V similarity from the user's preference for commodities; the similarities are calculated using the Manhattan distance and the Pearson correlation coefficient respectively;
the Manhattan distance is the sum of the absolute coordinate differences of two n-dimensional vectors a(x11, x12, … x1n) and b(x21, x22, … x2n) in a standard coordinate system:
d(a, b) = Σk |x1k − x2k|, k = 1, …, n
wherein k is the dimension index; after the values for all vector pairs are obtained, the smallest value indicates the highest similarity;
the Pearson correlation coefficient measures the linear correlation between two variables X and Y and ranges from −1 to 1, where −1 is complete negative correlation, 1 is complete positive correlation and 0 is no correlation; its formula is:
r = Σi (Xi − X̄)(Yi − Ȳ) / √( Σi (Xi − X̄)² · Σi (Yi − Ȳ)² ), i = 1, …, N
wherein X̄ and Ȳ are the mathematical expectations (means) of X and Y, and N is the number of values of each variable.
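The two similarity measures can be sketched directly from their formulas; the commodity-preference vectors below are illustrative:

```python
import math

def manhattan(a, b):
    """Sum of absolute coordinate differences; smaller means more similar."""
    return sum(abs(x - y) for x, y in zip(a, b))

def pearson(x, y):
    """Linear correlation of X and Y in [-1, 1]: cov(X, Y) / (std X * std Y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative commodity-preference vectors for three users.
u1, u2, u3 = [5, 3, 4], [4, 2, 3], [1, 5, 2]
d12, d13 = manhattan(u1, u2), manhattan(u1, u3)   # u1 is closer to u2
r12 = pearson(u1, u2)
```

Here d12 < d13, so by the Manhattan measure u1 is more similar to u2 than to u3, and the Pearson coefficient of u1 and u2 is 1 because their ratings differ only by a constant offset.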
Compared with the prior art, the invention has at least the following beneficial effects:
through face recognition and the language module, the intelligent terminal interaction system gives the machine a "listening, speaking and understanding" style of intelligent human-machine interaction and greatly facilitates users' shopping.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. An intelligent terminal interaction system, comprising:
the payment identification module is used for identifying user information through payment of commodities purchased by a user;
the voice interaction module is used for carrying out a voice conversation with the user, converting the user's voice instruction, once issued, into a text instruction, and sending the text instruction and the user information to the intelligent shopping guide module;
the intelligent shopping guide module is used for acquiring shopping information of the user according to the user information, predicting a future shopping list of the user through comparison of the shopping information and intelligent analysis, matching coupons and recommending the shopping information to the user according to a text instruction;
and the user side is used for displaying the recommended shopping information.
2. The intelligent terminal interaction system of claim 1, wherein the payment identification module comprises:
the camera is used for acquiring an image of the user and sending the image information to the WeChat face-scanning payment unit;
and the WeChat face-scanning payment unit is used for receiving the image information, performing face-scanning payment for the commodities purchased by the user through the image information, and identifying the user information.
3. The intelligent terminal interaction system according to claim 1, wherein the voice interaction module comprises:
a voice device unit for providing input and output of sound;
the voice recognition unit is used for performing real-time voice recognition by using intelligent voice interaction;
the problem recording unit is used for recording the user's dialogue information, performing semantic analysis, and providing a hotword model for the self-learning platform to perform machine learning, improving the accuracy of speech recognition;
the self-learning platform unit is used for performing machine learning by using a deep learning algorithm and improving the recognition rate;
and the E-commerce unit is used for transmitting the text information obtained from the voice conversion, together with the user information, to the intelligent shopping guide module.
4. The intelligent terminal interaction system according to claim 3, wherein the voice recognition unit performs signal noise reduction using an LMS adaptive filtering noise reduction algorithm, and the LMS adaptive filtering noise reduction algorithm specifically comprises:
1) giving W(0), and 0 < μ < 1/λmax;
2) calculating the output value: y(k) = W(k)^T x(k);
3) calculating the estimation error: e(k) = d(k) − y(k);
4) updating the weights: W(k+1) = W(k) + μ e(k) x(k);
wherein W is the adaptive filter weight coefficient vector, updated once with each update of the estimation error e(k); y(k) is the actual output signal, d(k) is the ideal output signal, x(k) is the input signal, k is the sample index of the input signal, μ is the convergence factor, and λmax is the largest eigenvalue of the autocorrelation matrix of the input signal.
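Steps 1) to 4) above can be sketched in Python with NumPy (a minimal illustration, not the patent's implementation; the function name and tap count are assumptions):

```python
import numpy as np

def lms_denoise(x, d, n_taps=8, mu=0.01):
    """LMS adaptive filter following steps 1)-4) above.

    x  -- input (reference) signal
    d  -- ideal (desired) signal
    mu -- convergence factor, 0 < mu < 1/lambda_max
    Returns the filter output y, the estimation error e, and the
    final weight vector w.
    """
    w = np.zeros(n_taps)                  # 1) initial weights W(0)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]        # most recent n_taps input samples
        y[k] = w @ xk                     # 2) output: y(k) = W(k)^T x(k)
        e[k] = d[k] - y[k]                # 3) error:  e(k) = d(k) - y(k)
        w = w + mu * e[k] * xk            # 4) update: W(k+1) = W(k) + mu e(k) x(k)
    return y, e, w
```

In noise-reduction use, d is typically the primary (microphone) signal and x a noise reference; as the weights converge, the estimation error shrinks.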
5. The intelligent terminal interaction system according to claim 3, wherein the voice recognition unit performs signal noise reduction using a wiener filtering method, and the wiener filtering method specifically is:
first, for the degraded image process, the minimum mean square error is written in the following form:
e² = min E{(f − f̂)²}
wherein f̂ is the wiener filtered (restored) image, E is the expected value operator, f is the undegraded image, and min denotes minimizing the mean square error; expressed in the frequency domain:
F̂(u,v) = [ (1/H(u,v)) · |H(u,v)|² / (|H(u,v)|² + Sη(u,v)/Sf(u,v)) ] G(u,v)
wherein:
H(u,v) represents the degradation function;
|H(u,v)|² = H*(u,v)H(u,v), wherein H*(u,v) represents the complex conjugate of H(u,v);
Sη(u,v) = |N(u,v)|² represents the power spectrum of the noise;
Sf(u,v) = |F(u,v)|² represents the power spectrum of the undegraded image;
N(u,v) is the noise function, G(u,v) is the sampling result (observed image) in the frequency domain, and u, v are the points of the acquisition matrix.
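The frequency-domain restoration above can be sketched as follows, assuming the noise-to-signal power ratio Sη/Sf is approximated by a constant k when the true power spectra are unknown (a common simplification; names are illustrative, not the patent's code):

```python
import numpy as np

def wiener_filter(g, h, k=0.01):
    """Frequency-domain Wiener restoration following the formula above.

    g -- degraded (observed) image, 2-D array
    h -- degradation kernel, zero-padded to the shape of g
    k -- constant approximating the noise-to-signal power ratio
         S_eta(u,v) / S_f(u,v)
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    H2 = np.abs(H) ** 2                  # |H(u,v)|^2 = H*(u,v) H(u,v)
    # (1/H) * |H|^2 / (|H|^2 + k) simplifies to H* / (|H|^2 + k)
    F_hat = np.conj(H) / (H2 + k) * G
    return np.real(np.fft.ifft2(F_hat))  # restored image f_hat
```

With k = 0 this reduces to inverse filtering, which amplifies noise wherever H(u,v) is small; the k term suppresses that amplification.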
6. The intelligent terminal interaction system of claim 3, wherein the self-learning platform unit uses a deep learning algorithm to improve recognition rate, specifically:
if the function f(x, y) has first-order continuous partial derivatives, then for any point P(x0, y0) there exists a vector fx(x0, y0)i + fy(x0, y0)j; this vector is called the gradient of f(x, y) at P, denoted grad f(x0, y0); therefore
∂f/∂l = fx(x0, y0)cos α + fy(x0, y0)cos β
wherein the unit vector along the direction l is (cos α, cos β). The directional derivative is the slope of the function in a given direction, and the gradient is the direction of the largest slope; the magnitude of the gradient is the largest value of the directional derivative. Therefore, descending in the direction opposite to the gradient decreases the function value fastest and reaches the lowest value, so that the system becomes stable and the efficiency of deep learning is improved.
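A minimal illustration of descending against the gradient direction, using the toy objective f(x, y) = x² + y² (an assumed example, not from the patent):

```python
import numpy as np

def gradient_descent(grad, p0, lr=0.1, steps=200):
    """Iteratively step opposite to the gradient, which is the
    direction of fastest decrease of the function value."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p - lr * grad(p)   # move against grad f
    return p

# f(x, y) = x^2 + y^2 has gradient (2x, 2y) and its minimum at (0, 0);
# descent from any starting point converges toward the origin.
grad_f = lambda p: 2 * p
p_min = gradient_descent(grad_f, [3.0, -4.0])  # p_min is very close to (0, 0)
```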
7. The intelligent terminal interaction system according to claim 1, wherein the intelligent shopping guide module comprises:
the information acquisition module is used for acquiring shopping records and consumption lists of the users on a shopping platform of a shopping mall each time according to the user information;
the information analysis module is used for analyzing the user's shopping habits, counting the user's commodity consumption period and the time from the current purchase of a commodity to the next purchase of a similar commodity; making a consumption portrait of the user according to the collected user information; and acquiring shopping discount information of the shopping mall or physical store and automatically judging the matching degree between the discount information and the user's shopping list;
the commodity recommending module is used for generating a commodity list required by the user and recommending it to the user when the corresponding time point is reached, according to the user's past commodity use duration and commodity consumption period; recommending to the user commodities purchased by users with similar portraits, according to the user's consumption portrait; and generating a recommended shopping list and recommending it together with the discount information to the user when the user's shopping habits match the discount, according to the matching degree between the discount information and the user's shopping list.
8. The intelligent terminal interaction system of claim 7, wherein the information analysis module comprises:
the shopping period analysis unit is used for analyzing the shopping habits of the users, counting the commodity consumption period of the users and the time length of the current purchased commodity to the next purchased similar commodity;
the user portrait analyzing unit is used for making a consumption portrait of the user according to the collected user information;
and the discount information analysis unit is used for acquiring shopping discount information of the shopping mall or physical store and automatically judging the matching degree between the discount information and the user's shopping list.
9. The intelligent terminal interaction system of claim 7, wherein the goods recommendation module comprises:
the periodic commodity recommending unit is used for generating a commodity list required by the user and recommending the commodity list to the user when the corresponding time point is reached according to the past commodity use duration and the commodity consumption period of the user;
the similar commodity recommending unit is used for recommending commodities purchased by the user with the similar portrait to the user according to the consumption portrait of the user;
and the discounted commodity recommendation unit is used for generating a recommended shopping list and recommending it together with the discount information to the user when the user's shopping habits match the discount, according to the matching degree between the discount information and the user's shopping list.
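The periodic recommendation logic above can be sketched as follows: estimate the consumption period from past purchase dates and predict the next recommendation date (a hypothetical sketch; the patent does not specify this computation, and all names are illustrative):

```python
from datetime import date, timedelta

def next_purchase_estimate(purchase_dates):
    """Estimate a commodity's consumption period from past purchase
    dates and predict when to recommend it again.  Returns None when
    there is not enough history to estimate a period."""
    if len(purchase_dates) < 2:
        return None
    ds = sorted(purchase_dates)
    gaps = [(later - earlier).days for earlier, later in zip(ds, ds[1:])]
    period = sum(gaps) / len(gaps)            # average consumption period in days
    return ds[-1] + timedelta(days=round(period))
```

A user who bought the same commodity roughly every 30 days would thus be reminded about 30 days after the most recent purchase.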
10. The intelligent terminal interaction system of claim 7, wherein the recommendation method of the commodity recommendation module specifically comprises:
the method comprises performing collaborative filtering recommendation on the basis of content recommendation: first, the user's preference for commodities is calculated to form a U-V matrix, wherein U is the user matrix and V is the commodity matrix; then the U-U and V-V similarities are calculated according to the user attributes, and the U-V similarity according to the user's preference for commodities; the methods used for calculating the similarity are the Manhattan distance and the Pearson correlation coefficient respectively;
the Manhattan distance is the sum of the absolute coordinate differences of two n-dimensional vectors a(x11, x12, … x1n) and b(x21, x22, … x2n) in a standard coordinate system:
d(a, b) = Σ |x1k − x2k|, k = 1, …, n
wherein k is the dimension index; after the values for all vector pairs are obtained, the minimum value corresponds to the highest similarity;
the Pearson correlation coefficient measures the linear correlation between two variables X and Y, and its value ranges from −1 to 1;
−1: complete negative correlation; 1: complete positive correlation; 0: uncorrelated; its formula is:
ρ(X, Y) = E[(X − μX)(Y − μY)] / (σX σY)
wherein E is the mathematical expectation, μ and σ denote the mean and standard deviation, and N represents the number of values of each variable (used when estimating the expectation from N samples).
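The two similarity measures can be sketched directly from their definitions (a minimal illustration, not the patent's code):

```python
import math

def manhattan(a, b):
    """Sum of the absolute coordinate differences of two n-dimensional
    vectors; a smaller distance means a higher similarity."""
    return sum(abs(x1 - x2) for x1, x2 in zip(a, b))

def pearson(x, y):
    """Pearson correlation coefficient in [-1, 1]:
    E[(X - mu_X)(Y - mu_Y)] / (sigma_X * sigma_Y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)
```

In the U-V setting above, the compared vectors would be two users' preference rows (for U-U similarity) or two commodities' preference columns (for V-V similarity).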
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010296805.0A CN111652620A (en) | 2020-04-15 | 2020-04-15 | Intelligent terminal interaction system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111652620A true CN111652620A (en) | 2020-09-11 |
Family
ID=72346085
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559758A (en) * | 2013-11-06 | 2014-02-05 | 上海煦荣信息技术有限公司 | Intelligent vending system and intelligent vending method |
CN109165992A (en) * | 2018-07-16 | 2019-01-08 | 北京旷视科技有限公司 | A kind of intelligent shopping guide method, apparatus, system and computer storage medium |
CN109658191A (en) * | 2018-12-20 | 2019-04-19 | 中南大学 | A kind of member's purchase system and method based on language identification and recognition of face |
Non-Patent Citations (1)
Title |
---|
LI, DEYI et al.: "Introduction to Artificial Intelligence", CAST (China Association for Science and Technology) New Generation Information Technology Series, Xidian University Press, pages: 178 - 182 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112417271A (en) * | 2020-11-09 | 2021-02-26 | 杭州讯酷科技有限公司 | Intelligent construction method of system with field recommendation |
CN112417271B (en) * | 2020-11-09 | 2023-09-01 | 杭州讯酷科技有限公司 | Intelligent system construction method with field recommendation |
CN114138160A (en) * | 2021-08-27 | 2022-03-04 | 苏州探寻文化科技有限公司 | Learning equipment interacting with user based on multiple modules |
CN116468510A (en) * | 2023-03-07 | 2023-07-21 | 北京泰迪熊移动科技有限公司 | E-commerce shopping guide realization method based on mobile device operating system |
CN116468510B (en) * | 2023-03-07 | 2024-05-10 | 北京泰迪未来科技股份有限公司 | E-commerce shopping guide realization method based on mobile device operating system |
CN116700968A (en) * | 2023-06-09 | 2023-09-05 | 广州银汉科技有限公司 | Intelligent interaction system based on elastic expansion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||