WO2021082472A1 - Shopping method, device and system - Google Patents

Shopping method, device and system

Info

Publication number
WO2021082472A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
payment
standard information
information
client
Prior art date
Application number
PCT/CN2020/097001
Other languages
English (en)
Chinese (zh)
Inventor
宋飞豹
蔡继发
樊锅旭
赵井全
Original Assignee
苏宁易购集团股份有限公司
苏宁云计算有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏宁易购集团股份有限公司, 苏宁云计算有限公司
Priority to CA3158927A1
Publication of WO2021082472A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks

Definitions

  • This application relates to the field of shopping, in particular to a video-based shopping method, device and system.
  • This application provides a shopping method, device and system to solve the problem of low efficiency of video shopping in the prior art.
  • the first aspect discloses a shopping method, which includes:
  • the voice payment request includes a user ID and voice associated with the client;
  • the voice and the voice payment standard information are verified, and if the verification is passed, a payment-related operation is performed on the order according to the voice.
  • the verification of the voice and the voice payment standard information, and if the verification is passed, performing payment-related operations based on the voice includes:
  • determining, as the target voice information, the voice payment standard information whose similarity with the voice satisfies a preset condition includes:
  • the voice payment standard information with the highest similarity with the voice and greater than a preset threshold is determined as the target voice information.
  • the calculating the similarity between the voice and each voice payment standard information one by one includes:
  • inputting the first feature into a preset similarity calculation model to calculate the similarity between the voice and each voice payment standard information.
  • the extracting the voice and the feature of each of the voice payment standard information separately includes:
  • the fully connected layer is used to perform dimensionality reduction processing on the feature vector assigned the attention weight to obtain the feature vector of the preset dimension.
  • the second aspect of the present application also discloses a shopping method, which includes:
  • the client sends a voice payment request including the user identification and voice associated with the client to the transaction payment platform within a preset time based on the received order of the commodity in the current video;
  • the transaction payment platform verifies the voice and the voice payment standard information, and if the verification is passed, sends the verification result to the client and performs payment-related operations on the order according to the voice.
  • the third aspect of the present application also discloses a shopping device, which includes:
  • a voice payment request receiving unit configured to receive a voice payment request sent by the client within a preset time based on an order of a commodity in the current video; the voice payment request includes a user identification and voice associated with the client;
  • a voice payment standard information obtaining unit configured to obtain at least one voice payment standard information of the user according to the user identifier
  • the voice payment request verification unit is configured to verify the voice and the voice payment standard information, and if the verification is passed, perform payment-related operations on the order according to the voice.
  • the voice payment request verification unit includes a similarity calculation unit, a target voice information determination unit, and a payment operation unit;
  • the similarity calculation unit is used to calculate the similarity between the voice and each voice payment standard information one by one;
  • the target voice information determining unit is configured to determine voice payment standard information whose similarity with the voice meets a preset condition as target voice information
  • the payment operation unit is configured to perform payment-related operations according to the content of the target voice information.
  • This application also discloses a shopping system, which includes a client terminal and a transaction payment platform:
  • the client sends a voice payment request including the user identification and voice associated with the client to the transaction payment platform based on the received order of the goods in the current video;
  • the transaction payment platform verifies the voice and the voice payment standard information, and if the verification is passed, sends the verification result to the client and performs payment-related operations on the order according to the voice.
  • the last aspect of this application also discloses a computer system, including:
  • One or more processors;
  • a memory associated with the one or more processors where the memory is used to store program instructions that, when read and executed by the one or more processors, execute the method described in the first aspect.
  • the technical solution of this application applies voice payment to video shopping. Since the voice can be input on the video shopping interface, the payment can be completed without jumping out of the video shopping interface, which improves the speed.
  • voice payment standard information is pre-stored; therefore, it is only necessary to determine whether the received voice is sent by the user, and there is no need to determine who sent it, which reduces the workload and further improves the processing speed.
  • this application uses a deep learning method to extract the Mel-frequency cepstral coefficient features of the two audio segments to be matched, then uses the convolution pooling layer of the dual convolutional neural network to obtain the local information of the audio, then uses a bidirectional long short-term memory (BiLSTM) network to obtain the context information of the audio features, and then uses the attention mechanism to highlight the important segments in the two audio segments.
  • finally, the cosine distance is used to compute the similarity between the outputs of the two audio segments.
  • the similarity calculation method of the present application can better extract the local information in the audio and avoid the interference of the global information.
  • Figure 1 is a system structure diagram provided by an embodiment of the present application.
  • Figure 2 is a flowchart of a method provided by an embodiment of the present application.
  • Figure 3 is a flow chart of model training provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of feature extraction provided by an embodiment of the present application.
  • FIG. 5 is a flow chart of the method provided by an embodiment of the present application.
  • Fig. 6 is a computer system architecture diagram provided by an embodiment of the present application.
  • the current video shopping methods are intuitive and three-dimensional, and are therefore widely favored by users.
  • the payment process needs to jump out of the video shopping interface, which reduces shopping efficiency.
  • the present application provides a voice payment method based on video shopping.
  • the audio device that has already been turned on can be used to receive the user's voice payment information for voice payment. Since there is no need to jump out of the video interface, this is more efficient than the existing technology.
  • the voice payment standard information of the user associated with the shopping client is pre-stored in the transaction payment platform, and the similarity between the voice payment information obtained from the client and the user's voice payment standard information is then calculated using the trained voice similarity calculation model. Once the calculated similarity exceeds the preset threshold, the currently received voice payment information is considered to be the same as the user's pre-stored voice payment standard information, and the voice payment verification is successful.
  • the voice payment standard information is the user's own voice information.
  • a certain amount of voice payment standard information, for example fewer than 10 pieces, can be pre-stored in the transaction payment platform, and can specifically involve types such as "confirmation" and "cancellation".
  • Figure 1 shows a specific video shopping system based on voice payment, which specifically includes:
  • the client terminal 11 of the shopping user, the merchandise sales platform 12 of the sales operator, the transaction payment platform 13, and the customer service terminal 14 of the operator.
  • the commodity sales platform 12 is used to provide data information such as commodity videos of various merchants, establish a long connection with the client, and send the video data information of the corresponding commodity to the client 11 for display according to the user's commodity browsing request.
  • the user browses the product on the client terminal 11 and places an order for the product on the video interface.
  • the order operation information is sent to the commodity sales platform 12 and then forwarded by the commodity sales platform 12 to the customer service terminal 14; the customer service terminal 14 displays the commodity video information, establishes a point-to-point connection with the client terminal 11, and sends the order information of the commodity in the video to the client terminal 11.
  • when the client terminal 11 detects that the order information has been received, it enters the voice payment waiting state or directly enters the voice payment state.
  • in the voice payment waiting state, the user needs to enter an instruction on the client to confirm voice payment, such as clicking a button on the video interface to enter voice payment; alternatively, the client 11 can output a prompt message asking whether to perform voice payment and enter the voice payment state after the user confirms.
  • the video interface of the client terminal 11 outputs "whether to enter voice payment", and the user enters "OK" to enter the voice payment state.
  • the voice information received within the preset time will be sent to the transaction payment platform for verification as voice payment information.
  • Entering the voice payment state directly means that once the order information is received, the voice information received within a preset time will be regarded as voice payment information and sent to the transaction payment platform 13 for voice verification.
  • the client 11 may also exchange information with the customer service terminal 14 through voice that is not intended for payment. To this end, it is necessary to determine, through an order or a combination of the order and user confirmation, the moment when the client 11 and the transaction payment platform 13 enter the voice payment state, so that the client 11 sends only the voice received within a preset time to the transaction payment platform 13 for voice payment verification.
  • the voice payment state cannot last indefinitely. Therefore, in this application only the voice information received within the preset time, such as 0.5 seconds, is verified as voice payment information.
  • the time setting can be long or short, and this application does not make specific restrictions.
  • the user can also re-enter the voice payment state based on another request.
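As an illustration only, the client-side flow described above might be sketched as follows; the `record_audio` and `send_to_platform` callables, the request field names, and the window length are assumptions made for this sketch, not details given by this application.

```python
import time

def client_voice_payment(order_info, record_audio, send_to_platform, window_seconds=0.5):
    """Hypothetical client-side sketch: once the order information is received and the
    voice payment state is entered, audio captured within the preset window is sent to
    the transaction payment platform as a voice payment request.
    record_audio and send_to_platform stand in for the client's real audio/network APIs;
    the 0.5 s default only mirrors the example window mentioned above."""
    deadline = time.time() + window_seconds
    clips = []
    while time.time() < deadline:
        clips.append(record_audio())          # the audio device is already open on the video interface
    request = {
        "user_id": order_info["user_id"],     # user identification associated with the client
        "order_id": order_info["order_id"],
        "voice": clips,                       # voice to be verified against the standard information
    }
    return send_to_platform(request)          # the platform returns the verification result
```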
  • the voice payment standard information of registered users is pre-stored in the transaction payment platform 13. Whenever a user registers a shopping account, the user can enter several pieces of voice payment standard information, and the user ID is stored in association with this standard information in the database of the transaction payment platform 13.
  • the transaction payment platform 13 obtains all voice payment standard information pre-stored by the user in the database according to the user identification in the voice payment request of the user, and inputs the voice information in the received voice payment request into the model one by one to perform feature extraction and similarity calculation.
  • if a similarity calculation result is higher than the preset threshold, indicating that the voice payment standard information matches the received voice information, the voice payment standard information with the highest similarity is selected as the matched target voice payment information; the verification result is returned on this basis and subsequent payment operations are performed, as sketched below.
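A minimal sketch of that platform-side matching step, assuming the standard information is stored per user ID and a trained similarity function is available; the function names, database layout and threshold value are illustrative, not part of this application.

```python
def verify_voice_payment(user_id, received_voice, standard_db, similarity_fn, threshold=0.8):
    """Compare the received voice with each piece of the user's pre-stored voice payment
    standard information one by one; return the best match above the threshold, or None.
    standard_db maps user_id -> list of (content, standard_voice); similarity_fn is the
    trained similarity calculation model; the 0.8 threshold is only illustrative."""
    best_content, best_score = None, threshold
    for content, standard_voice in standard_db.get(user_id, []):
        score = similarity_fn(received_voice, standard_voice)
        if score > best_score:                  # keep the highest similarity above the threshold
            best_content, best_score = content, score
    if best_content is None:
        return None                             # verification failed
    return best_content, best_score            # e.g. ("confirm", 0.93) -> perform the payment operation
```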
  • the operator may complete the foregoing operations based on one or more platforms. For example, an operator can process all voice messages sent by clients on a single platform: after entering the voice payment state according to the order and other information, the voice payment verification operation is performed on the received voice, and it is not performed in other states.
  • This application focuses on the voice payment process after the order is completed, so the follow-up introduction will not go into details on how users browse products and place orders.
  • Embodiment 1 of the present application provides a video shopping method based on voice payment, which is applied to the transaction payment platform side. As shown in FIG. 2, the method includes the following steps:
  • S21 Receive a voice payment request sent by the client within a preset time based on the order of the commodity in the current video; the voice payment request includes a user ID and voice associated with the client.
  • the client will receive the order information of the goods in the video after executing the order operation under the current video interface, and enter the voice payment waiting or voice payment state at this time.
  • the user needs to enter the voice payment start instruction on the client to enter the voice payment state.
  • the user sends the voice payment instruction through the button on the video interface, and the operator side also enters the voice payment verification status at this time.
  • the operator side or the client side may also issue a prompt message asking whether to proceed with the voice payment, so that the user can make a selection.
  • Entering the voice payment verification state directly means that once an order is generated, all voice messages sent within a predetermined time will be regarded as voice payment information and sent to the transaction payment platform for voice verification.
  • Voice can be received and sent under the video interface, so there is no need to jump out of the video interface to input the voice used for subsequent payment verification.
  • the user’s identity is generally the ID registered by the user on the shopping platform.
  • the ID and the user’s voice payment standard information are associated and stored, and all voice payment standard information of the user needs to be obtained from the database according to the ID.
  • Each user may have more than one voice payment standard information.
  • there are not many instructions used to indicate user payment so the amount of voice payment standard information will not be very large.
  • the voice payment standard information includes information such as confirmation of payment and cancellation of payment, so subsequent payment-related operations also include operations such as canceling payment.
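For illustration, the dispatch from the matched target voice information to a payment-related operation could look like the following; the content labels and the `pay`/`cancel` methods on the order object are hypothetical names, not part of this application.

```python
def perform_payment_operation(order, target_content):
    """Hypothetical dispatch from the content of the matched target voice information to a
    payment-related operation on the order; labels and order methods are assumptions."""
    actions = {
        "confirm_payment": order.pay,       # target info meaning "confirmation"
        "cancel_payment": order.cancel,     # target info meaning "cancellation"
    }
    action = actions.get(target_content)
    if action is None:
        raise ValueError(f"unrecognised payment instruction: {target_content}")
    return action()
```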
  • since the voice can be input in the video shopping interface, the payment can be completed without jumping out of the video shopping interface, which improves the speed.
  • since the user's voice payment standard information is pre-stored, it is only necessary to determine whether the received voice is from the user; there is no need to judge who specifically sent it, which reduces the workload and further improves the processing speed.
  • this application judges by the similarity between voices: the similarity between the voice and each piece of voice payment standard information is calculated one by one, and the voice payment standard information whose similarity meets the preset condition is determined as the target voice information.
  • the voice payment standard information with the highest similarity to the voice and greater than a preset threshold is determined as target voice information, and payment-related operations are performed according to the content of the target voice information.
  • the data input in the training phase is in the form of [audio 1, audio 2, label 1 or 0], where 1 means that audio 1 and audio 2 are the same person speaking the same content in different scenarios, and 0 means they are not, for example different people speaking, or the same person speaking different content.
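A rough sketch of how such [audio 1, audio 2, label] triples could be assembled is shown below; the nested dictionary layout of the recordings and the balancing strategy are assumptions made for illustration.

```python
import itertools
import random

def make_training_pairs(recordings):
    """recordings: {speaker_id: {content: [audio_path, ...]}}.
    Positive pairs (label 1): same speaker, same content, different recordings.
    Negative pairs (label 0): different speakers, or same speaker with different content."""
    pairs = []
    for speaker, by_content in recordings.items():
        for content, paths in by_content.items():
            for a, b in itertools.combinations(paths, 2):
                pairs.append((a, b, 1))                               # same speaker, same content
        for (_, p1), (_, p2) in itertools.combinations(by_content.items(), 2):
            pairs.append((random.choice(p1), random.choice(p2), 0))   # same speaker, different content
    speakers = list(recordings)
    for _ in range(len(pairs)):                                       # roughly balance with cross-speaker negatives
        s1, s2 = random.sample(speakers, 2)
        p1 = random.choice(list(recordings[s1].values()))
        p2 = random.choice(list(recordings[s2].values()))
        pairs.append((random.choice(p1), random.choice(p2), 0))
    return pairs
```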
  • the audio data is preprocessed by the feature extraction module, and the abstract features of the two audio segments are extracted, mainly the Mel-frequency cepstral coefficient (MFCC) features.
  • the MFCC feature provides a feature expression for each frame of audio, and each frame exhibits short-time stationarity.
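As a sketch of the MFCC step using the librosa library; the sample rate, number of coefficients, and normalisation are illustrative choices, not values specified by this application.

```python
import librosa

def extract_mfcc(audio_path, sr=16000, n_mfcc=13):
    """Load an audio clip and return its frame-level MFCC matrix (frames x n_mfcc).
    Each row is the short-time feature expression of one frame."""
    signal, sr = librosa.load(audio_path, sr=sr)                 # resample to a fixed rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    # per-coefficient normalisation keeps different clips comparable
    mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-8)
    return mfcc.T
```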
  • the dual convolutional neural network of the convolution module is used to convolve and pool the MFCC features.
  • the receptive fields of the two convolutions are different, so that the two convolutions can obtain feature information of different dimensions.
  • the BN layer is used to maintain the generalization of the network. After that, the two branches are connected through matrix multiplication, and finally the two convolution results are weighted through the fully connected layer.
  • Figure 4 shows the dual convolutional neural network processing process.
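One possible PyTorch reading of that dual-convolution block is sketched below; the kernel sizes, channel counts and the way the two branches are merged are assumptions, since the description only states that the receptive fields differ, that BN is used, and that a fully connected layer weights the two results.

```python
import torch
import torch.nn as nn

class DualConvBlock(nn.Module):
    """Two parallel 1-D convolution branches with different receptive fields, each with
    batch normalisation and pooling; the branch outputs are concatenated and re-weighted
    by a fully connected layer. All sizes here are illustrative."""
    def __init__(self, n_mfcc=13, channels=64):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Conv1d(n_mfcc, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.MaxPool1d(2))
        self.branch_b = nn.Sequential(
            nn.Conv1d(n_mfcc, channels, kernel_size=7, padding=3),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.MaxPool1d(2))
        self.fuse = nn.Linear(2 * channels, channels)        # fully connected weighting of the two results

    def forward(self, mfcc):                  # mfcc: (batch, frames, n_mfcc)
        x = mfcc.transpose(1, 2)              # Conv1d expects (batch, channels, frames)
        a, b = self.branch_a(x), self.branch_b(x)
        merged = torch.cat([a, b], dim=1).transpose(1, 2)    # (batch, frames // 2, 2 * channels)
        return self.fuse(merged)              # (batch, frames // 2, channels)
```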
  • the BiLSTM network module uses the bidirectional long short-term memory network BiLSTM to convert the frame-level MFCC feature vectors of the audio into the audio's hidden-layer state feature vectors for feature encoding.
  • the attention module connects the hidden layer state features after feature encoding and uses the attention mechanism Attention for weight distribution.
  • the specific attention mechanism used is Bahdanau attention, which extracts contextual information well by assigning a weight to each feature vector; the effect is more pronounced for long vectors.
  • the features and labels that have been weighted by the attention mechanism are input into the similarity calculation model together for iterative optimization to obtain the trained similarity calculation model.
  • the above model uses a deep learning method: MFCC is used to extract the features of the two pieces of audio to be matched, the convolutional pooling layer of the dual convolutional neural network is then used to obtain the local information of the audio, BiLSTM is used to obtain the context information of the audio features, the attention mechanism is used to highlight the important segments in the two audio segments, and finally the cosine distance is used at the last fully connected layer to compute the similarity between the outputs for the two audio segments.
  • the similarity calculation method of the present application can better extract the local information in the audio and avoid the interference of the global information.
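Putting the remaining stages together, a hedged PyTorch sketch of the matching network might look like this; the additive (Bahdanau-style) attention, the hidden size, and the simple convolutional front end (standing in for the dual-convolution block sketched above) are illustrative choices, not the exact architecture of this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style additive attention: score each time step, softmax-normalise the
    scores, and return the attention-weighted sum of the hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, h):                        # h: (batch, frames, dim)
        weights = F.softmax(self.score(h), dim=1)
        return (weights * h).sum(dim=1)          # (batch, dim)

class VoiceSimilarityModel(nn.Module):
    """Sketch: convolutional front end -> BiLSTM encoding -> attention pooling, then the
    cosine similarity between the two clip embeddings. The single Conv1d front end stands
    in for the dual-convolution block described earlier."""
    def __init__(self, n_mfcc=13, hidden=128):
        super().__init__()
        self.front = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64), nn.ReLU())
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.attend = AdditiveAttention(2 * hidden)

    def embed(self, mfcc):                       # mfcc: (batch, frames, n_mfcc)
        x = self.front(mfcc.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(x)                    # (batch, frames, 2 * hidden)
        return self.attend(h)

    def forward(self, mfcc_a, mfcc_b):
        return F.cosine_similarity(self.embed(mfcc_a), self.embed(mfcc_b), dim=-1)
```

A score above the preset threshold would then be treated as a successful match, as described above.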
  • the above feature extraction method can be used to perform feature extraction, and the feature vectors are then input into the trained similarity calculation model for similarity calculation to obtain the calculation result.
  • the second embodiment of the present application provides a shopping method applied to the client side, and the method includes:
  • the client sends a voice payment request including the user identification and voice associated with the client to the transaction payment platform based on the received order of the commodity in the current video;
  • the transaction payment platform obtains at least one voice payment standard information of the user according to the user identification;
  • the transaction payment platform verifies the voice and the voice payment standard information, and if the verification is passed, sends a verification result to the client and performs payment-related operations on the order according to the voice.
  • the above step S52 includes:
  • the transaction payment platform calculates the similarity between the voice and each voice payment standard information one by one;
  • the transaction payment platform determines the voice payment standard information whose similarity with the voice meets the preset condition as the target voice information, for example, determining the voice payment standard information with the highest similarity to the voice and greater than the preset threshold as the target voice information.
  • the transaction payment platform performs payment-related operations according to the content of the target voice information.
  • the calculation of the similarity between the voice and each voice payment standard information by the transaction payment platform one by one includes:
  • the transaction payment platform separately extracts the features of the voice and of each piece of voice payment standard information; specifically, it extracts the MFCC features of the voice and of each piece of voice payment standard information, then processes the MFCC features through the dual convolutional neural network model and the bidirectional long short-term memory network BiLSTM to obtain feature vectors and perform feature encoding, and uses the attention mechanism to perform a weighted summation of the feature vectors in the encoding stage to obtain the attention weight values of the different feature vectors.
  • the transaction payment platform inputs the extracted features into a preset similarity calculation model to calculate the similarity between the voice and each voice payment standard information.
  • this application also provides a shopping device, the device including:
  • a voice payment request receiving unit configured to receive a voice payment request sent by the client based on an order of a commodity in the current video; the voice payment request includes a user ID and voice associated with the client;
  • a voice payment standard information obtaining unit configured to obtain at least one voice payment standard information of the user according to the user identifier
  • the voice payment request verification unit is configured to verify the voice and the voice payment standard information, and if the verification is passed, perform payment-related operations on the order according to the voice.
  • the voice payment request verification unit includes a similarity calculation unit, a target voice information determination unit, and a payment operation unit;
  • the similarity calculation unit is used to calculate the similarity between the voice and each voice payment standard information one by one;
  • the target voice information determining unit is configured to determine voice payment standard information whose similarity with the voice meets a preset condition as target voice information
  • the payment operation unit is configured to perform payment-related operations according to the content of the target voice information.
  • the similarity calculation unit includes:
  • the mfcc feature extraction module is used to extract the abstract features of two audios, mainly mfcc features, and mfcc can obtain the feature expression of each frame of audio.
  • the convolution module adopts the idea of double convolution.
  • the receptive fields of the two convolutions are different, so that the two convolutions can obtain feature information of different dimensions.
  • the BN layer is used to maintain the generalization of the network; the two branches are then connected, and weight distribution is finally performed through the fully connected layer.
  • the bidirectional long short-term memory (BiLSTM) network module is used to convert the frame-level MFCC feature vectors of the audio into the audio's hidden-layer state feature vectors for feature encoding.
  • the Attention module of the attention mechanism is used to extract contextual information well by giving weight to each feature vector, and the effect is more obvious for long vectors.
  • the similarity measurement module is used to calculate the output similarity according to the feature vector input by the Attention module of the attention mechanism.
  • the fourth embodiment of the present application provides a shopping system, the system includes a client and a transaction payment platform:
  • the client sends a voice payment request including the user identification and voice associated with the client to the transaction payment platform within a preset time based on the received order of the commodity in the current video;
  • the transaction payment platform verifies the voice and the voice payment standard information, and if the verification is passed, sends the verification result to the client and performs payment-related operations on the order according to the voice.
  • Embodiment 5 of the present application provides a computer system, including:
  • One or more processors;
  • a memory associated with the one or more processors where the memory is used to store program instructions, and when the program instructions are read and executed by the one or more processors, perform the following operations:
  • the voice payment request includes a user ID and voice associated with the client;
  • the voice and the voice payment standard information are verified, and if the verification is passed, a payment-related operation is performed on the order according to the voice.
  • the verification of the voice and the voice payment standard information, and if the verification is passed, performing payment-related operations based on the voice includes:
  • the voice payment standard information whose similarity with the voice meets a preset condition is determined as the target voice information, for example, the voice payment standard information with the highest similarity value with the voice and greater than a preset threshold is determined as the target voice information.
  • the calculating the similarity between the voice and each voice payment standard information one by one includes:
  • the extracting the voice and the feature of each of the voice payment standard information separately includes:
  • the attention mechanism is used to perform a weighted summation of the feature vectors in the encoding stage to obtain the attention weights of different feature vectors.
  • FIG. 6 exemplarily shows the architecture of the computer system, which may specifically include a processor 1510, a video display adapter 1511, a disk drive 1512, an input/output interface 1513, a network interface 1514, and a memory 1520.
  • the processor 1510, the video display adapter 1511, the disk drive 1512, the input/output interface 1513, the network interface 1514, and the memory 1520 may be communicatively connected through the communication bus 1530.
  • the processor 1510 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, etc., for executing relevant programs to realize the technical solutions provided in this application.
  • the memory 1520 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), static storage device, dynamic storage device, etc.
  • the memory 1520 may store an operating system 1521 used to control the operation of the computer system 1500, and a basic input output system (BIOS) used to control low-level operations of the computer system 1500.
  • a web browser 1523, a data storage management system 1524, and an icon font processing system 1525 may also be stored in the memory 1520.
  • the foregoing icon font processing system 1525 may be an application program that specifically implements the foregoing steps in the embodiment of the present application.
  • the related program code is stored in the memory 1520 and is called and executed by the processor 1510.
  • the input/output interface 1513 is used to connect input/output modules to realize information input and output.
  • the input/output module can be configured in the device as a component (not shown in the figure), or it can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and an output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the network interface 1514 is used to connect a communication module (not shown in the figure) to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), or through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • the bus 1530 includes a path to transmit information between various components of the device (for example, the processor 1510, the video display adapter 1511, the disk drive 1512, the input/output interface 1513, the network interface 1514, and the memory 1520).
  • the computer system 1500 can also obtain information about specific receiving conditions from the virtual resource object receiving condition information database 1541 for condition determination, and so on.
  • the above device only shows the processor 1510, the video display adapter 1511, the disk drive 1512, the input/output interface 1513, the network interface 1514, the memory 1520, the bus 1530, etc.; in the specific implementation process, the device may also include other components necessary for normal operation.
  • the above-mentioned device may also include only the components necessary to implement the solution of the present application, and not necessarily include all the components shown in the figure.

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A shopping method, device and system are provided. The method comprises: receiving a voice payment request sent by a client within a preset time on the basis of an order of a commodity in a current video (S21), the voice payment request comprising a user identification and voice associated with the client; obtaining, according to the user identification, at least one piece of voice payment standard information of the user (S22); and performing verification on the basis of the voice and the voice payment standard information, and, if the verification is passed, performing a payment-related operation on the order according to the voice (S23). The method simplifies the payment process and further improves shopping efficiency.
PCT/CN2020/097001 2019-11-01 2020-06-19 Shopping method, device and system WO2021082472A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3158927A CA3158927A1 (fr) 2019-11-01 2020-06-19 Shopping method, device and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911060944.7 2019-11-01
CN201911060944.7A CN110992125A (zh) 2019-11-01 2019-11-01 一种购物方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2021082472A1 (fr)

Family

ID=70082913

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097001 WO2021082472A1 (fr) Shopping method, device and system

Country Status (3)

Country Link
CN (1) CN110992125A (fr)
CA (1) CA3158927A1 (fr)
WO (1) WO2021082472A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992125A (zh) * 2019-11-01 2020-04-10 苏宁云计算有限公司 一种购物方法、装置及系统
CN112450659A (zh) * 2020-11-10 2021-03-09 在线场景(北京)科技有限公司 一种可选接待的展商平台
CN113095902A (zh) * 2021-03-11 2021-07-09 北京联创新天科技有限公司 一种电子商务订单管理系统

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732377A (zh) * 2013-12-24 2015-06-24 中国移动通信集团辽宁有限公司 一种语音转接支付方法及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190230070A1 (en) * 2014-03-31 2019-07-25 Monticello Enterprises LLC System and Method for In-App Payments
CN104134140A (zh) * 2014-07-23 2014-11-05 南宁市锋威科技有限公司 一种移动手机支付系统
CN105898589A (zh) * 2015-12-09 2016-08-24 乐视网信息技术(北京)股份有限公司 一种用于视频播放的支付方法、装置及电视设备
CN110992125A (zh) * 2019-11-01 2020-04-10 苏宁云计算有限公司 一种购物方法、装置及系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230127314A1 (en) * 2017-06-16 2023-04-27 Alibaba Group Holding Limited Payment method, client, electronic device, storage medium, and server

Also Published As

Publication number Publication date
CA3158927A1 (fr) 2021-05-06
CN110992125A (zh) 2020-04-10

Similar Documents

Publication Publication Date Title
WO2021082472A1 (fr) Shopping method, device and system
CN110692048B (zh) 会话中任务改变的检测
US11804035B2 (en) Intelligent online personal assistant with offline visual search database
US20240037626A1 (en) Intelligent online personal assistant with multi-turn dialog based on visual search
CN109844767B (zh) 基于图像分析和预测的可视化搜索
US20150161613A1 (en) Methods and systems for authentications and online transactions
JP6812392B2 (ja) 情報出力方法、情報出力装置、端末装置及びコンピュータ読取可能な記憶媒体
JP2019510291A (ja) 人型ロボットを用いてトランザクションを支援する方法
US20190279273A1 (en) Shopping recommendation method, client, and server
KR20160011709A (ko) 지불 확인을 위한 방법, 장치 및 시스템
US11126685B2 (en) Preview and optimization of publication for target computing device
JP7504855B2 (ja) 相互接続された音声検証システムの使用を通して相互運用性を達成するためのシステム、方法、およびプログラム
US10657525B2 (en) Method and apparatus for determining expense category distance between transactions via transaction signatures
US11861318B2 (en) Method for providing sentences on basis of persona, and electronic device supporting same
KR20210050884A (ko) 화자 인식을 위한 등록 방법 및 장치
CN112446753A (zh) 一种数据处理方法、装置、设备和机器可读介质
US20190311368A1 (en) Facilitating user engagement in offline transactions
US20190311369A1 (en) User authentication in hybrid online and real-world environments
US11854008B2 (en) Systems and methods for conducting remote user authentication
US11705113B2 (en) Priority and context-based routing of speech processing
US11657805B2 (en) Dynamic context-based routing of speech processing
US11830497B2 (en) Multi-domain intent handling with cross-domain contextual signals
US20160321637A1 (en) Point of sale payment using mobile device and checkout credentials
TW201944320A (zh) 支付認證方法、裝置、設備及存儲介質
US20220415311A1 (en) Early invocation for contextual data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20880464

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3158927

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20880464

Country of ref document: EP

Kind code of ref document: A1
