CN110880116A - Sound box and transaction method - Google Patents

Sound box and transaction method

Info

Publication number
CN110880116A
CN110880116A
Authority
CN
China
Prior art keywords
features, entered, advance, iris, living
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811043445.2A
Other languages
Chinese (zh)
Inventor
金友芝
孙宇新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
iFlytek Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201811043445.2A priority Critical patent/CN110880116A/en
Publication of CN110880116A publication Critical patent/CN110880116A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/02Casings; Cabinets; Supports therefor; Mountings therein
    • H04R1/028Casings; Cabinets; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles

Abstract

The present disclosure provides a sound box, including: a camera for acquiring biometric information; and a processor for comparing the biometric information with pre-entered standard biometric information, determining whether the two match, and executing a transaction operation on a transaction target object when the biometric information matches the pre-entered standard biometric information. The present disclosure also provides a transaction method.

Description

Sound box and transaction method
Technical Field
The disclosure relates to the technical field of internet of things, and in particular relates to a sound box and a transaction method.
Background
With rapid technological development, a variety of intelligent devices have emerged, and their application environments and required functions have grown increasingly complex. Taking the sound box as an example, a user can control it by voice, freeing both hands, so that the sound box performs a series of operations — for example, playing music or searching for corresponding data on voice command.
In implementing the disclosed concept, the inventors found at least the following problem in the related art:
when a user operates a sound box to complete operations with higher security requirements, existing sound boxes struggle to guarantee a safe and effective operating environment due to hardware or algorithmic limitations, resulting in a poor operating experience for the user.
Disclosure of Invention
In view of the above, the present disclosure provides a sound box and a transaction method.
One aspect of the present disclosure provides a sound box, including: a camera for acquiring biometric information; and a processor for comparing the biometric information with pre-entered standard biometric information, determining whether the two match, and executing a transaction operation on a transaction target object when the biometric information matches the pre-entered standard biometric information.
According to an embodiment of the present disclosure, the camera is further used to pre-enter standard biometric information, where the pre-entered standard biometric information includes facial features and iris features, and the sound box further includes a memory for storing the pre-entered standard biometric information.
According to an embodiment of the present disclosure, the biometric information acquired by the camera includes facial features and iris features of a living being. When determining whether the biometric information matches the pre-entered standard biometric information, the processor is configured to: compare the facial features of the living being with the pre-entered facial features and determine whether they match; and compare the iris features of the living being with the pre-entered iris features and determine whether they match. The processor is further configured to execute a transaction operation on the transaction target object if the facial features of the living being match the pre-entered facial features and the iris features of the living being match the pre-entered iris features.
According to an embodiment of the present disclosure, the sound box further includes a microphone for recording a voiceprint feature of a living being, where the processor is further configured to compare the voiceprint feature of the living being with a pre-entered voiceprint feature and determine whether they match; and to execute a transaction operation on the transaction target object when the facial features, iris features, and voiceprint feature of the living being all match their pre-entered counterparts.
According to an embodiment of the present disclosure, the microphone is further configured to acquire voice information input by a user, where the voice information is used at least to generate an order containing the target object; the processor is further configured to generate an order containing the target object according to the voice information, and to execute a payment operation to pay the order if the biometric information matches the pre-entered standard biometric information.
According to an embodiment of the present disclosure, the pre-entered standard biometric information further includes a mouth shape feature, where: the camera is further used to acquire the mouth shape feature of the living being while it speaks; and the processor is further configured to compare that mouth shape feature with the pre-entered mouth shape feature, determine whether the two match, and execute a transaction operation on the transaction target object when they match.
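The combined check described in the embodiments above — face, iris, voiceprint, and mouth shape must each match their pre-entered counterparts before a transaction proceeds — can be sketched as follows. This is an illustrative sketch, not code from the patent: the function names are hypothetical, and cosine similarity between feature vectors stands in for whatever matcher a real implementation would use.

```python
def matches(captured, enrolled, threshold=0.9):
    """Hypothetical matcher: cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(captured, enrolled))
    norm = (sum(a * a for a in captured) ** 0.5) * \
           (sum(b * b for b in enrolled) ** 0.5)
    return norm > 0 and dot / norm >= threshold

def authorize_transaction(captured, enrolled):
    """Authorize only when every factor matches its pre-entered standard.

    A single mismatching factor (face, iris, voiceprint, or mouth shape)
    blocks the transaction, mirroring the all-must-match rule above.
    """
    factors = ("face", "iris", "voiceprint", "mouth_shape")
    return all(matches(captured[f], enrolled[f]) for f in factors)
```

Under this all-must-match design, adding factors can only tighten security (at the cost of more false rejections), which is the trade-off the disclosure accepts for payment scenarios.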
Another aspect of the present disclosure provides a transaction method, including: acquiring biometric information through a sound box configured with a camera; comparing the biometric information with pre-entered standard biometric information to determine whether the two match; and executing a transaction operation on the transaction target object when the biometric information matches the pre-entered standard biometric information.
According to an embodiment of the present disclosure, the method further includes: pre-entering standard biometric information through the sound box configured with the camera, where the pre-entered standard biometric information includes facial features and iris features; and storing the pre-entered standard biometric information in the sound box configured with the camera.
According to an embodiment of the present disclosure, acquiring biometric information through the sound box configured with a camera includes acquiring facial features and iris features of a living being through the camera. Determining whether the biometric information matches the pre-entered standard biometric information includes: comparing the facial features of the living being with pre-entered facial features and determining whether they match; and comparing the iris features of the living being with pre-entered iris features and determining whether they match. Executing the transaction operation on the transaction target object includes: executing the transaction operation when the facial features of the living being match the pre-entered facial features and the iris features of the living being match the pre-entered iris features.
According to an embodiment of the present disclosure, in a case where the pre-entered standard biometric information further includes a voiceprint feature: determining whether the biometric information matches the pre-entered standard biometric information further includes comparing the voiceprint feature of the living being with the pre-entered voiceprint feature and determining whether they match; and executing the transaction operation on the target object further includes executing the transaction operation when the facial features, iris features, and voiceprint feature of the living being all match their pre-entered counterparts.
According to an embodiment of the present disclosure, the method further includes: acquiring, through the sound box configured with the camera, voice information input by a user, where the voice information is used at least to generate an order containing the target object; generating an order containing the target object according to the voice information; and executing a payment operation to pay the order if the biometric information matches the pre-entered standard biometric information.
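The voice-to-order-to-payment flow described above can be sketched as follows. This is a minimal sketch under assumed names (`generate_order` and `pay_order` are not from the patent), with the speech-recognition step reduced to already-recognized text.

```python
def generate_order(voice_text):
    """Toy order generation: treat the recognized utterance as the item.

    A real system would run speech recognition and map the utterance to a
    catalog entry; here the text itself stands in for the item description.
    """
    return {"item": voice_text.strip(), "status": "created"}

def pay_order(order, biometric_ok):
    """Execute the payment operation only when the captured biometric
    information matched the pre-entered standard information."""
    if biometric_ok:
        order["status"] = "paid"
    return order
```

The key property, as in the claims, is that order creation is unauthenticated and cheap, while the payment step is gated on the biometric match result.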
According to an embodiment of the present disclosure, the pre-entered standard biometric information further includes a mouth shape feature, where: acquiring biometric information through the sound box configured with the camera includes acquiring, through the camera, the mouth shape feature of the living being while it speaks; determining whether the biometric information matches the pre-entered standard biometric information includes comparing that mouth shape feature with the pre-entered mouth shape feature and determining whether the two match; and executing the transaction operation on the target object includes executing the transaction operation when the mouth shape feature of the living being while speaking matches the pre-entered mouth shape feature.
Another aspect of the disclosure provides a non-volatile storage medium storing computer-executable instructions for implementing a transaction method as described above when executed.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions for implementing a transaction method as described above when executed.
According to the embodiments of the present disclosure, because transaction authentication is performed through a sound box configured with a camera, transactions such as shopping can be carried out directly through the sound box. This at least partially solves the technical problem in the related art that, when a user operates a sound box to complete operations with higher security requirements, existing sound boxes struggle to guarantee a safe and effective operating environment due to hardware or algorithmic limitations, and thereby achieves the technical effects of improving transaction security and expanding transaction scenarios.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the sound box and the transaction method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a block diagram of a sound box according to an embodiment of the present disclosure;
FIGS. 3A and 3B schematically illustrate exterior views of a sound box according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a block diagram of a sound box according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of a sound box according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a transaction method according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of the pre-entry of standard biometric information according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of a transaction method according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a flow chart of a transaction method according to another embodiment of the present disclosure;
FIG. 10 schematically illustrates a flow chart of a transaction method according to another embodiment of the present disclosure;
FIG. 11 schematically illustrates a flow chart of a transaction method according to another embodiment of the present disclosure; and
FIG. 12 schematically illustrates a block diagram of a sound box suitable for implementing the above methods according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
The embodiments of the present disclosure provide a sound box and a transaction method. The sound box includes: a camera for acquiring biometric information; and a processor for comparing the biometric information with pre-entered standard biometric information, determining whether the two match, and executing a transaction operation on a transaction target object when the biometric information matches the pre-entered standard biometric information.
FIG. 1 schematically shows an exemplary system architecture to which the sound box and the transaction method may be applied according to an embodiment of the present disclosure. It should be noted that FIG. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied, intended to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments, or scenarios.
As shown in FIG. 1, the system architecture 100 according to this embodiment may include a sound box 102, a network 103, and a server 104. The network 103 is the medium that provides a communication link between the sound box 102 and the server 104, and may include various connection types, such as wired and/or wireless communication links.
The user 101 may use the sound box 102 to interact with the server 104 over the network 103 to receive or send messages and the like.
The server 104 may provide various services, for example as a back-office management server (for example only) that supports websites the user 101 accesses using the sound box 102. The back-office management server may analyze and process received data such as user requests, and feed the processing result (e.g., a web page, information, or data obtained or generated according to the user request) back to the sound box 102.
Specifically, for example, the server 104 is a back-end server of an e-commerce platform, and the user 101 can access services such as shopping, takeout, and express delivery based on voice interaction. The user 101 can add items to the shopping cart on the sound box 102 by voice, and can be verified through the sound box 102 during subsequent shopping settlement and order confirmation, ensuring the security and convenience of payment.
It should be noted that the transaction method provided by the embodiments of the present disclosure may be executed by the sound box 102, or by another sound box different from the sound box 102.
It should be understood that the numbers of sound boxes, networks, and servers in FIG. 1 are merely illustrative. There may be any number of sound boxes, networks, and servers as required by the implementation.
FIG. 2 schematically illustrates a block diagram of a sound box according to an embodiment of the present disclosure.
As shown in FIG. 2, the sound box 200 includes a camera 210 and a processor 220.
The camera 210 is used to acquire biometric information.
The processor 220 is configured to compare the biometric information with pre-entered standard biometric information, and determine whether the biometric information matches the pre-entered standard biometric information; and executing a transaction operation for the transaction target object in a case where the biometric information matches the pre-entered standard biometric information.
According to embodiments of the present disclosure, biometric information includes, but is not limited to, facial features, iris features, and the user's mouth shape features when speaking. The camera 210 disposed on the sound box 200 can be used to acquire the user's facial features, iris features, mouth shape features when speaking, and the like.
Taking the acquisition of facial features as an example, according to an embodiment of the present disclosure, a user may place their face in a specific area in front of the camera 210. To improve interactivity, the sound box 200 may use an LCD screen to play back the video captured by the camera 210 (for a sound box with a screen) or use voice playback to instruct the user to adjust their posture. When collecting the biometric features, it is necessary to ensure that the external light source is sufficient, and different postures (such as head up, head down, head shaking, and the like) can be collected at the same time. After the camera 210 captures the face picture, the facial features are extracted and can be stored in the face database in the sound box 200.
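The enrollment flow just described — capture the face in several postures, extract features, and store them in an on-device database — might look like the following sketch. The posture list, `extract_face_features`, and the plain-dict "database" are all illustrative assumptions; the patent does not specify a feature extractor.

```python
# Postures the text suggests capturing (head up, head down, head shaking,
# plus a frontal view) -- the exact set is an assumption.
POSES = ("frontal", "head_up", "head_down", "head_shake")

def extract_face_features(frame):
    """Placeholder: a real system would run a face-embedding model here."""
    return [float(x) for x in frame]

def enroll_face(frames_by_pose, face_db):
    """Store one feature vector per captured posture in the local database."""
    for pose in POSES:
        frame = frames_by_pose.get(pose)
        if frame is None:
            raise ValueError(f"missing capture for posture: {pose}")
        face_db[pose] = extract_face_features(frame)
    return face_db
```

Keeping one template per posture is one way to make later matching robust to head pose, which is why the text suggests collecting several postures during enrollment.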
Taking iris feature acquisition as an example, according to an embodiment of the present disclosure, a user can place their face in a specific area in front of the camera 210. To ensure the recognition rate, a separate telephoto camera can be added to guarantee the clarity of the iris capture. After the camera 210 captures the iris picture, the iris features are extracted and can be stored in the iris database in the sound box 200.
Taking the user's mouth shape features when speaking as an example, according to an embodiment of the present disclosure, the user can place their lips in a specific area in front of the camera 210. To ensure the recognition rate, a separate telephoto camera can be added to guarantee the clarity of the lip capture while the user speaks. After the camera 210 captures the mouth shape, the mouth shape features are extracted and can be stored in the mouth shape feature database in the sound box 200.
According to embodiments of the present disclosure, standard biometric information may be retrieved from the different databases within the sound box 200.
According to embodiments of the present disclosure, the appearance of the sound box 200 is not limited; for example, the sound box 200 may be a rectangular parallelepiped, a cube, a sphere, or the like, as shown in FIGS. 3A and 3B, which is not described again here.
According to embodiments of the present disclosure, the position of the camera 210 on the sound box 200 is not limited; for example, the camera may be disposed at any position around the sound box 200, or inside the sound box 200, extending out through a lifting device when a photograph needs to be taken.
According to the embodiment of the disclosure, because a technical means of transaction authentication through the sound box provided with the camera is adopted, and transaction such as shopping can be directly carried out through the sound box, the technical problem that when a user operates the sound box to complete certain operation with higher safety performance requirements in the related art, the existing sound box is difficult to ensure a safe and effective operation environment due to hardware or algorithm limitation is at least partially solved, and the technical effects of improving transaction safety and expanding transaction scenes are further achieved.
FIG. 4 schematically illustrates a block diagram of a sound box according to another embodiment of the present disclosure.
As shown in FIG. 4, the camera 210 is further configured to pre-enter standard biometric information, where the pre-entered standard biometric information includes facial features and iris features, and the sound box 200 further includes a memory 230 configured to store the pre-entered standard biometric information.
Taking the case where the pre-entered standard biometric information is the iris feature as an example, according to an embodiment of the present disclosure, a user can place their face in a specific area in front of the camera 210; to ensure the recognition rate, a separate telephoto camera can be added to guarantee the clarity of the iris capture. After the camera 210 captures the iris picture, the iris features are extracted and may be stored in the memory 230.
According to embodiments of the present disclosure, the manner of pre-entering standard facial features or mouth shape features is the same as or similar to the above and is not repeated here.
According to embodiments of the present disclosure, the pre-entered biometric information is stored in the memory 230 of the sound box 200, so that the standard biometric features can be retrieved quickly during authentication, thereby improving authentication efficiency.
According to an embodiment of the present disclosure, when acquiring biometric information, the camera 210 acquires the facial features and iris features of a living being.
When determining whether the biometric information matches the pre-entered standard biometric information, the processor 220 is configured to compare the facial features of the living being with the pre-entered facial features and determine whether they match, and to compare the iris features of the living being with the pre-entered iris features and determine whether they match. The processor 220 is further configured to execute a transaction operation on the transaction target object if the facial features of the living being match the pre-entered facial features and the iris features of the living being match the pre-entered iris features.
According to an embodiment of the present disclosure, during face recognition, the sound box 200 may prompt the user, through a screen or voice, to adjust their posture so as to face the camera; the sound box 200 captures a face image through the camera, and the processor 220 compares the captured facial features with the pre-entered facial features to determine whether they match.
During iris recognition, the sound box 200 may likewise prompt the user, through a screen or voice, to adjust their posture toward the camera; the sound box captures iris data through the camera, and the processor 220 compares the captured iris features with the pre-entered iris features to determine whether they match.
According to an embodiment of the present disclosure, when all of the biometric information matches, indicating that the authentication succeeded, the processor 220 may execute a transaction operation on the transaction target object. If any one item of the biometric information does not match, the processor 220 does not execute the transaction operation.
According to embodiments of the present disclosure, the sound box performs both face recognition and iris recognition, and the processor 220 executes the transaction operation on the transaction target object only when both are recognized correctly, which can improve the security of authentication.
FIG. 5 schematically illustrates a block diagram of a sound box according to another embodiment of the present disclosure.
As shown in FIG. 5, according to an embodiment of the present disclosure, the sound box 200 further includes a microphone 240 for entering the voiceprint feature of a living being, where the processor 220 is further configured to:
compare the voiceprint feature of the living being with the pre-entered voiceprint feature and determine whether they match; and
execute a transaction operation on the transaction target object when the facial features, iris features, and voiceprint feature of the living being all match their pre-entered counterparts.
According to an embodiment of the present disclosure, when a voiceprint is recorded, the sound box 200 may provide a text prompt (for a sound box with a screen) or a voice prompt instructing the user to speak a specified text to complete the voiceprint entry. To ensure the voiceprint recognition rate, the external environment noise must be relatively low during recording, and, for recognition at different distances, the user records at different volumes. After the microphone 240 records the audio, the voiceprint feature data are extracted and can be stored in the memory 230.
According to an embodiment of the present disclosure, during voiceprint authentication the sound box 200 may prompt the user to speak. For example, in a transaction scenario where payment is confirmed by voice, the user says "confirm payment"; the microphone 240 records the audio of the user's "confirm payment" and compares it with the voiceprint features previously recorded in the memory 230.
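The "confirm payment" check above involves two conditions: the spoken phrase must be the expected one, and the voiceprint must match the enrolled features. A hedged sketch, assuming hypothetical upstream components (a speech recognizer that yields text and a voiceprint matcher that yields a similarity score; the 0.75 threshold is an arbitrary illustrative value):

```python
# Illustrative sketch of the voice payment-confirmation step. The recognized
# text and the voiceprint similarity score are assumed to come from upstream
# ASR and speaker-verification components, which are not implemented here.

def verify_payment_phrase(recognized_text, expected_phrase,
                          voiceprint_score, score_threshold=0.75):
    """Confirm payment only if both the phrase and the voiceprint match."""
    text_ok = recognized_text.strip().lower() == expected_phrase.strip().lower()
    voice_ok = voiceprint_score >= score_threshold
    return text_ok and voice_ok
```

Requiring both conditions means a replayed recording of the wrong phrase, or the right phrase spoken by the wrong person, is rejected.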
According to the embodiments of the present disclosure, the transaction operation for the transaction target object is performed only in the case where the facial features of the living being match the pre-entered facial features, the iris features match the pre-entered iris features, and the voiceprint features match the pre-entered voiceprint features, which can further improve the security of authentication.
Because voiceprint recognition is a biometric technology, it is inherently affected by physical condition (for example, a cold), age, emotion, and the like, so its recognition rate is unstable. Moreover, speech containing voiceprint features is very easy to acquire and can be recorded without the speaker's awareness. As a single authentication means it therefore offers low security and is not suitable as the confirmation step in a payment scenario. Compared with using voiceprint recognition alone, the present disclosure can further improve the security of authentication.
According to an embodiment of the present disclosure, to improve user convenience, voiceprint authentication may be performed at each step of the transaction flow; for example, each time the user confirms adding an item to the shopping cart or confirms submitting an order, the processor 220 may automatically compare the captured voiceprint against the standard voiceprint features stored in the memory 230. For face recognition, the face may be captured in real time during the shopping process, and the processor 220 may automatically compare it against the standard facial features stored in the memory 230.
For iris recognition, the iris image may likewise be captured in real time during the shopping process, and the processor 220 may automatically compare it against the standard iris features stored in the memory 230.
According to an embodiment of the present disclosure, the microphone 240 is further configured to obtain voice information input by the user, wherein the voice information is at least used for generating an order containing the target object.
The processor 220 is further configured to generate an order containing the target object according to the voice information; and, in the case where the biometric information matches the pre-entered standard biometric information, to perform a payment operation for paying the order.
According to embodiments of the present disclosure, a user's shopping process generally proceeds as follows: first learn information such as the function or price of an item, then confirm adding the goods or services to the shopping cart, confirm submitting the order, and finally pay for the order. When one step is completed and the next is entered, the user generally confirms by voice, for example confirming the addition to the cart or confirming the order submission. After receiving the user's voice information, the microphone 240 sends it to the processor 220, and the processor 220 can generate an order containing the target object according to the voice information. Once the payment step is entered, the sound box 200 may prompt the user, by voice or on screen (for a sound box with a screen), to enter the authentication process.
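The voice-driven flow above (browse, add to cart, submit order, pay) can be sketched as a small state machine. The state names and command phrases below are assumptions made for illustration; the patent does not specify exact wording.

```python
# Illustrative state machine for the voice-driven shopping flow:
# browsing -> cart -> order_created -> authenticating (payment check).
# State names and command phrases are assumptions, not taken from the patent.

TRANSITIONS = {
    ("browsing", "add to cart"): "cart",
    ("cart", "submit order"): "order_created",
    ("order_created", "confirm payment"): "authenticating",
}

def handle_voice_command(state, command):
    """Advance the flow if the command is valid in the current state."""
    return TRANSITIONS.get((state, command), state)

# Walk the happy path: each voice confirmation moves to the next step.
state = "browsing"
for cmd in ("add to cart", "submit order", "confirm payment"):
    state = handle_voice_command(state, cmd)
```

An out-of-order command (e.g. "confirm payment" while still browsing) leaves the state unchanged, matching the step-by-step confirmation described in the text.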
According to the embodiment of the disclosure, the user can operate the sound box through voice and can generate an order, so that shopping can be completed on the sound box, and a transaction scene is expanded.
According to the embodiment of the present disclosure, the pre-entered standard biometric information further includes a mouth shape feature, wherein the camera 210 is further configured to obtain the mouth shape feature when the living being makes a sound. The processor 220 is further configured to compare the mouth shape feature when the living being makes a sound with the pre-entered mouth shape feature, and to determine whether the two match; and to perform a transaction operation for the transaction target object in the case where the mouth shape feature when the living being makes a sound matches the pre-entered mouth shape feature.
According to an embodiment of the present disclosure, the camera 210 is further used to obtain the mouth shape features when the living being makes a sound, the mouth shape features including, but not limited to, the degree to which the mouth opens when speaking, the shape of the mouth, and the like. The pre-entered mouth shape feature may be the mouth shape of a living being uttering a predetermined sound, for example the user's mouth shape when saying "confirm payment", when saying a specific payment password, or when saying some other password.
According to the embodiment of the present disclosure, when determining whether the mouth shape feature when the living being makes a sound matches the pre-entered mouth shape feature, the security of authentication may be further improved by combining this check with one or more of the above face recognition, iris recognition, and voiceprint recognition.
According to the embodiment of the present disclosure, the camera 210 may also capture a mouth shape deliberately made by the living being without making a sound; the specific authentication is as described above and is not repeated here.
According to the embodiment of the disclosure, authenticating through the mouth shape features of a living being increases the variety of available authentication modes. Moreover, during a user's transaction the mouth shape can be collected naturally by the camera, so the matching and authentication work can be completed quickly, improving transaction efficiency.
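One minimal way to compare mouth shapes, under stated assumptions, is to represent each utterance as a time series of mouth-opening ratios and compare it frame by frame against the enrolled series for the same phrase. The representation and the 0.1 tolerance are illustrative choices, not the patent's method.

```python
# Hedged sketch of mouth-shape matching: both captured and enrolled inputs
# are assumed to be equal-length sequences of per-frame mouth-opening ratios
# (0.0 = closed, 1.0 = fully open). The tolerance is an arbitrary example.

def mouth_shapes_match(captured, enrolled, tolerance=0.1):
    """Match if the per-frame mean absolute difference is within tolerance."""
    if len(captured) != len(enrolled):
        return False
    mean_diff = sum(abs(c - e) for c, e in zip(captured, enrolled)) / len(enrolled)
    return mean_diff <= tolerance

# Enrolled mouth-opening trace for a predetermined phrase (illustrative):
enrolled_shape = [0.2, 0.6, 0.4, 0.1]
```

A capture close to the enrolled trace matches; a very different trace, or one of a different length, is rejected.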
Fig. 6 schematically shows a flow chart of a transaction method according to an embodiment of the present disclosure.
As shown in fig. 6, the method includes operations S210 to S230.
In operation S210, biometric information is acquired through a sound box provided with a camera.
In operation S220, the biometric information is compared with the pre-entered standard biometric information to determine whether the biometric information matches the pre-entered standard biometric information.
In operation S230, in the case where the biometric information matches the pre-entered standard biometric information, a transaction operation for the transaction target object is performed.
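The three operations S210 to S230 above can be sketched as one pipeline: acquire, compare, and transact only on a match. The callables below are assumptions standing in for the camera capture, the processor's matcher, and the transaction step.

```python
# Illustrative pipeline for operations S210-S230. The capture, matcher, and
# transact callables are hypothetical stand-ins for the real hardware and
# processing described in the disclosure.

def transaction_pipeline(capture, matches_standard, transact):
    """Run S210 (acquire) -> S220 (compare) -> S230 (transact on match)."""
    biometric = capture()              # S210: acquire via the camera
    if matches_standard(biometric):    # S220: compare with enrolled standard
        return transact()              # S230: perform the transaction
    return None                        # no match: refuse the transaction
```

Structuring the flow this way keeps the decision point in one place: the transaction callable is never invoked unless the comparison succeeds.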
According to the embodiment of the disclosure, transaction authentication is performed through a sound box provided with a camera, and transactions such as shopping can be carried out directly through the sound box. This at least partially solves the technical problem in the related art that, when a user operates a sound box to complete an operation with high security requirements, the existing sound box can hardly guarantee a safe and effective operating environment due to hardware or algorithm limitations, thereby achieving the technical effects of improving transaction security and expanding transaction scenarios.
The method shown in fig. 6 is further described with reference to fig. 7-11 in conjunction with specific embodiments.
Fig. 7 schematically illustrates a flow chart of pre-entry of standard biometric information according to an embodiment of the present disclosure.
As shown in fig. 7, the method includes operations S240 to S250.
In operation S240, standard biometric information is pre-entered through a speaker equipped with a camera, wherein the pre-entered standard biometric information includes facial features and iris features.
In operation S250, the pre-entered standard biometric information is stored in the sound box provided with the camera.
According to the embodiment of the disclosure, the pre-entered standard biological characteristic information is stored in the sound box, so that the standard biological characteristic can be rapidly acquired during authentication, and the authentication efficiency is improved.
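The pre-entry (S240) and on-device storage (S250) steps can be sketched as a small local feature store: enrolled standard features live on the sound box itself, so authentication can read them without a network round trip. The dict-backed class is an illustrative assumption, not the actual memory layout.

```python
# Hedged sketch of on-device enrollment storage (S240/S250). A dict stands in
# for the sound box's memory 230; real devices would persist and likely
# encrypt these features.

class OnDeviceFeatureStore:
    """In-memory stand-in for the sound box's feature memory."""

    def __init__(self):
        self._features = {}

    def enroll(self, modality, feature):
        """Store the pre-entered standard feature for a modality."""
        self._features[modality] = list(feature)

    def get(self, modality):
        """Fetch the enrolled feature, or None if not enrolled."""
        return self._features.get(modality)

store = OnDeviceFeatureStore()
store.enroll("face", [0.1, 0.2])
store.enroll("iris", [0.3, 0.4])
```

Keeping the features local is what enables the "rapid acquisition during authentication" benefit the text describes.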
According to the embodiment of the present disclosure, acquiring the biometric information through the sound box provided with a camera includes: acquiring the facial features and iris features of a living being through the camera;
determining whether the biometric information matches the pre-entered standard biometric information comprises:
comparing the facial features of the living being with the pre-entered facial features, and determining whether the facial features of the living being match the pre-entered facial features;
comparing the iris features of the living being with the pre-entered iris features, and determining whether the iris features of the living being match the pre-entered iris features;
performing a transaction operation for transacting the target object includes: performing a transaction operation for transacting the target object if the facial features of the living being match the pre-entered facial features and the iris features of the living being also match the pre-entered iris features.
Fig. 8 schematically shows a flow chart of a transaction method according to another embodiment of the present disclosure.
As shown in fig. 8, the method includes operations S211, S221, S222, and S231.
In operation S211, facial features and iris features of a living being are acquired through a camera.
In operation S221, the facial features of the living being are compared with the pre-entered facial features, and it is determined whether the facial features of the living being match the pre-entered facial features.
In operation S222, the iris features of the living being are compared with the pre-entered iris features, and it is determined whether the iris features of the living being match the pre-entered iris features.
In operation S231, in the case where the facial features of the living being match the pre-entered facial features and the iris features of the living being also match the pre-entered iris features, a transaction operation for the transaction target object is performed.
According to the embodiment of the disclosure, the sound box performs face recognition and iris recognition, and the transaction operation for the transaction target object is executed only when both recognitions succeed, so the security of authentication can be improved.
According to an embodiment of the present disclosure, in a case where the pre-entered standard biometric information further includes a voiceprint feature, wherein:
determining whether the biometric information matches the pre-entered standard biometric information further comprises: comparing the voiceprint features of the living being with the pre-entered voiceprint features, and determining whether the voiceprint features of the living being match the pre-entered voiceprint features; and
performing a transaction operation for transacting the target object further comprises: in the case where the facial features of the living being match the pre-entered facial features, the iris features of the living being match the pre-entered iris features, and the voiceprint features of the living being match the pre-entered voiceprint features, performing a transaction operation for the transaction target object.
Fig. 9 schematically shows a flow chart of a transaction method according to another embodiment of the present disclosure.
As shown in fig. 9, the method includes operations S223 and S232.
In operation S223, the voiceprint features of the living being are compared with the pre-entered voiceprint features, and it is determined whether the voiceprint features of the living being match the pre-entered voiceprint features.
In operation S232, in the case where the facial features of the living being match the pre-entered facial features, the iris features of the living being match the pre-entered iris features, and the voiceprint features of the living being match the pre-entered voiceprint features, a transaction operation for the transaction target object is performed.
According to the embodiments of the present disclosure, performing the transaction operation only when the facial features, iris features, and voiceprint features of the living being all match their pre-entered counterparts can further improve the security of authentication.
Fig. 10 schematically shows a flow chart of a transaction method according to another embodiment of the present disclosure.
As shown in fig. 10, the method includes operations S260 to S280.
In operation S260, voice information input by a user is acquired through a sound box provided with a camera, where the voice information is at least used to generate an order containing a target object.
In operation S270, an order containing the target object is generated according to the voice information.
In operation S280, in the case where the biometric information matches the pre-entered standard biometric information, a payment operation for paying the order is performed.
According to the embodiment of the disclosure, the user can operate the sound box through voice and can generate an order, so that shopping can be completed on the sound box, and a transaction scene is expanded.
According to an embodiment of the present disclosure, the pre-entered standard biometric information further includes a mouth shape feature, wherein:
acquiring the biometric information through the sound box provided with the camera further includes: acquiring, through the camera, the mouth shape features of the living being when it makes a sound;
determining whether the biometric information matches the pre-entered standard biometric information includes: comparing the mouth shape feature when the living being makes a sound with the pre-entered mouth shape feature, and determining whether the two match; and
performing a transaction operation for transacting the target object includes: in the case where the mouth shape feature when the living being makes a sound matches the pre-entered mouth shape feature, performing a transaction operation for the transaction target object.
Fig. 11 schematically shows a flow chart of a transaction method according to another embodiment of the present disclosure.
As shown in fig. 11, the method includes operations S212, S224, and S233.
In operation S212, the mouth shape feature when the living being makes a sound is acquired through the camera.
In operation S224, the mouth shape feature when the living being makes a sound is compared with the pre-entered mouth shape feature, and it is determined whether the two match.
In operation S233, in the case where the mouth shape feature when the living being makes a sound matches the pre-entered mouth shape feature, a transaction operation for the transaction target object is performed.
According to the embodiment of the disclosure, authenticating through the mouth shape features of a living being increases the variety of available authentication modes. Moreover, during a user's transaction the mouth shape can be collected naturally by the camera, so the matching and authentication work can be completed quickly, improving transaction efficiency.
Fig. 12 schematically illustrates a block diagram of a sound box suitable for implementing the above-described method according to an embodiment of the present disclosure. The sound box shown in fig. 12 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the sound box 500 according to the embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. The processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the sound box 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the sound box 500 may further include an input/output (I/O) interface 505, which is also connected to the bus 504. The sound box 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a camera 5061 and a microphone 5062; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out from it can be installed into the storage section 508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511.
The present disclosure also provides a computer-readable storage medium, which may be included in the sound box/system described in the above embodiments; or may exist separately and not be assembled into the enclosure/system. The computer readable medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, a computer-readable storage medium may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present disclosure. It should be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined without departing from the spirit and teaching of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (13)

1. A sound box, comprising:
the camera is used for acquiring biological characteristic information; and
the processor is used for comparing the biometric information with pre-entered standard biometric information and determining whether the biometric information matches the pre-entered standard biometric information; and performing a transaction operation for transacting the target object in the case where the biometric information matches the pre-entered standard biometric information.
2. The sound box of claim 1, wherein the camera is further configured to pre-enter standard biometric information, wherein the pre-entered standard biometric information includes facial features and iris features, the sound box further comprising:
and the memory is used for storing the pre-entered standard biological characteristic information.
3. The sound box of claim 2, wherein:
the camera acquires the facial features and iris features of a living being when acquiring the biological feature information;
the processor, when determining whether the biometric information matches the pre-entered standard biometric information, is configured to perform:
comparing the facial features of the living being with the pre-entered facial features, and determining whether the facial features of the living being match the pre-entered facial features;
comparing the iris features of the living being with the pre-entered iris features, and determining whether the iris features of the living being match the pre-entered iris features;
the processor is further configured to perform a transaction operation for transacting the target object if the facial features of the living being match the pre-entered facial features and the iris features of the living being also match the pre-entered iris features.
4. The sound box of claim 3, wherein the sound box further comprises a microphone for entering the voiceprint features of a living being, wherein the processor is further configured to perform:
comparing the voiceprint features of the living being with the pre-entered voiceprint features, and determining whether the voiceprint features of the living being match the pre-entered voiceprint features; and
performing a transaction operation for transacting the target object if the facial features of the living being match the pre-entered facial features, the iris features of the living being match the pre-entered iris features, and the voiceprint features of the living being match the pre-entered voiceprint features.
5. The sound box of claim 3, wherein:
the microphone is also used for acquiring voice information input by a user, wherein the voice information is at least used for generating an order containing the target object;
the processor is further used for generating an order containing the target object according to the voice information; and executing a payment operation for paying the order if the biometric information matches the pre-entered standard biometric information.
6. The sound box of claim 2, wherein the pre-entered standard biometric information further comprises a mouth shape feature, wherein:
the camera is also used for acquiring the mouth shape feature when the living being makes a sound;
the processor is further configured to compare the mouth shape feature when the living being makes a sound with the pre-entered mouth shape feature, and to determine whether the two match; and to perform a transaction operation for transacting the target object in the case where the mouth shape feature when the living being makes a sound matches the pre-entered mouth shape feature.
7. A transaction method, comprising:
acquiring biometric information through a sound box provided with a camera;
comparing the biometric information with pre-entered standard biometric information to determine whether the biometric information matches the pre-entered standard biometric information; and
performing a transaction operation for the transaction target object in the case where the biometric information matches the pre-entered standard biometric information.
8. The method of claim 7, wherein the method further comprises:
entering standard biometric information in advance through the sound box provided with the camera, wherein the pre-entered standard biometric information includes facial features and iris features; and
storing the pre-entered standard biometric information in the sound box provided with the camera.
9. The method of claim 8, wherein:
acquiring the biometric information through the sound box provided with the camera includes: acquiring the facial features and iris features of a living being through the camera;
determining whether the biometric information matches the pre-entered standard biometric information comprises:
comparing the facial features of the living being with the pre-entered facial features, and determining whether the facial features of the living being match the pre-entered facial features;
comparing the iris features of the living being with the pre-entered iris features, and determining whether the iris features of the living being match the pre-entered iris features;
performing a transaction operation for transacting the target object includes: performing a transaction operation for transacting the target object if the facial features of the living being match the pre-entered facial features and the iris features of the living being also match the pre-entered iris features.
10. The method of claim 9, in the case where the pre-entered standard biometric information further comprises a voiceprint feature, wherein:
determining whether the biometric information matches the pre-entered standard biometric information further comprises: comparing the voiceprint features of the living being with the pre-entered voiceprint features, and determining whether the voiceprint features of the living being match the pre-entered voiceprint features; and
performing a transaction operation for transacting the target object further comprises: performing a transaction operation for transacting the target object if the facial features of the living being match the pre-entered facial features, the iris features of the living being match the pre-entered iris features, and the voiceprint features of the living being match the pre-entered voiceprint features.
11. The method of claim 9, wherein the method further comprises:
acquiring voice information input by a user through the sound box provided with the camera, wherein the voice information is at least used for generating an order containing the target object;
generating an order containing the target object according to the voice information; and
executing a payment operation for paying the order if the biometric information matches the pre-entered standard biometric information.
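Claims 9 and 11 together describe an end-to-end flow: acquire voice input, generate an order containing the target object, and pay only after the biometric check succeeds. A minimal sketch, in which the parsing function and order fields are assumptions standing in for the device's real ASR/NLU and order pipeline:

```python
# Hypothetical end-to-end flow of claims 9 and 11: the voice information
# drives order generation, and payment is gated on the biometric match.

def parse_target_object(voice_text):
    # A real device would run speech recognition and intent parsing;
    # here we naively take the last word as the target object.
    return voice_text.strip().split()[-1]

def place_voice_order(voice_text, biometric_ok):
    # Generate an order containing the target object from the voice input.
    order = {"item": parse_target_object(voice_text), "paid": False}
    if biometric_ok:  # claim 11: pay the order only when biometrics match
        order["paid"] = True
    return order

print(place_voice_order("buy me a kettle", biometric_ok=True))   # paid
print(place_voice_order("buy me a kettle", biometric_ok=False))  # unpaid
```

Note that the order is still generated when the biometric check fails; only the payment operation is withheld, matching the structure of claim 11.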
12. The method of claim 8, wherein the pre-entered standard biometric information further comprises a mouth shape feature, wherein:
obtaining the biometric information through the sound box provided with the camera further comprises: acquiring, through the camera, the mouth shape feature of a living being while the living being makes a sound;
determining whether the biometric information matches the pre-entered standard biometric information comprises: comparing the mouth shape feature of the living being while making a sound with a mouth shape feature entered in advance, and determining whether the mouth shape feature of the living being while making a sound matches the mouth shape feature entered in advance;
performing a transaction operation for transacting the target object comprises: performing the transaction operation for transacting the target object in a case where the mouth shape feature of the living being while making a sound matches the mouth shape feature entered in advance.
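A rough sketch of the mouth-shape check in claim 12 (purely illustrative; the patent does not specify a representation): one common way to encode mouth shapes during speech is as a sequence of visemes, which can then be compared against the enrolled sequence with some tolerance. The viseme alphabet and mismatch tolerance below are assumptions:

```python
# Hypothetical sketch of claim 12: compare the mouth-shape (viseme)
# sequence captured while the living being makes a sound against the
# sequence entered in advance, as an extra anti-spoofing factor.

def mouth_shape_matches(live_visemes, enrolled_visemes, max_mismatches=1):
    # Sequences of different length cannot correspond to the same utterance.
    if len(live_visemes) != len(enrolled_visemes):
        return False
    mismatches = sum(a != b for a, b in zip(live_visemes, enrolled_visemes))
    return mismatches <= max_mismatches

print(mouth_shape_matches(["O", "E", "M"], ["O", "E", "M"]))  # True
print(mouth_shape_matches(["O", "E", "M"], ["A", "A", "M"]))  # False
```

Checking that the observed lip movement is consistent with the uttered sound is a standard defense against replaying a recorded voice in front of a photograph, which is presumably why the claim pairs the mouth shape with the sound emission.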
13. A non-volatile storage medium storing computer-executable instructions for implementing the transaction method of any one of claims 7 to 12 when executed.
CN201811043445.2A 2018-09-06 2018-09-06 Sound box and transaction method Pending CN110880116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811043445.2A CN110880116A (en) 2018-09-06 2018-09-06 Sound box and transaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811043445.2A CN110880116A (en) 2018-09-06 2018-09-06 Sound box and transaction method

Publications (1)

Publication Number Publication Date
CN110880116A true CN110880116A (en) 2020-03-13

Family

ID=69727251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811043445.2A Pending CN110880116A (en) 2018-09-06 2018-09-06 Sound box and transaction method

Country Status (1)

Country Link
CN (1) CN110880116A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103730120A (en) * 2013-12-27 2014-04-16 深圳市亚略特生物识别科技有限公司 Voice control method and system for electronic device
CN106131718A (en) * 2016-08-10 2016-11-16 微云(武汉)科技有限公司 A kind of intelligent sound box system and control method thereof
CN206894856U (en) * 2017-04-17 2018-01-16 杨丽 A kind of audio amplifier of family digital assistant
CN107886330A (en) * 2017-11-28 2018-04-06 北京旷视科技有限公司 Settlement method, apparatus and system
CN108320757A (en) * 2018-01-30 2018-07-24 上海思愚智能科技有限公司 Distribution information reminding method, device, intelligent sound box and storage medium
CN108389098A (en) * 2017-02-03 2018-08-10 北京京东尚科信息技术有限公司 Voice purchase method and system
CN207744117U (en) * 2018-01-10 2018-08-17 珠海迈科智能科技股份有限公司 A kind of speaker based on recognition of face
CN207802318U (en) * 2017-12-29 2018-08-31 广东贝声实业有限公司 A kind of intelligent sound box based on Internet of Things


Similar Documents

Publication Publication Date Title
US10923130B2 (en) Electronic device and method of performing function of electronic device
US10635893B2 (en) Identity authentication method, terminal device, and computer-readable storage medium
US10887690B2 (en) Sound processing method and interactive device
CN110288997B (en) Device wake-up method and system for acoustic networking
CN111415677B (en) Method, apparatus, device and medium for generating video
EP3673398B1 (en) Secure authorization for access to private data in virtual reality
US20190237076A1 (en) Augmentation of key phrase user recognition
CN108133707B (en) Content sharing method and system
US20190378494A1 (en) Method and apparatus for outputting information
JP2021514497A (en) Face recognition methods and devices, electronic devices and storage media
JP2018190413A (en) Method and system for processing user command to adjust and provide operation of device and content provision range by grasping presentation method of user speech
CN102903362A (en) Integrated local and cloud based speech recognition
US9129602B1 (en) Mimicking user speech patterns
WO2020051971A1 (en) Identity recognition method, apparatus, electronic device, and computer-readable storage medium
WO2016029806A1 (en) Sound image playing method and device
JP2020004381A (en) Information push method and apparatus
US20180068177A1 (en) Method, device, and non-transitory computer-readable recording medium
US10614815B2 (en) Conversational challenge-response system for enhanced security in voice only devices
CN113826135B (en) System, method and computer system for contactless authentication using voice recognition
CN113611318A (en) Audio data enhancement method and related equipment
CN110880116A (en) Sound box and transaction method
CN110634498A (en) Voice processing method and device
CN111415662A (en) Method, apparatus, device and medium for generating video
KR100976514B1 (en) Method for Operating Bank Robot with Electronic Documents Processing Application, Bank Robot, and Recording Medium
JP7271821B2 (en) Cloud voice conversion system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination