CN113496702A - Audio signal response method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number: CN113496702A
Application number: CN202010261372.5A
Authority: CN (China)
Prior art keywords: audio signal, target data, execution, character sequence, data
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 杨建民, 李雪芳, 严孝男, 鞠万奎, 曲真龙, 王超, 张仲良, 王茹
Current assignee: Beijing Jingdong Zhenshi Information Technology Co Ltd
Original assignee: Beijing Jingdong Zhenshi Information Technology Co Ltd
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to: CN202010261372.5A
Publication of: CN113496702A

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/06 - Buying, selling or leasing transactions
    • G06Q 30/0601 - Electronic shopping [e-shopping]
    • G06Q 30/0633 - Lists, e.g. purchase orders, compilation or processing
    • G06Q 30/0635 - Processing of requisition or of purchase orders
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command


Abstract

The present application provides an audio signal response method, an audio signal response device, a computer-readable medium, and an electronic device, and relates to the field of computer technology. The method comprises the following steps: when an audio signal is detected, converting the audio signal into a character sequence; identifying keywords in the character sequence, and performing data completion on the keywords according to a preset keyword mapping relation to obtain target data; determining an execution rule corresponding to the character sequence according to the target data; and responding to the audio signal according to the target data and the execution rule. The method can extract keywords from the character sequence recognized from the audio signal and perform data completion on them; when the audio signal is a voice signal entered by a user while placing an order, implementing the method can improve order-placement efficiency.

Description

Audio signal response method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an audio signal response method, an audio signal response apparatus, a computer-readable medium, and an electronic device.
Background
With the continuous development of e-commerce, people can shop online in addition to shopping offline in the traditional way. After selecting a commodity to purchase in shopping software, a user can enter the order page and complete the purchase by filling in a series of order information (such as the sender, the sender's contact number, the sender's address, the recipient's contact number, the recipient's address, and the like) and, after confirming that the information is correct, clicking to place the order. The user can complete the order-placement operation by filling in the order information manually, or by entering voice that triggers the system to fill in the order information automatically. For example, the user may say "buy this phone and send it to the company", so that the system recognizes the voice content and fills in the order information automatically. However, the information entered by voice is usually incomplete, which may mean that the order information cannot be filled in, or is filled in incorrectly, resulting in low order-placement efficiency.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present application and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Disclosure of Invention
The present application aims to provide an audio signal response method, an audio signal response device, a computer-readable medium, and an electronic device that extract keywords from the character sequence recognized from an audio signal and perform data completion on the extracted keywords; when the audio signal is a voice signal entered by a user while placing an order, order-placement efficiency can be improved.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
A first aspect of the present application provides an audio signal response method, which may include the steps of:
when the audio signal is detected, converting the audio signal into a character sequence;
identifying keywords in the character sequence, and performing data completion on the keywords according to a preset keyword mapping relation to obtain target data;
determining an execution rule corresponding to the character sequence according to the target data;
the audio signal is responded to according to the target data and the execution rule.
In an exemplary embodiment of the present application, converting an audio signal into a sequence of characters comprises:
and denoising the audio signal, and identifying a character sequence corresponding to the denoised audio signal.
In an exemplary embodiment of the present application, identifying a keyword in a sequence of characters comprises:
carrying out invalid information screening on the character sequence, and determining a vector corresponding to each character in a screening result;
and performing word segmentation processing on the screening result according to the vector distance between the vectors corresponding to the characters to obtain a plurality of keywords.
In an exemplary embodiment of the present application, performing data completion on a keyword according to a preset keyword mapping relationship to obtain target data includes:
converting the keywords from a text form to a character string form;
and performing data completion on the keywords in the character string form according to the preset keyword mapping relation to obtain target data.
In an exemplary embodiment of the present application, the audio signal corresponds to an order-placement request, the target data is the order data corresponding to the order-placement request, and determining the execution rule corresponding to the character sequence according to the target data includes:
determining, from the order data, delivery data indicating a delivery mode, and determining the execution rule corresponding to the character sequence from preset execution rules according to the delivery data.
In an exemplary embodiment of the present application, the execution rule includes a plurality of execution functions and an execution order of the execution functions, and the execution functions include at least one of an order acceptance check function, an order number acquisition function, a package pre-sorting function, a delivery-time calculation function, a resource pre-occupation function, a task dispatch function, and a bill generation function.
In an exemplary embodiment of the present application, responding to an audio signal according to target data and an execution rule includes:
and generating a ordering message according to the target data and the execution rule and uploading the ordering message to the system so that the system responds to an ordering request corresponding to the audio signal according to the ordering message.
According to a second aspect of the present application, there is provided an audio signal response apparatus including a character conversion unit, a keyword recognition unit, a data completion unit, an execution rule determination unit, and an audio signal response unit, wherein:
the character conversion unit is used for converting the audio signal into a character sequence when the audio signal is detected;
a keyword recognition unit for recognizing a keyword in the character sequence;
the data completion unit is used for performing data completion on the keywords according to a preset keyword mapping relation to obtain target data;
the execution rule determining unit is used for determining an execution rule corresponding to the character sequence according to the target data;
and the audio signal response unit is used for responding to the audio signal according to the target data and the execution rule.
In an exemplary embodiment of the present application, a character conversion unit converts an audio signal into a character sequence, including:
the character conversion unit carries out denoising processing on the audio signal and identifies a character sequence corresponding to the denoised audio signal.
In an exemplary embodiment of the present application, the keyword recognition unit recognizes a keyword in a character sequence, including:
the keyword recognition unit screens the invalid information of the character sequence and determines a vector corresponding to each character in the screening result;
and the keyword recognition unit carries out word segmentation processing on the screening result according to the vector distance between the vectors corresponding to the characters to obtain a plurality of keywords.
In an exemplary embodiment of the present application, the data completion unit performs data completion on the keywords according to a preset keyword mapping relation to obtain the target data, including:
the data completion unit converts the keywords from a text form to a character string form;
and the data completion unit completes the data of the keywords in the character string form according to the preset keyword mapping relation to obtain the target data.
In an exemplary embodiment of the present application, the audio signal corresponds to an order-placement request, the target data is the order data corresponding to the order-placement request, and the execution rule determining unit determines the execution rule corresponding to the character sequence according to the target data, including:
the execution rule determining unit determines, from the order data, delivery data indicating a delivery mode, and determines the execution rule corresponding to the character sequence from preset execution rules according to the delivery data.
In an exemplary embodiment of the present application, the execution rule includes a plurality of execution functions and an execution order of the execution functions, and the execution functions include at least one of an order acceptance check function, an order number acquisition function, a package pre-sorting function, a delivery-time calculation function, a resource pre-occupation function, a task dispatch function, and a bill generation function.
In an exemplary embodiment of the present application, an audio signal responding unit responds to an audio signal according to target data and an execution rule, including:
and the audio signal response unit generates a ordering message according to the target data and the execution rule and uploads the ordering message to the system, so that the system responds to an ordering request corresponding to the audio signal according to the ordering message.
According to a third aspect of the present application, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the audio signal response method as described in the first aspect of the embodiments above.
According to a fourth aspect of the present application, there is provided an electronic device comprising: one or more processors; storage means for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement the audio signal response method as described in the first aspect of the embodiments above.
The technical solutions provided by the present application can include the following beneficial effects:
in the technical solution provided by the embodiment of the present application, when an audio signal (e.g., a voice signal for ordering) is detected, the audio signal can be converted into a character sequence (e.g., the mobile phone is purchased and sent to a company); identifying keywords (such as mobile phones and companies) in the character sequence, and completing the keywords according to a preset keyword mapping relation to obtain target data; determining an execution rule corresponding to the character sequence according to the target data; the audio signal is responded to according to the target data and the execution rule. According to the scheme, on one hand, keyword extraction can be carried out on the character sequence corresponding to the recognized audio signal, data completion can be carried out on the character sequence, and when the audio signal is a voice signal input by a user during ordering, ordering efficiency can be improved by implementing the method; on the other hand, when the voice ordering method and the voice ordering device are applied to the product ordering field, the voice ordering function can be achieved, the learning cost of the ordering operation of a user is reduced, the use experience of the user is improved, and the use viscosity of the user is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic flow diagram of an audio signal response method according to an exemplary embodiment of the present application;
FIG. 2 illustrates a structural diagram of data completion for keywords according to an exemplary embodiment of the present application;
FIG. 3 illustrates a block diagram of determining an execution rule corresponding to a sequence of characters according to an exemplary embodiment of the present application;
FIG. 4 shows a schematic flow chart of another audio signal response method according to an exemplary embodiment of the present application;
fig. 5 illustrates a structural diagram of an audio signal response method according to an exemplary embodiment of the present application;
fig. 6 is a schematic diagram illustrating a page corresponding to an order-placement message according to an exemplary embodiment of the present application;
FIG. 7 shows a schematic flow chart diagram of another audio signal response method according to an exemplary embodiment of the present application;
fig. 8 is a block diagram illustrating a structure of an audio signal response apparatus according to an exemplary embodiment of the present application;
FIG. 9 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an exemplary embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Referring to fig. 1, fig. 1 is a flowchart illustrating an audio signal response method according to an exemplary embodiment of the present application, where the audio signal response method may be implemented by a server or a terminal device.
As shown in fig. 1, an audio signal response method according to an embodiment of the present application includes steps S110 to S140, where:
step S110: when an audio signal is detected, the audio signal is converted into a sequence of characters.
Step S120: and identifying keywords in the character sequence, and completing the data of the keywords according to a preset keyword mapping relation to obtain target data.
Step S130: and determining an execution rule corresponding to the character sequence according to the target data.
Step S140: the audio signal is responded to according to the target data and the execution rule.
By implementing the audio signal response method shown in fig. 1, keywords can be extracted from the character sequence recognized from the audio signal and data completion can be performed on them. In addition, when the present application is applied to the field of product ordering, a voice order-placement function can be realized, which reduces the user's learning cost for order-placement operations, improves the user experience, and increases user stickiness.
The following describes the steps in detail:
in step S110, when an audio signal is detected, the audio signal is converted into a character sequence.
The audio signal may be a voice signal input by a user. In addition, the character sequence may include characters such as Chinese, English, numbers and symbols, which is not limited in the embodiments of the present application. For example, the character sequence may be: "Hi, I want to send a mobile phone from home to Lao Zhang; pick it up at home this morning and deliver it tomorrow, freight collect."
In addition, optionally, before converting the audio signal into the character sequence, the method may further include the following step: when a user operation for voice input is detected, a voice acquisition function is triggered and the audio signal input by the user is acquired. The manner of detecting the user operation for voice input may specifically be: detecting whether a virtual control on the touch screen used to represent voice acquisition is triggered, and if so, determining that the user operation is detected; or detecting whether the voice assistant is triggered, and if a voice signal in which the user invokes the voice assistant is detected (e.g., "voice assistant, please help me place an order automatically"), determining that the user operation is detected.
In this embodiment, optionally, converting the audio signal into a character sequence includes:
and denoising the audio signal, and identifying a character sequence corresponding to the denoised audio signal.
The manner of denoising the audio signal may specifically be: detecting, through a noise detection network, the noise probability corresponding to each moment in the audio signal; smoothing the noise probabilities over time; marking the moments at which the smoothed probability is greater than a preset threshold with a first value (e.g., 1) and the moments at which it is less than or equal to the preset threshold with a second value (e.g., 0), thereby obtaining a noise sequence corresponding to the audio signal; and denoising the audio signal according to the noise sequence. The noise probability indicates the probability that the audio signal at that moment does not contain noise.
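As an illustration only, the following sketch shows the smoothing-and-thresholding step described above in Python; it assumes that a noise detection network has already produced one probability per frame (the network itself is not shown), and the window size and threshold are arbitrary example values rather than values defined by the present application.

```python
import numpy as np

def build_noise_sequence(frame_probs, window=5, threshold=0.5):
    """Smooth per-frame probabilities and mark frames above the threshold with 1."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(frame_probs, kernel, mode="same")  # smoothing step
    return (smoothed > threshold).astype(int)                 # noise sequence of 0/1 values

def denoise(frames, frame_probs, threshold=0.5):
    """Keep only the frames marked 1 (treated here as frames without noise)."""
    mask = build_noise_sequence(np.asarray(frame_probs), threshold=threshold)
    return [frame for frame, keep in zip(frames, mask) if keep == 1]
```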
In addition, optionally, the manner of identifying the character sequence corresponding to the denoised audio signal may specifically be: calculating the audio features of the denoised audio signal from characteristic parameters, and determining the character sequence corresponding to those audio features; the characteristic parameters may include the pitch period, formants, short-time average energy or amplitude, linear prediction coefficients (LPC), perceptual linear prediction coefficients (PLP), short-time average zero-crossing rate, linear prediction cepstral coefficients (LPCC), autocorrelation functions, Mel-frequency cepstral coefficients (MFCC), wavelet transform coefficients, empirical mode decomposition (EMD) coefficients, and gammatone filter cepstral coefficients (GFCC).
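The sketch below illustrates just one of the characteristic parameters listed above (MFCC), assuming the librosa library is available; the acoustic model that maps the features to a character sequence is only a hypothetical stand-in and is not defined by the present application.

```python
import librosa

def audio_to_characters(path, acoustic_model, sample_rate=16000):
    """Extract MFCC features from denoised audio and decode them into a character sequence."""
    signal, sr = librosa.load(path, sr=sample_rate)           # denoised audio assumed
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)   # shape: (13, frames)
    return acoustic_model.decode(mfcc.T)                      # hypothetical decoder stand-in
```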
Therefore, by implementing the optional embodiment, the efficiency of recognizing the character sequence corresponding to the audio signal can be improved by performing denoising processing on the audio signal.
In step S120, the keywords in the character sequence are identified, and data completion is performed on the keywords according to a preset keyword mapping relationship to obtain target data.
Following the character-sequence example above, the keywords may be: send, this morning, at home, Lao Zhang, mobile phone, tomorrow delivery, and freight collect. In addition, the preset keyword mapping relation is used to represent the correspondence between a keyword and its complete data; for example, in the preset keyword mapping relation the keyword "home" may correspond to the complete data "Room 3001, Unit 3, Building 3, Laowang Residential Compound, Chaoyang District, Beijing". The set of complete data corresponding to all the keywords can be taken as the target data.
In this embodiment of the present application, optionally, identifying a keyword in a character sequence includes:
carrying out invalid information screening on the character sequence, and determining a vector corresponding to each character in a screening result;
and performing word segmentation processing on the screening result according to the vector distance between the vectors corresponding to the characters to obtain a plurality of keywords.
Based on the character-sequence example above, the invalid information may be the parts of the sentence that carry no order information, such as "Hi", "I want to", "from" and "to", leaving only the informative characters. The manner of filtering invalid information out of the character sequence may specifically be: matching the character sequence against a preset dictionary and filtering out the invalid information according to the matching result; the preset dictionary may include various types of keywords, such as sender-type keywords, recipient-type keywords (e.g., Lao Zhang), pickup-time keywords (e.g., this afternoon), product keywords (e.g., mobile phone), and value-added-service keywords (e.g., freight collect). In addition, the manner of determining the vector corresponding to each character in the filtered result may be: determining all characters in the filtered result and computing a feature vector for each character. Further, the manner of performing word segmentation on the filtered result according to the vector distance between the vectors corresponding to the characters to obtain the plurality of keywords may specifically be: calculating the vector distance between adjacent characters according to the order in which the characters appear and the vector corresponding to each character, and splitting the filtered result by comparing each vector distance with a preset distance, thereby obtaining a plurality of keywords (e.g., this morning, at home, Lao Zhang, mobile phone, tomorrow delivery, freight collect); the vector distance may be a cosine distance or a Euclidean distance, which is not limited in the embodiments of the present application.
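A minimal sketch of the word-segmentation step is given below, assuming the character embeddings are supplied by some pretrained model and using cosine distance with an example preset distance; it is illustrative only and not a definitive implementation of the embodiment.

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def segment_keywords(chars, vectors, preset_distance=0.4):
    """Group adjacent characters whose vectors are close; split where the distance is large."""
    keywords, current = [], chars[0]
    for i in range(1, len(chars)):
        if cosine_distance(vectors[i - 1], vectors[i]) <= preset_distance:
            current += chars[i]       # adjacent vectors are close: same keyword
        else:
            keywords.append(current)  # distance exceeds the preset value: split here
            current = chars[i]
    keywords.append(current)
    return keywords
```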
Therefore, by implementing the optional embodiment, the user requirements can be determined by extracting the keywords in the audio signal input by the user, and the efficiency of responding to the audio signal is improved.
In the embodiment of the present application, optionally, performing data completion on the keyword according to a preset keyword mapping relationship to obtain target data, including:
converting the keywords from a text form to a character string form;
and performing data completion on the keywords in the character string form according to the preset keyword mapping relation to obtain target data.
The text form and the string form both represent the keyword; for example, the keyword "freight collect" is the text form, and its corresponding string form is "settleType=1". The target data is expressed in string form and includes the complete data corresponding to each keyword.
In addition, before performing data completion on the keywords in string form according to the preset keyword mapping relation to obtain the target data, the method may further include the following steps: extracting the term vocabulary corresponding to historical order-placement services, and constructing the preset keyword mapping relation from that vocabulary; the term vocabulary may include words corresponding to the sender, the recipient, the pickup time, the product, and value-added services. In addition, the preset keyword mapping relation may include a plurality of relational knowledge bases, such as a sender information knowledge base, a recipient information knowledge base, a time information knowledge base, a product information knowledge base, a value-added information knowledge base, and the like, which is not limited in the embodiments of the present application.
In addition, the manner of performing data completion on the keywords in string form according to the preset keyword mapping relation to obtain the target data may specifically be: matching the complete data corresponding to each string-form keyword according to the preset keyword mapping relation so as to complete that keyword, and then determining the set of complete data corresponding to all the keywords as the target data; that is, the target data includes the complete data corresponding to each keyword.
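The following sketch illustrates the text-form to string-form conversion and the completion lookup under the stated assumptions; the mapping entries shown are illustrative placeholders rather than a real knowledge base built from historical orders.

```python
# Illustrative contents only: the real mapping would be built from historical order data.
TEXT_TO_STRING = {
    "freight collect": "settleType=1",     # string form of the text-form keyword
    "at home": "senderAddress=home",
}

KEYWORD_MAPPING = {                        # preset keyword mapping relation (placeholder entries)
    "senderAddress=home": {"senderAddress": "Room 3001, Unit 3, Building 3, ..., Beijing"},
    "settleType=1": {"settleType": 1},
}

def complete_keywords(keywords):
    """Convert keywords to string form and look up their complete data."""
    target_data = {}
    for keyword in keywords:
        string_form = TEXT_TO_STRING.get(keyword, keyword)        # text form -> string form
        target_data.update(KEYWORD_MAPPING.get(string_form, {}))  # data completion
    return target_data                     # the set of complete data is the target data
```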
In addition, optionally, when the audio signal is used to generate a corresponding order-placement request, after performing data completion on the keywords in string form according to the preset keyword mapping relation to obtain the target data, the method may further include the following steps: determining the mandatory fields that are not covered by the target data, generating character information corresponding to those mandatory fields from the target data (e.g., "postage borne by the sender"), and filling in all the mandatory fields corresponding to the order-placement request according to the target data and the generated character information.
Referring to fig. 2, fig. 2 is a schematic structural diagram illustrating data completion of keywords according to an exemplary embodiment of the present application. As shown in fig. 2, the present application may be applied to voice order placement: when an audio signal 210 input by a user is detected, an order-placement request may be generated, the audio signal 210 may be converted into a character sequence, and the keywords in the character sequence may be identified: name 221 (e.g., Zhang San, Li Si), location 222 (e.g., Laozhang Compound, Laoli Compound), time 223 (e.g., this morning), product 224 (e.g., mobile phone), and value-added service 225 (e.g., freight collect). Further, the sender 231 (e.g., Zhang San), the recipient 232 (e.g., Li Si), the pickup time 233 (e.g., this morning), the product 234 (e.g., mobile phone), and the value-added service 235 (e.g., freight collect) can be determined from these keywords. Furthermore, the sender address corresponding to the sender 231 can be completed according to the sender information knowledge base 241; for example, the completed sender address may be Room 3001, Unit 3, Building 3, Laozhang Compound, Chaoyang District, Beijing. The recipient address corresponding to the recipient 232 may be completed according to the recipient information knowledge base 242; for example, the completed recipient address may be Room 3001, Unit 3, Building 3, Laoli Compound, Shanghai. The pickup time 233 can be completed according to the time information knowledge base 243; for example, the completed pickup time 233 may be the morning of February 2, 2019. The information corresponding to the product 234 may be completed according to the product information knowledge base 244; for example, the completed product information may be a Xiaoju mobile phone (weight 256 g). And the value-added service 235 may be completed according to the value-added information knowledge base 245; for example, the completed value-added service 235 may be economy express (freight collect).
The manner of completing the sender address corresponding to the sender 231 according to the sender information knowledge base 241 may specifically be: completing the sender address through a GIS address resolution service together with the sender information knowledge base 241. The manner of completing the recipient address corresponding to the recipient 232 according to the recipient information knowledge base 242 may specifically be: completing the recipient address through the GIS address resolution service together with the recipient information knowledge base 242. A GIS (Geographic Information System) is a technical system that, supported by computer hardware and software, collects, stores, manages, computes, analyzes, displays and describes geographically distributed data over all or part of the Earth's surface (including the atmosphere).
It should be noted that the sender information knowledge base 241 is used to store information such as the sender, the sender address and the sender phone number; the recipient information knowledge base 242 is used to store information such as the recipient, the recipient address and the recipient phone number; the time information stored in the time information knowledge base 243 is used to complete the pickup time; the product information knowledge base 244 is used to store parameter information corresponding to products, such as weight and volume; and the value-added information knowledge base 245 is used to store the expressions corresponding to value-added services.
Therefore, by implementing this optional embodiment, performing data completion on the keywords makes it possible to determine the complete information required for placing an order even when the order information entered by the user is incomplete, which further improves the efficiency and accuracy of voice order placement.
In step S130, an execution rule corresponding to the character sequence is determined according to the target data.
The execution rule defines how the order-placement request corresponding to the character sequence is executed. The execution rule includes a plurality of execution functions and an execution order of the execution functions, and the execution functions include at least one of an order acceptance check function, an order number acquisition function, a package pre-sorting function, a delivery-time calculation function, a resource pre-occupation function, a task dispatch function, and a bill generation function.
In the embodiments of the present application, optionally, the audio signal corresponds to an order-placement request, the target data is the order data corresponding to the order-placement request, and determining the execution rule corresponding to the character sequence according to the target data includes:
determining, from the order data, delivery data indicating a delivery mode, and determining the execution rule corresponding to the character sequence from preset execution rules according to the delivery data.
The delivery data indicating the delivery mode may include, for example, the product type (e.g., express delivery), the payment type (e.g., Baitiao pre-authorized payment), and the waybill channel (e.g., economy express). In addition, the preset execution rules may include a plurality of execution rules, for example, execution rule 1: IF productType == 1 (express delivery) && inputType == "C2C" && payType == 6 (Baitiao pre-authorized payment) THEN ...; execution rule 2: IF productType == 1 (express delivery) && inputType == "B2C" && waybillChannel == 8 (economy express) THEN .... The execution functions included in execution rule 1, arranged in execution order, are: order acceptance check function → order number acquisition function → package pre-sorting function → delivery-time calculation function → resource pre-occupation function → task dispatch function. The execution functions included in execution rule 2, arranged in execution order, are: order acceptance check function → order number acquisition function → package pre-sorting function → task dispatch function → bill generation function.
In addition, the manner of determining the execution rule corresponding to the character sequence from the preset execution rules according to the delivery data may specifically be: determining the execution rule corresponding to the character sequence from the preset execution rules based on the Drools rule engine and the delivery data. Drools is an open-source rule engine written in Java.
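Since the present application names the Drools rule engine (a Java library), the plain-Python sketch below only illustrates the same IF/THEN selection logic using the field names from the example rules above; it is not the Drools API and is not a definitive implementation.

```python
EXECUTION_RULES = [
    {   # mirrors execution rule 1 above
        "when": lambda d: d.get("productType") == 1 and d.get("inputType") == "C2C"
        and d.get("payType") == 6,
        "then": ["order acceptance check", "order number acquisition", "package pre-sorting",
                 "delivery-time calculation", "resource pre-occupation", "task dispatch"],
    },
    {   # mirrors execution rule 2 above
        "when": lambda d: d.get("productType") == 1 and d.get("inputType") == "B2C"
        and d.get("waybillChannel") == 8,
        "then": ["order acceptance check", "order number acquisition", "package pre-sorting",
                 "task dispatch", "bill generation"],
    },
]

def select_execution_rule(delivery_data):
    """Return the execution functions, in execution order, of the first matching rule."""
    for rule in EXECUTION_RULES:
        if rule["when"](delivery_data):
            return rule["then"]
    return None   # no preset rule matched
```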
Referring to fig. 3, fig. 3 is a schematic structural diagram illustrating how an execution rule corresponding to a character sequence is determined according to an exemplary embodiment of the present application. As shown in fig. 3, the rule engine includes execution rule A 321, execution rule B 322, ..., and execution rule Z 323; execution rule A 321 may include execution functions a, b, c, d 331; execution rule B 322 may include execution functions a, b, c, e 332; and execution rule Z 323 may include execution functions a, c, d, e 333. After data completion is performed on the string-form keywords according to the preset keyword mapping relation to obtain the target data 310, an execution rule corresponding to the character sequence may be determined according to the target data 310; that execution rule may be execution rule A 321, execution rule B 322, ..., or execution rule Z 323. The target data 310 may be the order data corresponding to an order-placement request for the product.
Therefore, by implementing this optional embodiment, the corresponding execution rule can be determined from the order data, and the order-placement request corresponding to the audio signal can then be responded to according to that execution rule, realizing voice order placement, reducing the user's learning cost for order-placement operations, improving the user experience, and increasing user stickiness.
In step S140, the audio signal is responded to according to the target data and the execution rule.
In this embodiment, optionally, responding to the audio signal according to the target data and the execution rule includes:
and generating a ordering message according to the target data and the execution rule and uploading the ordering message to the system so that the system responds to an ordering request corresponding to the audio signal according to the ordering message.
The order-placement message includes the target data, and the order-placement message may be presented as a window or as a page. In addition, after generating the order-placement message according to the target data and the execution rule and uploading it to the system, the method may further include the following steps: outputting the page corresponding to the order-placement message, and uploading the order-placement message to the system after detecting a user operation indicating that the data has been confirmed as correct. The system may be a mid-platform order acceptance system, which executes the order-placement operation according to the execution rule corresponding to the order-placement message and the execution order of its execution functions, that is, responds to the order-placement request corresponding to the audio signal.
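A sketch of assembling the order-placement message and uploading it is shown below; the endpoint URL and the message field names are assumptions made for illustration and are not an interface defined by the present application.

```python
import json
import urllib.request

def submit_order_message(target_data, execution_functions,
                         endpoint="https://example.com/mid-platform/orders"):
    """Build the order-placement message and POST it to the (assumed) system endpoint."""
    message = {"orderData": target_data, "executionFunctions": execution_functions}
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # the system then handles the order request
        return json.loads(response.read().decode("utf-8"))
```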
Therefore, by implementing this optional embodiment, the order-placement request can be responded to through the generated order-placement message, realizing voice order placement, reducing the user's learning cost, improving the user experience, and increasing user stickiness.
Referring to fig. 4, fig. 4 is a flow chart illustrating another audio signal response method according to an exemplary embodiment of the present application. As shown in fig. 4, the flow chart of another audio signal response method includes steps S410 to S450, wherein:
step S410: and constructing a preset keyword mapping relation.
Step S420: and uploading to a receipt receiving cloud server.
Step S430: and identifying a character sequence corresponding to the audio signal.
Step S440: and matching target data corresponding to the keywords in the character sequence according to a preset keyword mapping relation and determining an execution rule corresponding to the character sequence.
Step S450: and generating a ordering message according to the target data and the execution rule and uploading the ordering message to the central station order receiving system.
Specifically, the term vocabulary corresponding to historical order-placement services can be extracted and the preset keyword mapping relation can be constructed from that vocabulary; the term vocabulary may include words corresponding to the sender, the recipient, the pickup time, the product, and value-added services. The preset keyword mapping relation may include a plurality of relational knowledge bases, such as a sender information knowledge base, a time information knowledge base, a product information knowledge base, a value-added information knowledge base, and the like. The preset keyword mapping relation can then be uploaded to the order acceptance cloud server so that it can be called when an audio signal is detected. When an audio signal is detected, the character sequence corresponding to it can be recognized, i.e., the audio signal is converted into a character sequence as described above. The keywords in the character sequence can then be identified, the preset keyword mapping relation can be called to match the complete data corresponding to each keyword, and the set of complete data is determined as the target data. Furthermore, the execution rule corresponding to the character sequence may be determined from the preset execution rules based on the delivery data in the target data that indicates the delivery mode. Finally, an order-placement message can be generated according to the target data and the execution rule and uploaded to the mid-platform order acceptance system, so that the mid-platform order acceptance system responds to the order-placement request according to the execution order of the execution functions in the execution rule.
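The sketch below illustrates how a preset keyword mapping relation might be assembled from term vocabularies extracted from historical orders before being uploaded to the order acceptance cloud server; the category names and sample entries are illustrative assumptions rather than content defined by the present application.

```python
def build_keyword_mapping(term_vocabulary):
    """term_vocabulary: {category: {keyword: complete data}} extracted from historical orders."""
    knowledge_bases = {"sender": {}, "recipient": {}, "time": {}, "product": {}, "value_added": {}}
    for category, entries in term_vocabulary.items():
        knowledge_bases.setdefault(category, {}).update(entries)
    return knowledge_bases

# Example entries (illustrative only) before uploading to the order acceptance cloud server.
preset_mapping = build_keyword_mapping({
    "sender": {"at home": "Room 3001, Unit 3, Building 3, ..., Beijing"},
    "value_added": {"freight collect": "settleType=1"},
})
```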
It can be seen that, by implementing the audio signal response method shown in fig. 4, keywords can be extracted from the character sequence recognized from the audio signal and data completion can be performed on them, and when the audio signal is a voice signal entered by the user while placing an order, implementing the present application can improve order-placement efficiency. In addition, when the present application is applied to the field of product ordering, a voice order-placement function can be realized, which reduces the user's learning cost for order-placement operations, improves the user experience, and increases user stickiness.
Referring to fig. 5, fig. 5 is a schematic structural diagram for implementing an audio signal response method according to an exemplary embodiment of the present application. As shown in fig. 5, the structure includes an order acceptance domain knowledge system and an intelligent order acceptance system. Specifically, when an audio signal input by the user (e.g., an order-placement voice input) is detected, the audio signal may be converted into an audio stream with the ffmpeg encoder and denoised, and the denoised result may then be compressed and uploaded to the order acceptance cloud system; there, the compressed audio signal is decompressed with ffmpeg, the corresponding character sequence is recognized, and word segmentation is performed on the character sequence to obtain a plurality of keywords. ffmpeg is an open-source program for recording and converting digital audio and video and for turning them into streams. The keywords are then converted from text form to string form, yielding the string-form keywords 503. Furthermore, the order-placement vocabulary corresponding to the string-form keywords 503 can be identified through the preset keyword mapping relation 501 stored in the order acceptance domain knowledge system, and that vocabulary is completed into the target data by the data completion module 505 in the intelligent order acceptance system. Furthermore, the execution rule corresponding to the character sequence may be selected from the preset execution rules 502 of the order acceptance domain knowledge system by the rule engine 504, according to the delivery data in the target data that indicates the delivery mode; the execution rule may include execution function A, execution function B, execution function C, and so on. Finally, an order-placement message may be generated according to the target data and the execution rule and uploaded to the mid-platform order acceptance system 506 in the intelligent order acceptance system, so that the mid-platform order acceptance system 506 responds to the order-placement request corresponding to the audio signal according to the order-placement message.
It can be seen that, by implementing the structure shown in fig. 5, keywords can be extracted from the character sequence recognized from the audio signal and data completion can be performed on them, and when the audio signal is a voice signal entered by the user while placing an order, implementing the present application can improve order-placement efficiency. In addition, when the present application is applied to the field of product ordering, a voice order-placement function can be realized, which reduces the user's learning cost for order-placement operations, improves the user experience, and increases user stickiness.
Referring to fig. 6 on the basis of fig. 1 to 5, fig. 6 is a schematic diagram of a page corresponding to an order-placement message according to an exemplary embodiment of the present application. As shown in fig. 6, the target data may include the sender (i.e., Zhang San), the sender's contact number (i.e., 156****6834), the sender's address (i.e., xxxxxxxxxxxxxxxxxxxx, Daxing District, Beijing), the recipient (i.e., Li Si), the recipient's contact number (i.e., 139****8343), the recipient's address (i.e., xxxxxxxxxxxxxxxxxx, Baoding), product information (i.e., 1 kg), and the pickup time (i.e., within 1 hour). Other mandatory fields may be populated based on the target data; these may include the cost (i.e., 12) and the expected delivery time (i.e., before 22:00 on the 17th).
Referring to fig. 7, fig. 7 is a flow chart illustrating another audio signal response method according to an exemplary embodiment of the present application. As shown in fig. 7, another audio signal response method includes the following steps: step S710 to step S770, wherein:
step S710: when the audio signal is detected, denoising the audio signal, and identifying a character sequence corresponding to the denoised audio signal; the audio signal corresponds to a request to place an order.
Step S720: and screening invalid information of the character sequence, and determining a vector corresponding to each character in a screening result.
Step S730: and performing word segmentation processing on the screening result according to the vector distance between the vectors corresponding to the characters to obtain a plurality of keywords.
Step S740: converting the keywords from text form to character string form.
Step S750: performing data completion on the string-form keywords according to the preset keyword mapping relation to obtain target data; the target data is the order data corresponding to the order-placement request.
Step S760: determining, from the order data, delivery data indicating a delivery mode, and determining the execution rule corresponding to the character sequence from preset execution rules according to the delivery data; the execution rule includes a plurality of execution functions and an execution order of the execution functions, and the execution functions include at least one of an order acceptance check function, an order number acquisition function, a package pre-sorting function, a delivery-time calculation function, a resource pre-occupation function, a task dispatch function, and a bill generation function.
Step S770: generating an order-placement message according to the target data and the execution rule and uploading the order-placement message to the system, so that the system responds to the order-placement request corresponding to the audio signal according to the order-placement message.
It should be noted that steps S710 to S770 correspond to the steps and embodiments in fig. 1, and for the specific implementation of steps S710 to S770, please refer to the steps and embodiments in fig. 1, which are not described herein again.
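For orientation, the compact sketch below chains the earlier sketches into the fig. 7 flow; every helper passed in (recognize_characters, is_invalid, embed, upload) is a hypothetical stand-in for the corresponding step rather than an API defined by the present application.

```python
def respond_to_audio(frames, frame_probs, recognize_characters, is_invalid, embed, upload):
    clean_frames = denoise(frames, frame_probs)                      # step S710: denoising
    chars = recognize_characters(clean_frames)                       # step S710: character sequence
    chars = [c for c in chars if not is_invalid(c)]                  # step S720: filter invalid info
    keywords = segment_keywords(chars, [embed(c) for c in chars])    # steps S720-S730: segmentation
    target_data = complete_keywords(keywords)                        # steps S740-S750: completion
    execution_functions = select_execution_rule(target_data)         # step S760: rule selection
    return upload(target_data, execution_functions)                  # step S770: order message upload
```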
It can be seen that, by implementing the audio signal response method shown in fig. 7, keywords can be extracted from the character sequence recognized from the audio signal and data completion can be performed on them, and when the audio signal is a voice signal entered by the user while placing an order, implementing the present application can improve order-placement efficiency. In addition, when the present application is applied to the field of product ordering, a voice order-placement function can be realized, which reduces the user's learning cost for order-placement operations, improves the user experience, and increases user stickiness.
Referring to fig. 8, fig. 8 is a block diagram illustrating an audio signal response apparatus according to an exemplary embodiment of the present application. The audio signal response apparatus includes a character conversion unit 801, a keyword recognition unit 802, a data complementing unit 803, an execution rule determination unit 804, and an audio signal response unit 805, in which:
a character conversion unit 801 for converting an audio signal into a character sequence when the audio signal is detected;
a keyword recognition unit 802 for recognizing a keyword in the character sequence;
a data completion unit 803, configured to perform data completion on the keywords according to a preset keyword mapping relationship to obtain target data;
an execution rule determining unit 804, configured to determine, according to the target data, an execution rule corresponding to the character sequence;
an audio signal response unit 805 for responding to the audio signal according to the target data and the execution rule.
It can be seen that, by implementing the audio signal response apparatus shown in fig. 8, keywords can be extracted from the character sequence recognized from the audio signal and data completion can be performed on them, and when the audio signal is a voice signal entered by the user while placing an order, implementing the present application can improve order-placement efficiency. In addition, when the present application is applied to the field of product ordering, a voice order-placement function can be realized, which reduces the user's learning cost for order-placement operations, improves the user experience, and increases user stickiness.
In an exemplary embodiment of the present application, the character conversion unit 801 converts an audio signal into a character sequence, including:
the character conversion unit 801 performs denoising processing on the audio signal, and identifies a character sequence corresponding to the denoised audio signal.
Therefore, by implementing the optional embodiment, the efficiency of recognizing the character sequence corresponding to the audio signal can be improved by performing denoising processing on the audio signal.
In an exemplary embodiment of the present application, the keyword recognition unit 802 recognizes a keyword in a character sequence, including:
the keyword recognition unit 802 screens out invalid information of the character sequence and determines a vector corresponding to each character in the screening result;
the keyword recognition unit 802 performs word segmentation processing on the filtered result according to the vector distance between vectors corresponding to the characters, and obtains a plurality of keywords.
Therefore, by implementing the optional embodiment, the user requirements can be determined by extracting the keywords in the audio signal input by the user, and the efficiency of responding to the audio signal is improved.
In an exemplary embodiment of the present application, the data completing unit 803 completes the data of the keyword according to a preset keyword mapping relationship to obtain the target data, including:
the data completion unit 803 converts the keyword from a text form to a character string form;
the data completion unit 803 completes the data of the keywords in the form of the character string according to the preset keyword mapping relationship to obtain target data.
Therefore, by implementing this optional embodiment, performing data completion on the keywords makes it possible to determine the complete information required for placing an order even when the order information entered by the user is incomplete, which further improves the efficiency and accuracy of voice order placement.
In an exemplary embodiment of the present application, the audio signal corresponds to an order-placement request, the target data is the order data corresponding to the order-placement request, and the execution rule determining unit 804 determines the execution rule corresponding to the character sequence according to the target data, including:
the execution rule determining unit 804 determines, from the order data, delivery data indicating a delivery mode, and determines the execution rule corresponding to the character sequence from preset execution rules according to the delivery data.
The execution rule includes a plurality of execution functions and an execution order of the execution functions, and the execution functions include at least one of an order acceptance check function, an order number acquisition function, a package pre-sorting function, a delivery-time calculation function, a resource pre-occupation function, a task dispatch function, and a bill generation function.
Therefore, by implementing this optional embodiment, the corresponding execution rule can be determined from the order data, and the order-placement request corresponding to the audio signal can then be responded to according to that execution rule, realizing voice order placement, reducing the user's learning cost for order-placement operations, improving the user experience, and increasing user stickiness.
In an exemplary embodiment of the present application, the audio signal response unit 805 responds to the audio signal according to the target data and the execution rule, including:
the audio signal response unit 805 generates an order placing message according to the target data and the execution rule and uploads the order placing message to the system, so that the system responds to the order placing request corresponding to the audio signal according to the order placing message.
Therefore, by implementing this optional embodiment, the order placing request can be responded to through the generated order placing message, thereby realizing voice ordering, reducing the learning cost for the user, improving the user experience, and further increasing user stickiness.
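A minimal sketch of this response step, under the assumption that the order placing message is a JSON document and that the upstream system is reached through an injected upload callback, is shown below; the message layout is illustrative only.

import json
from typing import Callable, Dict, List

def respond_to_audio_signal(order_data: Dict[str, str],
                            rule: List[Callable[[Dict[str, str]], None]],
                            upload: Callable[[str], None]) -> None:
    # Run the rule's execution functions in their prescribed order, build the
    # order placing message from the target data, and hand it to the system,
    # which then responds to the order placing request.
    for step in rule:
        step(order_data)
    message = json.dumps({"type": "order_placing", "payload": order_data})
    upload(message)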
Since the functional units of the audio signal response device in the exemplary embodiments of the present application correspond to the steps of the exemplary embodiments of the audio signal response method described above, for details not disclosed in the device embodiments of the present application, please refer to the above embodiments of the audio signal response method.
FIG. 9 illustrates a block diagram of a computer system 900 suitable for implementing an electronic device according to an exemplary embodiment of the present application. The computer system 900 of the electronic device shown in FIG. 9 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present application.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU)901 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for system operation are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as necessary, so that a computer program read from it can be installed into the storage section 908 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The above-described functions defined in the system of the present application are executed when the computer program is executed by a Central Processing Unit (CPU) 901.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not in any way limit the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the audio signal response method as described in the above embodiments.
For example, the electronic device may implement the following as shown in fig. 1: step S110: when the audio signal is detected, converting the audio signal into a character sequence; step S120: identifying keywords in the character sequence, and performing data completion on the keywords according to a preset keyword mapping relation to obtain target data; step S130: determining an execution rule corresponding to the character sequence according to the target data; step S140: the audio signal is responded to according to the target data and the execution rule.
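Tying the earlier sketches together, the hypothetical helper functions defined above can be chained to mirror steps S110 to S140; the empty embedding table, stop-character set, and threshold passed here are placeholders that a real deployment would configure, and the ASR stub must be replaced before the pipeline can actually run.

def handle_audio_signal(path: str) -> None:
    chars = audio_to_characters(path)                    # S110: audio signal -> character sequence
    keywords = extract_keywords(chars,
                                char_vectors={},         # placeholder embedding table
                                stop_chars=set(),        # placeholder invalid-character set
                                split_distance=1.0)      # placeholder threshold
    order_data = complete_keywords(keywords)             # S120: keywords -> target data
    rule = determine_execution_rule(order_data)          # S130: select execution rule
    respond_to_audio_signal(order_data, rule, upload=print)  # S140: respond to the audio signal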
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to the embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided into and embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An audio signal response method, comprising:
when an audio signal is detected, converting the audio signal into a character sequence;
identifying keywords in the character sequence, and performing data completion on the keywords according to a preset keyword mapping relationship to obtain target data;
determining an execution rule corresponding to the character sequence according to the target data;
responding to the audio signal according to the target data and the execution rule.
2. The method of claim 1, wherein converting the audio signal into a character sequence comprises:
denoising the audio signal, and identifying a character sequence corresponding to the denoised audio signal.
3. The method of claim 1, wherein identifying keywords in the character sequence comprises:
screening out invalid information from the character sequence, and determining a vector corresponding to each character in the screening result;
performing word segmentation processing on the screening result according to the vector distances between the vectors corresponding to the characters to obtain a plurality of keywords.
4. The method of claim 1, wherein performing data completion on the keywords according to a preset keyword mapping relationship to obtain target data comprises:
converting the keywords from a text form to a character string form;
performing data completion on the keywords in the character string form according to the preset keyword mapping relationship to obtain the target data.
5. The method according to claim 1, wherein the audio signal corresponds to an order placing request, the target data is order placing data corresponding to the order placing request, and determining the execution rule corresponding to the character sequence according to the target data comprises:
determining, from the order placing data, delivery data indicating a delivery method, and determining the execution rule corresponding to the character sequence from preset execution rules according to the delivery data.
6. The method of claim 5, wherein the execution rule comprises a plurality of execution functions and an execution sequence of the plurality of execution functions, and the execution functions comprise at least one of an order acceptance checking function, an order number obtaining function, a receiving pre-sorting function, an aging calculation function, a resource pre-occupation function, a task issuing function, and a bill generation function.
7. The method of claim 6, wherein responding to the audio signal according to the target data and the execution rule comprises:
generating an order placing message according to the target data and the execution rule, and uploading the order placing message to a system, so that the system responds to the order placing request corresponding to the audio signal according to the order placing message.
8. An audio signal response apparatus, comprising:
the character conversion unit is used for converting the audio signal into a character sequence when the audio signal is detected;
a keyword recognition unit for recognizing a keyword in the character sequence;
the data completion unit is used for performing data completion on the keywords according to a preset keyword mapping relation to obtain target data;
the execution rule determining unit is used for determining an execution rule corresponding to the character sequence according to the target data;
and the audio signal response unit is used for responding the audio signal according to the target data and the execution rule.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out an audio signal response method according to any one of claims 1 to 7.
10. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the audio signal response method of any one of claims 1 to 7.
CN202010261372.5A 2020-04-03 2020-04-03 Audio signal response method and device, computer readable medium and electronic equipment Pending CN113496702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010261372.5A CN113496702A (en) 2020-04-03 2020-04-03 Audio signal response method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN113496702A true CN113496702A (en) 2021-10-12

Family

ID=77995069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010261372.5A Pending CN113496702A (en) 2020-04-03 2020-04-03 Audio signal response method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113496702A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423275A (en) * 2017-06-27 2017-12-01 北京小度信息科技有限公司 Sequence information generation method and device
CN109346082A (en) * 2018-10-11 2019-02-15 平安科技(深圳)有限公司 Sales order acquisition methods, device, equipment and medium based on speech recognition
CN110310641A (en) * 2019-02-26 2019-10-08 北京蓦然认知科技有限公司 A kind of method and device for voice assistant
CN110414004A (en) * 2019-07-31 2019-11-05 阿里巴巴集团控股有限公司 A kind of method and system that core information extracts
CN110830665A (en) * 2019-11-12 2020-02-21 德邦物流股份有限公司 Voice interaction method and device and express service system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination