CN107516510B - Automatic voice testing method and device for intelligent equipment - Google Patents
- Publication number
- CN107516510B CN107516510B CN201710543138.XA CN201710543138A CN107516510B CN 107516510 B CN107516510 B CN 107516510B CN 201710543138 A CN201710543138 A CN 201710543138A CN 107516510 B CN107516510 B CN 107516510B
- Authority
- CN
- China
- Prior art keywords
- test
- voice
- state
- tested
- response result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/01—Assessment or evaluation of speech recognition systems
Abstract
The invention discloses an automatic voice testing method and device for intelligent equipment, wherein the method comprises the following steps: acquiring the current running state of the tested equipment; selecting a test voice to be played from a voice library according to the current running state of the tested equipment; playing the selected test voice to the tested device and acquiring a response result of the tested device to the test voice; and obtaining a voice test result according to the response result. By applying the scheme of the invention, an appropriate test voice can be selected from the voice library according to the current running state of the tested equipment. Compared with the prior-art mode of mechanically playing test voices in a loop, this is more consistent with the running state of the tested equipment and reduces the various exceptions that may arise during manual and semi-automatic testing. In addition, compared with manual testing, the testing efficiency is greatly improved.
Description
[ technical field ]
The invention relates to a computer application technology, in particular to an automatic voice testing method and device for intelligent equipment.
[ background of the invention ]
With the increasing maturity of voice recognition technology, more and more intelligent household appliances and smart home devices are appearing on the market, and these intelligent devices provide consumers with a more convenient interaction mode based on voice recognition technology. However, there is currently no convenient and easy-to-use automated testing tool for such intelligent voice devices; the conventional testing schemes in the industry are manual testing, or a semi-automatic mode of circularly playing audio with an external playback device. The main problems are as follows:
for manual testing, labor and time costs are high.
Although the semi-automatic mode of circularly playing audio reduces labor and time costs to a certain extent, mechanically playing audio is completely detached from the actual condition of the tested equipment and can cause the test audio to be out of order in time sequence, making the test results inaccurate.
[ summary of the invention ]
Aspects of the application provide an automated voice testing method, apparatus, device, and storage medium for intelligent equipment, which can improve the accuracy of test results and the efficiency of testing intelligent voice devices.
One aspect of the present application provides an automated voice testing method for an intelligent device, including:
acquiring the current running state of the tested equipment;
selecting a test voice to be played from a voice library according to the current running state of the tested equipment;
playing the selected test voice to the tested device and acquiring a response result of the tested device to the test voice;
and obtaining a voice test result according to the response result.
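The four steps above can be sketched as a minimal test loop. All names below (`FakeDevice`, `select_voice_list`, the state and response strings) are hypothetical illustrations; the patent does not prescribe a concrete API:

```python
# Hypothetical sketch of the four-step method: query state, select a
# matching test voice, play it, and judge the response. All names and
# values are illustrative, not taken from the patent.

def select_voice_list(voice_library, state):
    # Step 2: pick the audio list that matches the current running state.
    return voice_library[state]

def run_voice_test(device, voice_library, expected):
    state = device.get_state()                        # step 1: current state
    for voice in select_voice_list(voice_library, state):
        response = device.play(voice)                 # step 3: play, get response
        if response == expected:                      # step 4: compare
            return True
    return False

class FakeDevice:
    # Stand-in for the device under test, for illustration only.
    def get_state(self):
        return "to_be_awakened"

    def play(self, voice):
        return "awakened" if voice == "wake_word.wav" else "no_response"

library = {
    "to_be_awakened": ["noise.wav", "wake_word.wav"],
    "to_be_recognized": ["ask_weather.wav"],
}
result = run_voice_test(FakeDevice(), library, "awakened")
print(result)  # True
```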
The above-described aspect and any possible implementation manner further provide an implementation manner, where the obtaining of the current operating state of the device under test includes:
and sending a query request to the test server through the unique identifier of the tested equipment to acquire the current running state reported to the test server by the tested equipment.
The above-described aspect and any possible implementation further provide an implementation, where the current operating state includes: a state to be awakened and a state to be identified.
The above-described aspect and any possible implementation manner further provide an implementation manner, where the selecting, from a speech library, a test speech to be played according to a current operating state of a device under test includes:
if the current running state of the tested equipment is the state to be awakened, calling an awakening audio from the voice library;
and if the current running state of the tested equipment is the state to be identified, calling the identification audio from the voice library.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where playing the selected test voice to the device under test and acquiring a response result of the device under test to the test voice includes: if the current running state of the tested equipment is the state to be awakened, sequentially traversing the awakening audios in the voice library, playing the currently traversed awakening audio to the tested equipment and acquiring the response result of the tested equipment to the awakening audio until the tested equipment is awakened or traversed;
if the current running state of the tested equipment is the state to be identified, sequentially traversing the identification audios in the voice library, playing the currently traversed identification audio to the tested equipment and acquiring the response result of the tested equipment to the identification audio until the traversal is finished.
The above-described aspect and any possible implementation manner further provide an implementation manner, where obtaining a voice test result according to the response result includes: and comparing the response result with an expected response result to obtain a voice test result.
In another aspect of the present invention, an automated voice testing apparatus for intelligent devices is provided, which includes:
the operation state acquisition unit is used for acquiring the current operation state of the tested equipment;
the test voice selecting unit is used for selecting test voices to be played from a voice library according to the current running state of the tested equipment;
a response result obtaining unit, configured to play the selected test voice to the device under test and obtain a response result of the device under test to the test voice;
and the test unit is used for obtaining a voice test result according to the response result.
The above-mentioned aspect and any possible implementation manner further provide an implementation manner, where the operating state obtaining unit is specifically configured to:
and sending a query request to the test server through the unique identifier of the tested equipment to acquire the current running state reported to the test server by the tested equipment.
The above-described aspect and any possible implementation further provide an implementation, where the current operating state includes: a state to be awakened and a state to be identified.
As for the above-mentioned aspects and any possible implementation manner, there is further provided an implementation manner, where the test speech selecting unit is specifically configured to:
if the current running state of the tested equipment is the state to be awakened, calling an awakening audio from the voice library;
and if the current running state of the tested equipment is the state to be identified, calling the identification audio from the voice library.
The above-described aspect and any possible implementation further provide an implementation in which:
if the current running state of the tested device is the state to be awakened, the test voice selection unit sequentially traverses the awakening audios in the voice library, and the response result acquisition unit plays the currently traversed awakening audios to the tested device and acquires the response result of the tested device to the awakening audios until the tested device is awakened or traversed;
if the current running state of the tested device is the state to be identified, the test voice selection unit sequentially traverses the identification audios in the voice library, and the response result acquisition unit plays the currently traversed identification audios to the tested device and acquires the response results of the tested device to the identification audios until the traversal is completed.
The above-described aspect and any possible implementation further provide an implementation, where the test unit is specifically configured to:
and comparing the response result with an expected response result to obtain a voice test result.
In another aspect of the present invention, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
In another aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method as set forth above.
Based on the above description, it can be seen that, by adopting the scheme of the present invention, a suitable test voice can be selected from the voice library according to the current operation state of the device under test. Compared with the prior-art mode of mechanically playing test voices in a loop, this is more consistent with the operation state of the device under test and reduces the various exceptions that may arise during manual and semi-automatic testing. In addition, compared with manual testing, the testing efficiency is greatly improved.
[ description of the drawings ]
FIG. 1 is a flow chart of an automated voice testing method for smart devices in accordance with the present invention;
FIG. 2 is a block diagram of an automated voice testing apparatus for smart devices in accordance with the present invention;
fig. 3 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart of an embodiment of an automated voice testing method for an intelligent device according to the present invention, as shown in fig. 1, including the following steps:
in 101, the current operating state of the device under test is obtained.
Specifically, the test tool sends a query request to the test server through the unique identifier of the tested device, and obtains the current running state reported to the test server by the tested device through the test server; the current operating state includes: a state to be awakened and a state to be identified. The state to be awakened refers to a state that the tested equipment waits for voice awakening; the to-be-recognized state refers to a state in which the device under test is waiting for recognition of the input speech, and is also referred to as a listening state.
The device under test reports its current running state to the test server; the test server receives the reported state and records it according to the unique identifier of the tested equipment; upon receiving a query request sent by the test tool with the unique identifier of the tested equipment, the test server returns the current running state of the tested equipment to the test tool.
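The reporting-and-query protocol described above can be sketched with an in-memory stand-in for the test server. The class and method names are assumptions; the patent does not specify a transport or data model:

```python
# Illustrative in-memory test server: devices report their running state
# keyed by a unique identifier, and the test tool queries by that id.
# Names are hypothetical; the patent does not define this interface.

class TestServer:
    def __init__(self):
        self._states = {}  # unique device id -> last reported running state

    def report(self, device_id, state):
        # Called by the device under test to report its running state.
        self._states[device_id] = state

    def query(self, device_id):
        # Called by the test tool; returns the recorded state (or None).
        return self._states.get(device_id)

server = TestServer()
server.report("device-001", "to_be_awakened")  # device reports its state
print(server.query("device-001"))              # tool queries by unique id
```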
In one implementation of this embodiment, the test tool obtains its current operating state directly from the device under test.
At 102, a test voice to be played is selected from a voice library according to the current operation state of the device under test.
For example, if the current running state of the tested device is the state to be awakened, the test tool calls an awakening audio from the voice library;
and if the current running state of the tested equipment is the state to be identified, the test tool calls the identification audio from the voice library.
In 103, playing the selected test voice to the device under test and obtaining a response result of the device under test to the test voice.
If the current running state of the tested equipment is the state to be awakened, sequentially traversing the awakening audios in the voice library, playing the currently traversed awakening audio to the tested equipment and acquiring the response result of the tested equipment to the awakening audio until the tested equipment is awakened or the traversal is finished.
If the current running state of the tested equipment is the state to be identified, sequentially traversing the identification audios in the voice library, playing the currently traversed identification audio to the tested equipment and acquiring the response result of the tested equipment to the identification audio until the traversal is finished.
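The two traversal strategies can be sketched as follows, assuming a hypothetical device interface: wake-up traversal stops early once the device wakes, while recognition traversal always runs to completion:

```python
# Illustrative traversal logic for step 103. The device interface and
# the response strings are assumptions, not the patent's API.

class FakeDevice:
    # Wakes only on the designated wake audio; illustration only.
    def play(self, audio):
        return "awakened" if audio == "hi_robot.wav" else "no_response"

def traverse_wake_audios(device, wake_audios):
    # Play wake audios in order; stop as soon as the device wakes up.
    results = []
    for audio in wake_audios:
        response = device.play(audio)
        results.append((audio, response))
        if response == "awakened":  # early exit: the device is awake
            break
    return results

def traverse_recognition_audios(device, recognition_audios):
    # Play every recognition audio and record each response.
    return [(audio, device.play(audio)) for audio in recognition_audios]

wake_results = traverse_wake_audios(
    FakeDevice(), ["noise.wav", "hi_robot.wav", "unused.wav"])
print(len(wake_results))  # 2: traversal stopped after the wake-up
```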
Before starting the automated test, the audio in the voice library can be screened so that the tested equipment is tested within a specified voice range and for specified voice functions.
The audio may be recorded audio, or may be a speech signal obtained by converting text data into speech through Text-to-Speech (TTS) synthesis software.
Preferably, the test tool is preset with parameters of the automated voice test, and the parameters include: a list of wake-up tones for a voice wake-up test, a list of recognition tones for a voice recognition test, an expected response result parameter, a list of devices to be tested, and some other voice test parameters.
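Such preset parameters could take a shape like the following. The field names and values are invented for illustration; the patent only lists the parameter categories:

```python
# Hypothetical preset configuration for the automated voice test,
# mirroring the parameter list above. All names/values are illustrative.

test_config = {
    # wake-up tone list for the voice wake-up test
    "wake_audio_list": ["wake_01.wav", "wake_02.wav"],
    # recognition tone list for the voice recognition test
    "recognition_audio_list": ["ask_weather.wav", "play_music.wav"],
    # expected response result per recognition audio
    "expected_responses": {
        "ask_weather.wav": "weather_report",
        "play_music.wav": "music_playback",
    },
    # unique identifiers of the devices to be tested
    "devices_under_test": ["device-001", "device-002"],
}
```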
In 104, a voice test result is obtained according to the response result.
And comparing the response result with the expected response result to generate a test result report.
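The comparison step can be sketched as a per-case pass/fail judgment aggregated into a simple report. The report structure is an assumption; the patent does not define a report format:

```python
# Illustrative result judgment: compare each actual response with the
# expected one and aggregate into a report. Structure is hypothetical.

def build_report(responses, expected):
    # Compare actual responses with expected ones, case by case.
    cases = [{"audio": audio,
              "actual": actual,
              "expected": expected.get(audio),
              "passed": actual == expected.get(audio)}
             for audio, actual in responses.items()]
    passed = sum(c["passed"] for c in cases)
    return {"total": len(cases), "passed": passed, "cases": cases}

report = build_report(
    {"ask_weather.wav": "weather_report", "play_music.wav": "error"},
    {"ask_weather.wav": "weather_report", "play_music.wav": "music_playback"},
)
print(report["passed"], "/", report["total"])  # 1 / 2
```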
Preferably, the test report is notified to the tester by means of mail, short message, and the like.
By applying the scheme of the invention, the manpower of testers can be effectively freed: testers only need to set the test parameters before the test starts, start the test with one key, and review the test report after the test ends, with no manual participation required in the intermediate process. This greatly reduces the testing investment of testers, reduces the various exceptions that may arise during manual and semi-automatic testing, and effectively improves the testing efficiency of intelligent voice equipment.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
Fig. 2 is a structural diagram of an embodiment of an automatic voice testing apparatus for intelligent devices according to the present invention, as shown in fig. 2, including:
an operation state obtaining unit 201, configured to obtain a current operation state of the device under test.
Specifically, the test tool sends a query request to the test server through the unique identifier of the tested device, and obtains the current running state reported to the test server by the tested device through the test server; the current operating state includes: a state to be awakened and a state to be identified. The state to be awakened refers to a state that the tested equipment waits for voice awakening; the to-be-recognized state refers to a state in which the device under test is waiting for recognition of the input speech, and is also referred to as a listening state.
The device under test reports its current running state to the test server; the test server receives the reported state and records it according to the unique identifier of the tested equipment; upon receiving a query request sent by the test tool with the unique identifier of the tested equipment, the test server returns the current running state of the tested equipment to the test tool.
In one implementation of this embodiment, the test tool obtains its current operating state directly from the device under test.
A test voice selecting unit 202, configured to select a test voice to be played from a voice library according to the current operating state of the device under test. Specifically:
if the current running state of the tested equipment is the state to be awakened, calling an awakening audio from the voice library;
and if the current running state of the tested equipment is the state to be identified, calling the identification audio from the voice library.
A response result obtaining unit 203, configured to play the selected test voice to the device under test and obtain a response result of the device under test to the test voice.
If the current running state of the tested device is the state to be awakened, the test voice selection unit sequentially traverses the awakening audios in the voice library, and the response result acquisition unit plays the currently traversed awakening audios to the tested device and acquires the response result of the tested device to the awakening audios until the tested device is awakened or traversed.
If the current running state of the tested device is the state to be identified, the test voice selection unit sequentially traverses the identification audios in the voice library, and the response result acquisition unit plays the currently traversed identification audios to the tested device and acquires the response results of the tested device to the identification audios until the traversal is completed.
Before starting the automated test, the audio in the voice library can be screened so that the tested equipment is tested within a specified voice range and for specified voice functions.
The audio may be recorded audio, or may be a speech signal obtained by converting text data into speech through Text-to-Speech (TTS) synthesis software.
Preferably, the test tool is preset with parameters of the automated voice test, and the parameters include: a list of wake-up tones for a voice wake-up test, a list of recognition tones for a voice recognition test, an expected response result parameter, a list of devices to be tested, and some other voice test parameters.
The test unit 204 is configured to obtain a voice test result according to the response result.
And comparing the response result with the expected response result to generate a test result report.
Preferably, the test report is notified to the tester by means of mail, short message, and the like.
By applying the scheme of the invention, the manpower of testers can be effectively freed: testers only need to set the test parameters before the test starts, start the test with one key, and review the test report after the test ends, with no manual participation required in the intermediate process. This greatly reduces the testing investment of testers, reduces the various exceptions that may arise during manual and semi-automatic testing, and effectively improves the testing efficiency of intelligent voice equipment.
Based on the above introduction, the method of this embodiment can effectively free the manpower of testers: testers only need to set the test parameters before the test starts, start the test with one key, and review the test report after the test ends, with no manual participation required in the intermediate process. This greatly reduces the testing investment of testers, reduces the various exceptions that may arise during manual and semi-automatic testing, and effectively improves the testing efficiency of intelligent voice equipment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the server described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Fig. 3 illustrates a block diagram of an exemplary computer system/server 012 suitable for use in implementing embodiments of the invention. The computer system/server 012 shown in fig. 3 is only an example, and should not bring any limitations to the function and the scope of use of the embodiments of the present invention.
As shown in fig. 3, the computer system/server 012 is embodied as a general purpose computing device. The components of computer system/server 012 may include, but are not limited to: one or more processors or processing units 016, a system memory 028, and a bus 018 that couples various system components including the system memory 028 and the processing unit 016.
Computer system/server 012 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 012 and includes both volatile and nonvolatile media, removable and non-removable media.
Program/utility 040 having a set (at least one) of program modules 042 can be stored, for example, in memory 028, such program modules 042 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof might include an implementation of a network environment. Program modules 042 generally perform the functions and/or methodologies of embodiments of the present invention as described herein.
The computer system/server 012 may also communicate with one or more external devices 014 (e.g., keyboard, pointing device, display 024, etc.). In the present invention, the computer system/server 012 communicates with an external radar device, and may also communicate with one or more devices that enable a user to interact with the computer system/server 012, and/or with any device (e.g., network card, modem, etc.) that enables the computer system/server 012 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 022. Also, the computer system/server 012 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 020. As shown in fig. 3, the network adapter 020 communicates with the other modules of the computer system/server 012 via bus 018. It should be appreciated that although not shown in fig. 3, other hardware and/or software modules may be used in conjunction with the computer system/server 012, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 016 executes the programs stored in the system memory 028, thereby performing the functions and/or methods of the described embodiments of the present invention.
The computer program described above may be provided in a computer storage medium encoded with a computer program that, when executed by one or more computers, causes the one or more computers to perform the method flows and/or apparatus operations shown in the above-described embodiments of the invention.
With the development of time and technology, the meaning of "media" has become more and more extensive: the propagation path of computer programs is no longer limited to tangible media, and programs can also be downloaded directly from a network, among other channels. Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (12)
1. An automated voice testing method for an intelligent device, characterized by comprising the following steps:
acquiring a current running state of a device under test, wherein the current running state comprises: a state to be awakened or a state to be recognized;
selecting a test voice to be played from a voice library according to the current running state of the device under test;
playing the selected test voice to the device under test and acquiring a response result of the device under test to the test voice;
and obtaining a voice test result according to the response result.
2. The method of claim 1, wherein acquiring the current running state of the device under test comprises:
sending a query request to a test server by means of the unique identifier of the device under test, to acquire the current running state reported to the test server by the device under test.
3. The method of claim 1, wherein selecting the test voice to be played from the voice library according to the current running state of the device under test comprises:
if the current running state of the device under test is the state to be awakened, retrieving a wake-up audio from the voice library;
and if the current running state of the device under test is the state to be recognized, retrieving a recognition audio from the voice library.
4. The method of claim 3, wherein playing the selected test voice to the device under test and acquiring the response result of the device under test to the test voice comprises:
if the current running state of the device under test is the state to be awakened, sequentially traversing the wake-up audios in the voice library, playing the currently traversed wake-up audio to the device under test and acquiring the response result of the device under test to the wake-up audio, until the device under test is awakened or the traversal is finished;
and if the current running state of the device under test is the state to be recognized, sequentially traversing the recognition audios in the voice library, playing the currently traversed recognition audio to the device under test and acquiring the response result of the device under test to the recognition audio, until the traversal is finished.
5. The method of claim 2, wherein obtaining the voice test result according to the response result comprises: comparing the response result with an expected response result to obtain the voice test result.
6. An automated voice testing device for an intelligent device, characterized by comprising:
an operating state obtaining unit, configured to obtain a current running state of a device under test, wherein the current running state comprises: a state to be awakened or a state to be recognized;
a test voice selecting unit, configured to select a test voice to be played from a voice library according to the current running state of the device under test;
a response result obtaining unit, configured to play the selected test voice to the device under test and obtain a response result of the device under test to the test voice;
and a test unit, configured to obtain a voice test result according to the response result.
7. The automated voice testing device of claim 6, wherein the operating state obtaining unit is specifically configured to:
send a query request to a test server by means of the unique identifier of the device under test, to acquire the current running state reported to the test server by the device under test.
8. The automated voice testing device of claim 6, wherein the test voice selecting unit is specifically configured to:
retrieve a wake-up audio from the voice library if the current running state of the device under test is the state to be awakened;
and retrieve a recognition audio from the voice library if the current running state of the device under test is the state to be recognized.
9. The automated voice testing device of claim 6, wherein:
if the current running state of the device under test is the state to be awakened, the test voice selecting unit sequentially traverses the wake-up audios in the voice library, and the response result obtaining unit plays the currently traversed wake-up audio to the device under test and obtains the response result of the device under test to the wake-up audio, until the device under test is awakened or the traversal is finished;
and if the current running state of the device under test is the state to be recognized, the test voice selecting unit sequentially traverses the recognition audios in the voice library, and the response result obtaining unit plays the currently traversed recognition audio to the device under test and obtains the response result of the device under test to the recognition audio, until the traversal is finished.
10. The automated voice testing device of claim 7, wherein the test unit is specifically configured to:
compare the response result with an expected response result to obtain the voice test result.
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
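The state-dependent flow of claims 1-5 (query the device state, select wake-up or recognition audio accordingly, traverse the voice library, and compare responses with expected results) can be sketched in Python. Everything below is an illustrative assumption, not the patent's implementation: the state strings, the `voice_library` dict layout, and the `query_state`/`play` callables are hypothetical stand-ins for the test server, device under test, and playback hardware.

```python
# Hypothetical sketch of the claimed automated voice test flow.
WAKE_STATE = "to_be_awakened"
RECOGNIZE_STATE = "to_be_recognized"

def run_voice_test(query_state, play, voice_library, expected_results):
    """One pass of the claimed flow: acquire the device's running state,
    select and play matching test voices, and derive a test result."""
    state = query_state()  # claim 2: state reported via a test server

    if state == WAKE_STATE:
        # Claim 4: traverse wake-up audios until the device is awakened
        # or the library is exhausted.
        for audio in voice_library[WAKE_STATE]:
            if play(audio) == "awakened":
                return {"state": state, "result": "pass"}
        return {"state": state, "result": "fail"}

    # Claim 4: traverse every recognition audio; claim 5: compare each
    # response with its expected response to obtain the test result.
    responses = [play(audio) for audio in voice_library[RECOGNIZE_STATE]]
    ok = all(r == e for r, e in zip(responses, expected_results))
    return {"state": state, "result": "pass" if ok else "fail"}
```

Separating `query_state` and `play` as callables mirrors the device/unit split of claims 6-9: the state query and the playback/response capture are independent responsibilities that a harness can stub for offline testing.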
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710543138.XA CN107516510B (en) | 2017-07-05 | 2017-07-05 | Automatic voice testing method and device for intelligent equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107516510A CN107516510A (en) | 2017-12-26 |
CN107516510B true CN107516510B (en) | 2020-12-18 |
Family
ID=60722249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710543138.XA Active CN107516510B (en) | 2017-07-05 | 2017-07-05 | Automatic voice testing method and device for intelligent equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107516510B (en) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109979444A (en) * | 2017-12-27 | 2019-07-05 | 深圳市优必选科技有限公司 | A kind of phonetic order automated testing method, terminal and device |
CN108206981A (en) * | 2017-12-29 | 2018-06-26 | 歌尔科技有限公司 | Pickup test method and equipment |
CN108228468A (en) * | 2018-02-12 | 2018-06-29 | 腾讯科技(深圳)有限公司 | A kind of test method, device, test equipment and storage medium |
CN108597494A (en) * | 2018-03-07 | 2018-09-28 | 珠海格力电器股份有限公司 | Tone testing method and device |
CN108877770B (en) * | 2018-05-31 | 2020-01-07 | 北京百度网讯科技有限公司 | Method, device and system for testing intelligent voice equipment |
CN109147778A (en) * | 2018-07-24 | 2019-01-04 | 上海庆科信息技术有限公司 | A kind of method, apparatus and system of intelligent socket tone testing |
CN108816801A (en) * | 2018-07-24 | 2018-11-16 | 上海庆科信息技术有限公司 | A kind of method, apparatus and system of the lamp tone testing of intelligent sphere bubble |
CN108899012B (en) * | 2018-07-27 | 2021-04-20 | 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) | Voice interaction equipment evaluation method and system, computer equipment and storage medium |
CN109147761B (en) * | 2018-08-09 | 2022-05-13 | 北京易诚高科科技发展有限公司 | Test method based on batch speech recognition and TTS text synthesis |
CN109243425A (en) * | 2018-08-13 | 2019-01-18 | 百度在线网络技术(北京)有限公司 | Speech recognition test method, device, system, computer equipment and storage medium |
CN109003602B (en) * | 2018-09-10 | 2020-03-24 | 百度在线网络技术(北京)有限公司 | Voice product testing method, device, equipment and computer readable medium |
CN109119065B (en) * | 2018-09-10 | 2020-12-15 | 四川长虹电器股份有限公司 | Service intelligence quotient test scoring system and method for intelligent voice product |
CN109326305B (en) * | 2018-09-18 | 2023-04-07 | 易诚博睿(南京)科技有限公司 | Method and system for batch testing of speech recognition and text synthesis |
CN109448701A (en) * | 2018-09-19 | 2019-03-08 | 易诚博睿(南京)科技有限公司 | A kind of intelligent sound recognizes the result statistical system and method for semantic understanding |
CN109243426A (en) * | 2018-09-19 | 2019-01-18 | 易诚博睿(南京)科技有限公司 | A kind of automatization judgement voice false wake-up system and its judgment method |
CN109523990B (en) * | 2019-01-21 | 2021-11-05 | 未来电视有限公司 | Voice detection method and device |
CN109634872B (en) * | 2019-02-25 | 2023-03-10 | 北京达佳互联信息技术有限公司 | Application testing method, device, terminal and storage medium |
CN109712608B (en) * | 2019-02-28 | 2021-10-08 | 百度在线网络技术(北京)有限公司 | Multi-sound zone awakening test method, device and storage medium |
CN111798833B (en) * | 2019-04-04 | 2023-12-01 | 北京京东尚科信息技术有限公司 | Voice test method, device, equipment and storage medium |
CN110264995A (en) * | 2019-06-28 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | The tone testing method, apparatus electronic equipment and readable storage medium storing program for executing of smart machine |
CN112309430A (en) * | 2019-07-31 | 2021-02-02 | 广东美的制冷设备有限公司 | Household appliance and self-checking method and device thereof |
CN112802495A (en) * | 2019-11-13 | 2021-05-14 | 深圳市优必选科技股份有限公司 | Robot voice test method and device, storage medium and terminal equipment |
CN110808029A (en) * | 2019-11-20 | 2020-02-18 | 斑马网络技术有限公司 | Vehicle-mounted machine voice test system and method |
CN110838285A (en) * | 2019-11-20 | 2020-02-25 | 青岛海尔科技有限公司 | System, method and device for terminal voice test |
CN111159026A (en) * | 2019-12-23 | 2020-05-15 | 智车优行科技(北京)有限公司 | Intelligent voice system testing method and device and electronic equipment |
CN111462731A (en) * | 2020-03-27 | 2020-07-28 | 四川虹美智能科技有限公司 | Voice test system and test method thereof |
CN113628611A (en) * | 2020-05-07 | 2021-11-09 | 阿里巴巴集团控股有限公司 | Voice service test system, method, device and equipment |
CN113791545A (en) * | 2020-07-10 | 2021-12-14 | 北京沃东天骏信息技术有限公司 | Smart home equipment testing method and device, electronic equipment and readable storage medium |
CN112261214A (en) * | 2020-10-21 | 2021-01-22 | 广东商路信息科技有限公司 | Network voice communication automatic test method and system |
CN113220590A (en) * | 2021-06-04 | 2021-08-06 | 北京声智科技有限公司 | Automatic testing method, device, equipment and medium for voice interaction application |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103578463A (en) * | 2012-07-27 | 2014-02-12 | 腾讯科技(深圳)有限公司 | Automatic testing method and automatic testing device |
US20140344627A1 (en) * | 2013-05-16 | 2014-11-20 | Advantest Corporation | Voice recognition virtual test engineering assistant |
CN105792241A (en) * | 2014-12-26 | 2016-07-20 | 展讯通信(上海)有限公司 | Automatic test system and method and mobile terminal |
CN106559729A (en) * | 2015-09-25 | 2017-04-05 | 神讯电脑(昆山)有限公司 | MIC automatic recognition of speech test system and method |
CN106874185A (en) * | 2016-12-27 | 2017-06-20 | 中车株洲电力机车研究所有限公司 | A kind of automated testing method driven based on voiced keyword and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107516510B (en) | Automatic voice testing method and device for intelligent equipment | |
CN108717393B (en) | Application program testing method and mobile terminal | |
CN106294673B (en) | Method and system for analyzing log data in real time by user-defined rule | |
CN110764945B (en) | Crash log processing method, device, equipment and storage medium | |
CN107436844B (en) | Method and device for generating interface use case aggregate | |
CN107562637B (en) | Method, device, system and storage medium for software testing | |
CN111798833A (en) | Voice test method, device, equipment and storage medium | |
WO2019218464A1 (en) | Application program testing method and apparatus, and mobile terminal and medium | |
CN112416775B (en) | Software automatic test method and device based on artificial intelligence and electronic equipment | |
CN109637536B (en) | Method and device for automatically identifying semantic accuracy | |
CN111241111B (en) | Data query method and device, data comparison method and device, medium and equipment | |
CN112416803A (en) | Automatic testing method and device | |
CN111243580B (en) | Voice control method, device and computer readable storage medium | |
CN110322587B (en) | Evaluation recording method, device and equipment in driving process and storage medium | |
CN112306447A (en) | Interface navigation method, device, terminal and storage medium | |
CN110597704B (en) | Pressure test method, device, server and medium for application program | |
CN110312161B (en) | Video dubbing method and device and terminal equipment | |
CN113470618A (en) | Wake-up test method and device, electronic equipment and readable storage medium | |
CN112988580A (en) | Test process reproduction method, device, equipment and storage medium | |
CN115757014A (en) | Power consumption testing method and device | |
CN114093392A (en) | Audio labeling method, device, equipment and storage medium | |
CN113744712A (en) | Intelligent outbound voice splicing method, device, equipment, medium and program product | |
CN112652039A (en) | Animation segmentation data acquisition method, segmentation method, device, equipment and medium | |
CN115312032A (en) | Method and device for generating speech recognition training set | |
CN110968519A (en) | Game testing method, device, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Details of the transfer of patent right:
- Effective date of registration: 20210510
- Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing
- Patentee after: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.
- Patentee after: Shanghai Xiaodu Technology Co.,Ltd.
- Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing
- Patentee before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.