CN115995230A - Voice control method of ultrasonic equipment and ultrasonic equipment


Info

Publication number
CN115995230A
Authority
CN
China
Prior art keywords
voice
ultrasonic
voice command
instruction
target parameter
Prior art date
Legal status
Pending
Application number
CN202111219574.4A
Other languages
Chinese (zh)
Inventor
雷涛
李海瑞
周述文
刘智光
王武
Current Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN202111219574.4A
Publication of CN115995230A
Legal status: Pending

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a voice control method of an ultrasonic device and the ultrasonic device. The voice control method comprises: receiving a first voice instruction issued by a user, the first voice instruction comprising a target parameter and a first adjustment mode corresponding to the target parameter; executing a first adjustment operation on the target parameter according to the first voice instruction; and, within a preset time period after the first adjustment operation is executed, when a second voice instruction issued by the user is received, executing a second adjustment operation on the target parameter according to the second voice instruction, wherein the second voice instruction comprises a second adjustment mode corresponding to the target parameter and is shorter than the first voice instruction. After the ultrasonic device executes a long vocabulary command issued by the user, the user can continue to adjust the target parameter with multiple short vocabulary commands within the preset duration, which reduces the complexity of voice control for the user.

Description

Voice control method of ultrasonic equipment and ultrasonic equipment
Technical Field
The invention relates to the technical field of medical ultrasound, and in particular to a voice control method of an ultrasonic device and to the ultrasonic device.
Background
Ultrasonic equipment, as a visual, convenient, and noninvasive examination tool, is widely used in the medical field. In current ultrasonic equipment, examination items, examination indexes, and the like can be set through input devices such as keys and a mouse on a control panel; during an ultrasonic examination the doctor has to operate the ultrasonic probe while also setting the examination items and indexes, which makes the operation cumbersome.
In the related art, the ultrasonic equipment can be controlled by voice commands, reducing the doctor's manual operations. Applying voice recognition requires a pre-built voice command library: after a voice command issued by the doctor hits a command in the library, the ultrasonic equipment executes the corresponding control. However, some commands in the library are long and must be repeated many times, which makes them inconvenient for the doctor to use.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the invention provides a voice control method of ultrasonic equipment and the ultrasonic equipment, which can simplify voice instructions for controlling the ultrasonic equipment by a user and improve user experience.
In a first aspect, an embodiment of the present invention provides a method for controlling voice of an ultrasonic device, including:
receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
executing a first adjustment operation on the target parameter according to the first voice instruction;
and in a preset time period after the first adjustment operation is executed, when a second voice instruction sent by a user is received, executing the second adjustment operation on the target parameter according to the second voice instruction, wherein the second voice instruction comprises a second adjustment mode corresponding to the target parameter, and the length of the second voice instruction is shorter than that of the first voice instruction.
In a second aspect, an embodiment of the present invention provides a method for controlling voice of an ultrasonic apparatus, including:
receiving a control instruction input by a user through a contact type input device or a gesture recognition device, wherein the control instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
executing a first adjustment operation on the target parameter according to the control instruction;
and in a preset time period after the first adjustment operation is executed, when a voice instruction sent by a user is received, executing a second adjustment operation on the target parameter according to the voice instruction, wherein the voice instruction comprises a second adjustment mode corresponding to the target parameter, and the length of the voice instruction is shorter than that of the control instruction.
In a third aspect, an embodiment of the present invention provides a method for controlling voice of an ultrasonic apparatus, including:
receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a control object and a first adjustment mode corresponding to the control object;
executing a first adjustment operation on the control object according to the first voice instruction;
and executing a second adjustment operation on the control object according to a second voice command sent by a user when the second voice command is received within a preset time period after the first adjustment operation is executed, wherein the second voice command comprises a second adjustment mode corresponding to the control object and the length of the second voice command is shorter than that of the first voice command.
In a fourth aspect, an embodiment of the present invention provides an ultrasonic apparatus, including:
An ultrasonic probe;
the transmitting/receiving circuit is used for controlling the ultrasonic probe to transmit ultrasonic waves to an ultrasonic detection object and receive ultrasonic echoes to obtain ultrasonic echo signals;
the processor is used for processing the ultrasonic echo signals and obtaining an ultrasonic image of the ultrasonic detection object;
the display is used for displaying the ultrasonic image and/or a measurement result obtained based on the ultrasonic image;
the processor is further configured to perform the voice control method of the ultrasound apparatus according to the first aspect, the second aspect, or the third aspect.
In a fifth aspect, an embodiment of the present invention provides a voice control apparatus of an ultrasound device, including at least one processor and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the voice control method of the first, second or third aspects.
In a sixth aspect, embodiments of the present invention further provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the speech control method according to the first, second or third aspects.
The voice control method of the ultrasonic equipment provided by the embodiments of the invention has at least the following beneficial effects: after the ultrasonic equipment executes a long vocabulary command issued by the user, any short vocabulary command received within the preset time period is executed against the target parameter corresponding to that long vocabulary command, so the user can continuously adjust the target parameter with multiple short vocabulary commands, which reduces the complexity of voice control for the user. In addition, by limiting the time window in which short vocabulary commands are accepted, the embodiment of the invention reduces the misrecognition that would result from the ultrasonic equipment attempting to recognize external speech over a long period. The cooperation of long vocabulary commands and short vocabulary commands within the time window therefore improves both the convenience of controlling the ultrasonic equipment by voice and the accuracy with which the ultrasonic equipment recognizes short vocabulary commands.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical solutions of the present application and constitute a part of this specification. Together with the embodiments of the present application, they serve to explain the technical solutions and do not constitute a limitation thereof.
FIG. 1 is a block diagram of an ultrasound device provided in one embodiment of the present invention;
FIG. 2 is a general flow chart of a method of controlling speech for a control object according to one embodiment of the present invention;
FIG. 3 is a flow chart for determining a preset duration provided by one embodiment of the present invention;
FIG. 4 is a general flow chart of a method of speech control for target parameters provided by one embodiment of the present invention;
FIG. 5 is a flow chart of a first voice command matching voice command library according to one embodiment of the present invention;
FIG. 6 is a flow chart of a second voice command matching voice command library provided by one embodiment of the present invention;
FIG. 7 is a flow chart of a voice command matching method according to one embodiment of the present invention;
FIG. 8 is a general flow chart of another voice control method for target parameters provided by one embodiment of the present invention;
FIG. 9 is a structural connection diagram of an ultrasonic apparatus according to an embodiment of the present invention.
Detailed Description
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It should be understood that in the description of the embodiments of the present application, the meaning of a plurality (or multiple) is two or more, and that greater than, less than, exceeding, etc. is understood to not include the present number, and that greater than, less than, within, etc. is understood to include the present number.
An ultrasonic scanning device (hereinafter referred to as an ultrasonic device) transmits ultrasonic pulses to tissue in the human body based on the principle of ultrasonic pulse imaging and, by exploiting the reflection of ultrasonic waves at tissue interfaces, receives and processes the echoes carrying characteristic information of the tissue to obtain a visible ultrasonic image of the tissue. Thus, as a visual, convenient, and noninvasive examination apparatus, the ultrasonic device is increasingly used in the clinic.
To make it easier for the doctor to control the ultrasonic equipment and to reduce the amount of manual operation while scanning a patient, some current ultrasonic equipment provides a voice recognition control function: when the ultrasonic equipment recognizes a voice command issued by the doctor, it executes the function corresponding to that command. However, in the related art, if the doctor adjusts the same parameter multiple times by voice, a complete voice command must be issued each time, which actually increases the burden of voice control on the doctor and leads to a poor user experience.
In view of the above, embodiments of the present invention provide a voice control method of ultrasonic equipment and the ultrasonic equipment. Through the cooperation of long vocabulary commands and short vocabulary commands, the user can issue a short vocabulary command several times to continuously operate a given function, which reduces the burden of voice control on the user and improves the user experience.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of an ultrasonic apparatus according to an embodiment of the present invention. The ultrasonic apparatus 1000 may include an ultrasonic probe 1001, a transmitting circuit 1002, a transmit/receive selection switch 1003, a receiving circuit 1004, a beam forming circuit 1005, a processor 1006, a display 1007, and a memory 1008.
The ultrasonic probe 1001 includes a transducer (not shown) composed of a plurality of array elements arranged in an array; the elements may be arranged in a row to form a linear array, arranged in a two-dimensional matrix to form an area array, or arranged to form a convex array. The array elements transmit ultrasonic beams according to excitation electric signals, or convert received ultrasonic beams into electric signals; each element can therefore convert between electric pulse signals and ultrasonic beams, transmitting ultrasonic waves toward a target region of human tissue (e.g., the target heart in this embodiment) and receiving the echoes of the ultrasonic waves reflected back by the tissue. During ultrasonic detection, the transmit/receive selection switch 1003 can be used to control which array elements transmit ultrasonic beams and which receive them, or to control the array elements to transmit ultrasonic beams and receive echoes in a time-shared manner. The array elements participating in transmission can be excited by electric signals simultaneously so as to transmit ultrasonic waves at the same time, or they can be excited by several electric signals separated by certain time intervals so as to transmit ultrasonic waves continuously at those intervals.
The transmitting circuit 1002 is configured to generate a transmit sequence under the control of the processor 1006. The transmit sequence controls some or all of the array elements to transmit ultrasonic waves toward biological tissue, and its parameters include the positions and number of the transmitting array elements and the ultrasonic beam transmission parameters (such as amplitude, frequency, number of transmissions, transmission interval, transmission angle, waveform, and focusing position). In some cases, the transmitting circuit 1002 also delays the phases of the transmitted beams so that different transmitting array elements emit ultrasonic waves at different times and each transmitted ultrasonic beam can be focused on a predetermined region of interest. The transmit sequence parameters may differ between operating modes such as B-image mode, C-image mode, and D-image mode (Doppler mode); after the echo signals are received by the receiving circuit 1004 and processed by subsequent modules and corresponding algorithms, a B-image reflecting the tissue anatomy, a C-image reflecting both anatomy and blood flow information, and a D-image reflecting the Doppler spectrum can be generated.
The reception circuit 1004 is configured to receive an electric signal of an ultrasonic echo from the ultrasonic probe 1001 and process the electric signal of the ultrasonic echo. The receive circuitry 1004 may include one or more amplifiers, analog-to-digital converters (ADCs), and the like. The amplifier is used for amplifying the received electric signal of the ultrasonic echo after proper gain compensation, and the analog-to-digital converter is used for sampling the analog echo signal according to a preset time interval so as to convert the analog echo signal into a digitized signal, and the digitized echo signal still maintains amplitude information, frequency information and phase information. The data output from the reception circuit 1004 may be output to the beam forming circuit 1005 for processing, or may be output to the memory 1008 for storage.
The beam forming circuit 1005 is in signal connection with the receiving circuit 1004 and performs beam forming processing, such as delay and weighted summation, on the signals output by the receiving circuit 1004. Because the distances from an ultrasonic receiving point in the examined tissue to the different receiving array elements differ, the channel data of the same receiving point output by different receiving array elements have delay differences; delay processing is therefore applied to align the phases, and the different channel data of the same receiving point are weighted and summed to obtain beamformed ultrasonic image data. The ultrasonic image data output by the beam forming circuit 1005 is also referred to as radio frequency data (RF data). The beam forming circuit 1005 outputs the radio frequency data to the IQ demodulation circuit. In some embodiments, the beam forming circuit 1005 may also output the RF data to the memory 1008 for buffering or storage, or output it directly to the image processing module of the processor 1006 for image processing.
The beam forming circuit 1005 may perform the above functions in hardware, firmware, or software; for example, it may comprise a central processing unit (CPU), one or more microprocessors, or any other electronic component capable of processing input data according to specific logic instructions. When implemented in software, the beam forming circuit 1005 may execute instructions stored on a tangible, non-transitory computer-readable medium (e.g., the memory 1008) to perform the beam forming calculations using any suitable beam forming method.
The processor 1006 may be a central processing unit (CPU), one or more microprocessors, a graphics processing unit (GPU), or any other electronic component capable of processing input data according to specific logic instructions. It may control peripheral electronic components, read and/or save data in the memory 1008 according to input or predetermined instructions, and process input data by executing programs in the memory 1008, for example by performing one or more processing operations on the acquired ultrasound data according to one or more operating modes, including but not limited to adjusting or defining the form of the ultrasound emitted by the ultrasonic probe 1001, generating image frames for display on the display 1007 of the human-machine interaction device, adjusting or defining the content and form displayed on the display 1007, or adjusting one or more image display settings (e.g., ultrasound images, interface components, regions of interest) shown on the display 1007.
The image processing module of the processor 1006 processes the data output by the beam forming circuit 1005 or by the IQ demodulation circuit to generate a gray-scale image of signal intensity variation within the scanning range, which reflects the anatomical structure inside the tissue and is called a B-image. The image processing module may output the B-image to the display 1007 of the human-machine interaction device for display.
The human-machine interaction device is used for human-machine interaction, i.e. receiving user input and outputting visual information. User input can be received via a keyboard, operation buttons, a mouse, a trackball, and the like, or via a touch screen integrated with the display; visual information is output on the display 1007.
The memory 1008 may be a tangible and non-transitory computer readable medium, such as a flash memory card, a solid state memory, a hard disk, etc., for storing data or programs, for example, the memory 1008 may be used to store acquired ultrasound data or image frames generated by the processor 1006 that are not immediately displayed, or the memory 1008 may store graphical user interfaces, one or more default image display settings, programming instructions for the processor, beam forming circuitry, or IQ demodulation circuitry.
It should be noted that the structure of fig. 1 is only illustrative, and may include more or fewer components than those shown in fig. 1, or have a different configuration than that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware and/or software.
Based on the ultrasonic apparatus shown in fig. 1, the voice control method of the ultrasonic apparatus as shown in fig. 2 may specifically include, but is not limited to, the following steps S100, S200 and S300.
Step S100, receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a control object and a first adjustment mode corresponding to the control object;
step S200, executing a first adjustment operation on the control object according to the first voice command;
step S300, in a preset time period after the first adjustment operation is executed, when a second voice command sent by the user is received, executing a second adjustment operation on the control object according to the second voice command, wherein the second voice command comprises a second adjustment mode corresponding to the control object, and the length of the second voice command is shorter than that of the first voice command.
The ultrasonic device can realize different functions depending on its current usage scenario. If the ultrasonic device receives a first voice command issued by the user and the first voice command is applicable to the device's current scenario, the device executes the corresponding function according to the first voice command. It can be understood that the first voice command is a conventional function control command of the ultrasonic device: it contains a control object and a control mode corresponding to that object (i.e., the first adjustment mode in this embodiment). The voice recognition engine identifies the control object and control mode in the first voice command, and the first adjustment operation is then executed; this operation corresponds to a certain functional scenario of the ultrasonic device (for example, if the first adjustment operation is increasing the gain of the ultrasonic probe in B-image mode, the functional scenario is gain adjustment of the ultrasonic probe in B-image mode). After the ultrasonic device performs the first adjustment operation according to the first voice command, a preset duration corresponding to the first adjustment operation is started according to step S300. If, within this preset duration, the device recognizes that the user has issued a second voice command that also adjusts the same control object, the ultrasonic device performs the second adjustment operation on that control object according to the second voice command. It should be noted that the second voice command must contain at least a second adjustment mode, but its control object may be defaulted; in that case the ultrasonic device determines that the second voice command, having been issued within the preset duration of the first voice command, applies to the control object of the first voice command, and executes the second adjustment mode on that same control object based on the corresponding functional scenario (for example, if the functional scenario of the first adjustment operation is gain adjustment of the ultrasonic probe in B-image mode, the second adjustment operation also adjusts the gain of the ultrasonic probe in B-image mode).
It is noted that the second voice command is shorter than the first voice command, so that the length of the voice commands the user has to speak is reduced during the follow-up voice control after the first voice command. For example, if the first voice command is "increase the probe gain", the second voice commands may be short words such as "add", "subtract", or "raise", and each second voice command performs one adjustment of the probe gain; by speaking the short second voice command several times, the user can adjust the probe gain continuously. The length comparison between the first voice command and the second voice command can be understood in two ways. In terms of linguistic length, taking Chinese speech as an example (other languages are similar), the number of characters in the second voice command is smaller than that in the first voice command, so the sentence of the second voice command is shorter than that of the first. In terms of executable-instruction length, both voice commands are ultimately converted into executable instructions that the processor can recognize and execute, and the lengths of these executable instructions differ; taking binary instructions as an example, since the first voice command necessarily contains both a control object and a control mode, its corresponding executable instruction contains more code and is therefore longer than the executable instruction corresponding to the second voice command.
In the above manner, the first voice command corresponds to a long vocabulary command and the second voice command corresponds to a short vocabulary command. After the user determines the functional scenario with a long vocabulary command, the same functional scenario can be controlled with short vocabulary commands within the preset duration. When the user issues several short vocabulary commands in succession, the ultrasonic device analyzes the context each time a short vocabulary command is received and determines the current functional scenario; if it is still the same functional scenario, the user can keep controlling it. If a second voice command with a defaulted control object arrives after the preset duration following the first adjustment operation has elapsed, the ultrasonic device will not act on it.
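As a rough illustration of this context mechanism, the following Python sketch keeps the control object of the last successfully executed command alive for a preset duration and lets a short command that omits its control object inherit it. All names (VoiceContext, handle_command, device.adjust) are hypothetical and not taken from the patent, and whether the window restarts after every adjustment is an assumption of this sketch.

import time

class VoiceContext:
    """Remembers the control object of the most recent long vocabulary command
    for a limited time window (the preset duration)."""

    def __init__(self, window_seconds=5.0):
        self.window_seconds = window_seconds  # preset duration; assumed default
        self.control_object = None            # e.g. "probe_gain" or "volume"
        self.expires_at = 0.0

    def open(self, control_object, window_seconds=None):
        # Called after an adjustment operation has been executed successfully.
        if window_seconds is not None:
            self.window_seconds = window_seconds
        self.control_object = control_object
        self.expires_at = time.monotonic() + self.window_seconds

    def resolve(self, parsed_command):
        # A short command may default its control object; fall back to the
        # remembered object only while the window is still open.
        if parsed_command.get("object"):
            return parsed_command["object"]
        if self.control_object and time.monotonic() < self.expires_at:
            return self.control_object
        return None  # window expired: a default-object short command is ignored


def handle_command(parsed_command, context, device):
    target = context.resolve(parsed_command)
    if target is None:
        return  # no usable control object, do nothing
    device.adjust(target, parsed_command["mode"])  # first or second adjustment operation
    context.open(target)                           # (re)start the time window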
In some cases, the second voice command contains a control object in addition to the second adjustment mode. If the control object of the second voice command is the same as that of the first voice command, control continues in the same functional scenario. If the voice recognition engine determines that the control object of the second voice command differs from that of the first voice command, control is performed according to the functional scenario corresponding to the second voice command, and after the second adjustment operation is executed, the preset duration is recalculated, this time corresponding to the second adjustment operation.
It may be appreciated that the preset duration is not fixed and may be set according to different functional scenarios. Specifically, referring to fig. 3, after the first adjustment operation is performed, the process of determining the preset duration may include the following steps:
step S301, determining a target function scene where the ultrasonic equipment is located according to target parameters;
step S302, determining a preset duration corresponding to the target function scene.
In some functional scenarios, the user needs a longer time to carefully adjust the control object. For example, when adjusting the gain of the ultrasonic probe, it may be necessary to increase and decrease the gain back and forth several times before deciding which gain value to adopt; for such scenarios a relatively long preset duration may be set. For simple, direct functional scenarios, such as volume adjustment, a relatively short preset duration may be set.
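A per-scenario lookup could be sketched as below; the scenario names and duration values are invented for the example and are not values from the patent.

# Hypothetical mapping from functional scenario to preset duration (seconds).
PRESET_DURATIONS = {
    "probe_gain_b_mode": 15.0,  # fine tuning may need several back-and-forth steps
    "volume": 5.0,              # simple, direct adjustment
}
DEFAULT_DURATION = 8.0          # fallback for scenarios without a dedicated value

def preset_duration_for(scene):
    # Step S302: look up the preset duration for the target functional scenario.
    return PRESET_DURATIONS.get(scene, DEFAULT_DURATION)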
When a target functional scenario is determined through a voice command, the ultrasonic device sets it as the current functional scenario. If, in the current functional scenario, the ultrasonic device then receives a first voice command whose functional scenario differs from the current one, it switches the current functional scenario to the one corresponding to that first voice command; if the two scenarios are the same, no switching is needed, but the preset duration may be recalculated.
As for the control object, the embodiment of the present invention may take a parameter of the ultrasonic device as the control object, or may take a non-parameter item as the control object; both cases are described below by way of example.
For the case where the control object is a parameter in the ultrasound apparatus, the above steps S100 to S300 may be performed as follows, referring to fig. 4:
step S400, receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
step S500, executing a first adjustment operation on the target parameters according to the first voice command;
in step S600, in a preset time period after the first adjustment operation is performed, when a second voice command sent by the user is received, a second adjustment operation on the target parameter is performed according to the second voice command, where the second voice command includes a second adjustment mode corresponding to the target parameter, and a length of the second voice command is shorter than a length of the first voice command.
The target parameter is the parameter corresponding to the current functional scenario and serves as the control object. For example, if the first voice command is "volume up", the target parameter is the volume and the functional scenario is volume adjustment; after the ultrasonic device increases the volume according to the first voice command, the user issues the second voice command "add" within the preset duration, and the ultrasonic device increases the volume again according to the current functional scenario. Of course, within the preset duration, even though the previous command was "up", the user may instead issue "down", and the ultrasonic device will then decrease the volume according to the current functional scenario.
Similarly, the control object may be a non-parameter item of the ultrasonic device. For example, a folder holds several ultrasonic images that are viewed one by one on the display screen of the ultrasonic device: the user opens the ultrasonic images with a first voice command, and subsequent second voice commands such as "last", "next", "before", or "after" then switch the display among the images. As another example, if the first voice command is "open probe status window", the subsequent second voice command may simply be "close" rather than "close probe status window".
The following describes the voice control method according to the embodiment of the present invention in detail by taking the control object as the target parameter as an example.
Referring to fig. 5, before the first adjustment operation of the target parameter is performed in the above-described step S500, the following steps are further performed:
step S501, a first voice command is matched in a voice command library of ultrasonic equipment, and when the matching is successful, first text information corresponding to the first voice command is determined according to a matching result;
step S502, analyzing the first text information, and determining that the control object and the control mode of the first voice command are respectively a target parameter and a first adjustment mode.
A voice recognition engine is provided in the ultrasonic device for recognizing voice commands issued by the user. The voice recognition engine is implemented with voice recognition technology, also known as automatic speech recognition (Automatic Speech Recognition, ASR), whose aim is to convert the lexical content of human speech into computer-readable input such as key presses, binary codes, or character sequences. In general, a voice recognition engine analyzes a person's speech to obtain the corresponding text information, then analyzes the text information to obtain the command content, and thereby controls the device to execute the corresponding operation. In the embodiment of the invention, the first voice command is input to the voice recognition engine of the ultrasonic device, which is provided with a voice command library; whether the first voice command is a valid command is judged by matching the models in the voice command library against the first voice command, and if the matching succeeds, the first text information corresponding to the first voice command is obtained from the matched content. Because the first voice command in the embodiment of the invention is a complete, independently effective command, the recognized first text information indicates both the control object and the control mode, namely the target parameter and the first adjustment mode for that target parameter.
Likewise, for the second voice command, the matching is also performed by the voice recognition engine of the ultrasound device. Specifically, referring to fig. 6, before the second adjustment operation of the target parameter is performed in the above-described step S600, the following steps are also performed:
step S601, a second voice command is matched in a voice command library of the ultrasonic equipment, and when the matching is successful, second text information corresponding to the second voice command is determined according to a matching result;
in step S602, the second text information is analyzed, and when the control object and the control mode of the second voice command are the default and the second adjustment mode, respectively, the target parameter is set as the control object of the second voice command.
When the voice recognition engine of the ultrasonic device analyzes the second voice command and obtains the second text information, the components of the second text information must be examined. If the second text information already contains a control object and a control mode, the second voice command can be processed in the same way as the first voice command. If the second text information contains a control mode but no control object (i.e., the control object is defaulted), the ultrasonic device, based on the context, regards the target parameter of the successfully executed first voice command as the control object of the second voice command, and therefore executes the second adjustment operation on the target parameter according to the second adjustment mode in the second voice command.
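A minimal sketch of this context analysis (steps S601 to S602) is shown below. The keyword tables and the function name parse_text are assumptions for illustration only; a real engine would parse the recognized text far more robustly.

# Hypothetical keyword tables; a real command library would be far richer.
OBJECT_WORDS = {"gain": "probe_gain", "volume": "volume"}
MODE_WORDS = {"add": "+", "increase": "+", "subtract": "-", "decrease": "-"}

def parse_text(text, last_target=None):
    """Parse recognized text; if the control object is defaulted, inherit the
    target parameter of the previously executed first voice command."""
    words = text.lower().split()
    obj = next((OBJECT_WORDS[w] for w in words if w in OBJECT_WORDS), None)
    mode = next((MODE_WORDS[w] for w in words if w in MODE_WORDS), None)
    if mode is None:
        return None            # no control mode: not a valid adjustment command
    if obj is None:            # control object defaulted (typical short command)
        obj = last_target      # inherit the remembered target parameter
    if obj is None:
        return None            # nothing to adjust
    return {"object": obj, "mode": mode}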
It will be appreciated that voice recognition techniques typically recognize speech based on acoustic models, which are among the most important parts of a voice recognition system; current mainstream voice recognition systems are mostly modeled with hidden Markov models (HMMs), statistical models that describe a Markov process with hidden, unknown parameters. In a hidden Markov model the states are not directly visible, but some variables affected by the states are visible. The acoustic model describes the correspondence probabilities between speech and phonemes. Phonemes are the smallest phonetic units, divided according to the natural properties of speech: from an acoustic standpoint, a phoneme is the smallest speech unit divided by sound quality; from a physiological standpoint, one articulatory action forms one phoneme. The acoustic model training in the embodiment of the invention may use any existing, mature training method; for example, the tools and workflow of the Hidden Markov Model Toolkit (HTK) can be used to train acoustic models on speech and obtain the corresponding acoustic models, which is not limited herein.
Referring to fig. 7, the step of matching a first voice command and a second voice command (hereinafter collectively referred to as voice commands) in a voice command library based on the acoustic model matching method according to the embodiment of the present invention includes:
step S1001, extracting an acoustic feature of a voice command, where the voice command is a first voice command or a second voice command;
step S1002, matching a voice command library according to acoustic features, wherein the voice command library comprises a plurality of acoustic models;
in step S1003, when the acoustic feature hits one of the acoustic models in the voice command library, it is determined that the voice command matches the voice command library successfully.
When a voice command is received, its acoustic features are extracted and matched against the voice command library, which contains a plurality of trained acoustic models corresponding to different executable commands of the ultrasonic device. If, during matching, the acoustic features hit one of the acoustic models (e.g., the confidence exceeds a certain value), the current voice command is considered to correspond to that model's executable command, and the ultrasonic device is controlled to perform the corresponding operation.
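The matching in steps S1001 to S1003 can be sketched as below. The score() method, the confidence threshold, and the dictionary-shaped command library are assumptions made for the example; a real system would use trained HMM or neural acoustic models.

def match_command(features, command_library, threshold=0.8):
    """Return the executable command whose acoustic model best matches the
    extracted features, or None if no model is hit with enough confidence."""
    best_cmd, best_score = None, 0.0
    for executable_cmd, acoustic_model in command_library.items():
        score = acoustic_model.score(features)   # assumed: confidence in [0, 1]
        if score > best_score:
            best_cmd, best_score = executable_cmd, score
    if best_score >= threshold:                  # "hit" one of the acoustic models
        return best_cmd
    return None                                  # no match: ignore the utterance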
It will be appreciated that adjusting the target parameter is typically a numerical adjustment, so the first adjustment mode in the first voice command and the second adjustment mode in the second voice command both involve increasing or decreasing a value, and the two modes may be the same or different. For example, the first adjustment mode may increase the value while the second adjustment mode increases it further or decreases it; or the first adjustment mode doubles the current value, and the second adjustment mode may take the doubled value as the current value and double it again, or take the doubled value as the current value and halve it.
The degree to which the parameter value of the target parameter is increased or decreased is determined according to the second adjustment mode. For example, in the second adjustment operation of the above step S600, if the second adjustment mode indicates only the direction of adjustment and does not specify the magnitude of adjustment (e.g., the second voice command is merely "add"), the parameter value of the current target parameter may be adjusted according to the preset value in the ultrasonic device (e.g., the preset value is 1, each adjustment of the second voice command increases or decreases the parameter value of the target parameter by 1). If the second adjustment mode indicates an adjustment direction and an adjustment amplitude (e.g., the second voice command is "add 3", "double", etc.), then the second adjustment operation is performed in accordance with the adjustment direction and the adjustment amplitude specified by the second voice command.
In some cases, the preset value may vary with the number of consecutive adjustments. For example, when the parameter value of the target parameter is continuously increased or continuously decreased, the preset value grows with the number of adjustments. As a specific example, suppose the first voice command is "gain increase", the subsequent second voice commands are all "add", and the initial preset value in the ultrasonic device is 1: the first N executions of "add" each increase the value by 1, and from the (N+1)-th "add" onward the preset value grows by 1 each time, i.e., the value increases by 2, then 3, then 4, and so on (so the preset values form a sequence such as 1, 2, 3, 4, 5, ...). In this example the preset value grows linearly (by 1 each time), but it can be set according to the user's needs, and the variation may also be non-linear (e.g., the preset values may form the sequence 1, 2, 4, 7, 11, ...).
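The growing preset value can be sketched as a function of the count of consecutive same-direction short commands; the parameters below (n_initial and the +1 growth) simply reproduce the linear example above and are assumptions, not requirements of the method.

def step_size(consecutive_count, n_initial=3, base_step=1):
    """Preset value for the k-th consecutive 'add' (or 'subtract') command:
    base_step for the first n_initial commands, then growing by 1 each time."""
    if consecutive_count <= n_initial:
        return base_step
    return base_step + (consecutive_count - n_initial)

# Example: with n_initial=3, six consecutive "add" commands use steps 1, 1, 1, 2, 3, 4.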
Through the above steps, the user can use long vocabulary commands and short vocabulary commands in combination, which reduces the amount of speech required for voice control during operation of the ultrasonic device and improves the efficiency of voice control.
Besides being triggered by a first voice command, the functional scenario may also be triggered in other ways, after which continuous control is performed with short voice commands. Referring to fig. 8, another embodiment of the present invention provides a voice control method of an ultrasonic apparatus including, but not limited to, the following steps S700, S800 and S900:
step S700, receiving a control instruction input by a user through a touch input device or a gesture recognition device, wherein the control instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
step S800, executing a first adjustment operation on the target parameter according to the control instruction;
in step S900, in a preset time period after the first adjustment operation is performed, when a voice command sent by the user is received, a second adjustment operation on the target parameter is performed according to the voice command, where the voice command includes a second adjustment mode corresponding to the target parameter and the length of the voice command is shorter than that of the control command.
Compared with the first voice command in the foregoing embodiment, in this embodiment a control instruction is input with a touch input device or a gesture recognition device. For example, the user enters a certain functional scenario through the key panel provided with the ultrasonic device, which is equivalent to the user sending a control instruction (or a group of control instructions) to the ultrasonic device through the key panel. As another example, the user performs a gesture that is captured by a gesture recognition device, which may be built into the ultrasonic device or be a separate device with a data connection to it; the gesture is translated into a control instruction (or a group of control instructions) for the ultrasonic device to execute. Whether the input is by touch or by gesture, the final control instruction is converted into an executable instruction that the processor can execute, so the length comparison between the control instruction and the voice command can be made by comparing the lengths of the executable instructions obtained after conversion.
After the target functional scenario has been determined by the control instruction, the user can issue short voice commands for continuous control within the preset duration. For example, after adjusting the gain of the ultrasonic probe in B-image mode via the key panel of the ultrasonic device, the user can directly say a short vocabulary command such as "add", and upon recognizing it the ultrasonic device automatically increases the gain of the ultrasonic probe in B-image mode, thereby reducing the amount of speech the user needs to control the ultrasonic device by voice.
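A hedged usage sketch of this variant, reusing the hypothetical VoiceContext and handle_command from the earlier sketch: the control instruction from the key panel (or gesture recognizer) opens the same time window that a subsequent short voice command can exploit. The event shape and function name are assumptions.

def on_panel_event(event, context, device):
    # event is assumed to look like {"object": "probe_gain_b_mode", "mode": "+"},
    # produced by the key panel or the gesture recognition device.
    device.adjust(event["object"], event["mode"])   # first adjustment operation
    context.open(event["object"])                   # start the preset duration
    # A short voice command such as {"mode": "+"} received within the window is
    # then handled exactly as in handle_command above.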
The embodiment of the invention also provides ultrasonic equipment, which comprises:
an ultrasonic probe;
the transmitting/receiving circuit is used for controlling the ultrasonic probe to transmit ultrasonic waves to the ultrasonic detection object and receive ultrasonic echoes to obtain ultrasonic echo signals;
the processor is used for processing the ultrasonic echo signals and obtaining an ultrasonic image of the ultrasonic detection object;
a display for displaying the ultrasound image and/or a measurement result based on the ultrasound image;
the processor is also used for executing the voice control method of the ultrasonic equipment.
By controlling the ultrasonic device with the above voice control method, the user can conveniently combine long vocabulary commands and short vocabulary commands: compared with repeating a long vocabulary command many times as in the existing approach, the user only needs to say the long vocabulary command once and, after it is successfully executed, adjust the same function with short vocabulary commands, which effectively reduces the amount of speech required of the user and improves the control efficiency of the ultrasonic device.
The voice control method of the ultrasonic apparatus of the present invention is described below by way of a practical example.
The ultrasonic device receives a long vocabulary command issued by the user, in which the control object and control mode are the target parameter and the first adjustment mode respectively. After the first adjustment operation corresponding to this first voice command is executed, a preset duration is started; if within that duration the ultrasonic device recognizes a short vocabulary command issued by the user that contains only a second adjustment mode, it executes the second adjustment operation on the target parameter of the long vocabulary command, and so on.
In the above mode, the ultrasonic device remembers the functional scenario of the long vocabulary command; when a short vocabulary command is received, it performs context analysis based on the content of the short vocabulary command and executes the short vocabulary command according to the analysis result.
The embodiment of the invention also provides an ultrasonic device, which comprises at least one processor and a memory for communication connection with the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the voice control method of the ultrasound device described previously.
Referring to fig. 9, the control processor 2001 and the memory 2002 in the ultrasonic device 2000 may, for example, be connected by a bus. The memory 2002 is a non-transitory computer-readable storage medium that can be used to store non-transitory software programs and non-transitory computer-executable programs. In addition, the memory 2002 may include high-speed random access memory and non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 2002 optionally includes memory located remotely from the control processor 2001, which may be connected to the ultrasonic device 2000 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It will be appreciated by those skilled in the art that the apparatus structure shown in fig. 9 is not limiting of the ultrasound device 2000 and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions that are executed by one or more control processors, for example, by one control processor 2001 in fig. 9, which may cause the one or more control processors to perform the voice control method in the above-described method embodiment, for example, to perform the method steps S100 to S300 in fig. 2, the method steps S301 to S302 in fig. 3, the method steps S400 to S600 in fig. 4, the method steps S501 to S502 in fig. 5, the method steps S601 to S602 in fig. 6, the method steps S1001 to S1003 in fig. 7, and the method steps S700 to S900 in fig. 8 described above.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should also be appreciated that the various embodiments provided in the embodiments of the present application may be arbitrarily combined to achieve different technical effects.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit and scope of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (12)

1. A voice control method of an ultrasonic apparatus, comprising:
receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
executing a first adjustment operation on the target parameter according to the first voice instruction;
and in a preset time period after the first adjustment operation is executed, when a second voice instruction sent by a user is received, executing the second adjustment operation on the target parameter according to the second voice instruction, wherein the second voice instruction comprises a second adjustment mode corresponding to the target parameter, and the length of the second voice instruction is shorter than that of the first voice instruction.
2. The voice control method of an ultrasonic apparatus according to claim 1, wherein before the first adjustment operation on the target parameter is executed according to the first voice instruction, the voice control method further comprises:
matching the first voice instruction against a voice command library of the ultrasonic equipment, and, when the matching is successful, determining first text information corresponding to the first voice instruction according to the matching result;
and analyzing the first text information to determine that the control object and the control mode of the first voice instruction are the target parameter and the first adjustment mode, respectively.
3. The voice control method of an ultrasonic apparatus according to claim 1, wherein before the second adjustment operation on the target parameter is executed according to the second voice instruction, the voice control method further comprises:
matching the second voice instruction against a voice command library of the ultrasonic equipment, and, when the matching is successful, determining second text information corresponding to the second voice instruction according to the matching result;
and analyzing the second text information, and when the control object and the control mode of the second voice instruction are a default object and the second adjustment mode, respectively, setting the target parameter as the control object of the second voice instruction.
4. The voice control method of an ultrasonic apparatus according to claim 2, wherein the process of matching the first voice instruction and the second voice instruction against the voice command library comprises:
extracting acoustic features of a voice instruction, wherein the voice instruction is the first voice instruction or the second voice instruction;
matching the acoustic features against the voice command library, wherein the voice command library comprises a plurality of acoustic models;
and when the acoustic features hit one of the acoustic models in the voice command library, determining that the voice instruction is successfully matched with the voice command library.
5. The voice control method of an ultrasonic apparatus according to claim 1, wherein the first adjustment mode and the second adjustment mode are each an increasing or decreasing adjustment, and the first adjustment mode and the second adjustment mode are the same as or different from each other.
6. The voice control method of an ultrasonic apparatus according to claim 1 or 5, wherein executing the second adjustment operation on the target parameter according to the second voice instruction comprises:
when the second adjustment mode is increasing or decreasing, increasing or decreasing the parameter value of the target parameter by a preset value;
or,
when the second adjustment mode is increasing or decreasing by a specified numerical value, increasing or decreasing the parameter value of the target parameter by the specified numerical value.
7. The voice control method of an ultrasonic apparatus according to claim 6, wherein increasing or decreasing the parameter value of the target parameter by the preset value comprises:
during an adjustment process in which the parameter value of the target parameter is continuously increased or continuously decreased, gradually increasing the preset value with the number of adjustments.
8. The voice control method of an ultrasonic apparatus according to claim 1, further comprising, after the first adjustment operation is executed:
determining a target functional scene in which the ultrasonic equipment is located according to the target parameter;
and determining the preset time period corresponding to the target functional scene.
9. The voice control method of an ultrasonic apparatus according to claim 8, wherein determining the target functional scene in which the ultrasonic equipment is located according to the target parameter comprises:
when the target functional scene is different from the functional scene of the ultrasonic equipment before the first adjustment operation is executed, switching the functional scene of the ultrasonic equipment to the target functional scene.
10. A voice control method of an ultrasonic apparatus, comprising:
receiving a control instruction input by a user through a contact-type input device or a gesture recognition device, wherein the control instruction comprises a target parameter and a first adjustment mode corresponding to the target parameter;
executing a first adjustment operation on the target parameter according to the control instruction;
and when a voice instruction sent by the user is received within a preset time period after the first adjustment operation is executed, executing a second adjustment operation on the target parameter according to the voice instruction, wherein the voice instruction comprises a second adjustment mode corresponding to the target parameter, and the length of the voice instruction is shorter than that of the control instruction.
11. A voice control method of an ultrasonic apparatus, comprising:
receiving a first voice instruction sent by a user, wherein the first voice instruction comprises a control object and a first adjustment mode corresponding to the control object;
executing a first adjustment operation on the control object according to the first voice instruction;
and when a second voice instruction sent by the user is received within a preset time period after the first adjustment operation is executed, executing a second adjustment operation on the control object according to the second voice instruction, wherein the second voice instruction comprises a second adjustment mode corresponding to the control object, and the length of the second voice instruction is shorter than that of the first voice instruction.
12. An ultrasound device, comprising:
an ultrasonic probe;
a transmitting/receiving circuit configured to control the ultrasonic probe to transmit ultrasonic waves to an ultrasonic detection object and to receive ultrasonic echoes to obtain ultrasonic echo signals;
a processor configured to process the ultrasonic echo signals to obtain an ultrasonic image of the ultrasonic detection object; and
a display configured to display the ultrasonic image and/or a measurement result obtained based on the ultrasonic image;
wherein the processor is further configured to perform the voice control method of an ultrasonic apparatus according to any one of claims 1 to 11.
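For readers who want a concrete picture of the mechanism recited in claims 1, 10 and 11, the following is a minimal Python sketch, not an implementation taken from the application: after a long command names both a target parameter and an adjustment mode, later short commands that carry only an adjustment mode are applied to the remembered parameter, provided they arrive within the preset time period. The class and method names (VoiceController, handle_first_command, handle_short_command), the 10-second window and the choice to restart the window after each follow-up are all illustrative assumptions.

```python
import time

# Hypothetical sketch of the long-command / short-command mechanism of
# claims 1, 10 and 11; names and the 10 s window are illustrative only.
class VoiceController:
    def __init__(self, apply_adjustment, window_seconds=10.0):
        self.apply_adjustment = apply_adjustment  # callback: (parameter, mode) -> None
        self.window_seconds = window_seconds      # "preset time period"
        self.last_parameter = None                # target parameter of the last long command
        self.last_time = 0.0                      # when the last adjustment was executed

    def handle_first_command(self, parameter, mode):
        """Long command: names both the target parameter and the adjustment mode."""
        self.apply_adjustment(parameter, mode)    # first adjustment operation
        self.last_parameter = parameter
        self.last_time = time.monotonic()

    def handle_short_command(self, mode):
        """Short command: carries only an adjustment mode, no control object."""
        if self.last_parameter is None:
            return False                          # nothing to follow up on
        if time.monotonic() - self.last_time > self.window_seconds:
            self.last_parameter = None            # window expired; ignore the short command
            return False
        self.apply_adjustment(self.last_parameter, mode)  # second adjustment operation
        # Restarting the window after each follow-up is one possible reading
        # of "a plurality of short commands within a preset time length".
        self.last_time = time.monotonic()
        return True


# Example: "increase gain" followed, within the window, by the short command "increase".
if __name__ == "__main__":
    controller = VoiceController(lambda p, m: print(f"adjust {p}: {m}"))
    controller.handle_first_command("gain", "increase")
    controller.handle_short_command("increase")
```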
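Claims 2 to 4 describe matching a spoken instruction against a voice command library built from acoustic models and then parsing the recognized text into a control object and a control mode. The sketch below only illustrates the shape of that flow under heavy simplifying assumptions: the acoustic models are plain reference feature vectors compared by Euclidean distance, the matching threshold is invented, and the text parsing is keyword based; the names match_command and parse_text are hypothetical.

```python
import math

# Hypothetical stand-in for a voice command library: each entry maps an
# acoustic "model" (here just a reference feature vector) to command text.
COMMAND_LIBRARY = {
    "increase gain": [0.9, 0.1, 0.3],
    "decrease depth": [0.2, 0.8, 0.5],
    "increase":      [0.7, 0.2, 0.1],   # short follow-up command
}

MATCH_THRESHOLD = 0.5  # illustrative distance threshold, not from the patent


def match_command(features):
    """Return the command text whose model the features 'hit', or None (claim 4)."""
    best_text, best_dist = None, float("inf")
    for text, model in COMMAND_LIBRARY.items():
        dist = math.dist(features, model)
        if dist < best_dist:
            best_text, best_dist = text, dist
    return best_text if best_dist <= MATCH_THRESHOLD else None


def parse_text(text):
    """Split recognized text into (control object, control mode); the object
    may be None for a short command with a default object (claims 2 and 3)."""
    words = text.split()
    mode = words[0] if words[0] in ("increase", "decrease") else None
    target = words[1] if len(words) > 1 else None
    return target, mode


# Example: a feature vector close to the "increase gain" model.
text = match_command([0.85, 0.15, 0.25])
if text is not None:
    print(parse_text(text))   # ('gain', 'increase')
```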
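Claims 5 to 7 distinguish adjusting by a preset value from adjusting by a numerical value spoken in the command, and claim 7 lets the preset value grow as the same adjustment is repeated. A minimal sketch follows, assuming a linear growth rule and hypothetical names (next_step, apply_adjustment); the claims do not specify the actual growth schedule or step sizes.

```python
def next_step(base_step, repeat_count, growth=1.0):
    """Preset step that grows with the number of consecutive same-direction
    adjustments (claim 7); the linear rule is an illustrative assumption."""
    return base_step + growth * repeat_count


def apply_adjustment(value, mode, repeat_count=0, base_step=2.0, explicit=None):
    """Increase/decrease by an explicit spoken value if given, otherwise by the
    (possibly grown) preset value (claims 5 and 6)."""
    step = explicit if explicit is not None else next_step(base_step, repeat_count)
    return value + step if mode == "increase" else value - step


# "increase" three times in a row: the preset step grows 2.0 -> 3.0 -> 4.0.
gain = 50.0
for n in range(3):
    gain = apply_adjustment(gain, "increase", repeat_count=n)
print(gain)                                              # 59.0
print(apply_adjustment(gain, "decrease", explicit=5))    # "decrease by 5" -> 54.0
```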
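Claims 8 and 9 tie the preset time period to the functional scene that the adjusted parameter belongs to, and switch the device to that scene when it differs from the current one. The mapping tables and names below (PARAMETER_SCENE, SCENE_WINDOW, update_scene) and the concrete durations are hypothetical placeholders used only to show the shape of that logic.

```python
# Hypothetical mappings: which functional scene a parameter belongs to, and
# how long the short-command window stays open in each scene (in seconds).
PARAMETER_SCENE = {
    "gain": "b_mode_imaging",
    "depth": "b_mode_imaging",
    "sample_volume": "doppler_measurement",
}
SCENE_WINDOW = {
    "b_mode_imaging": 8.0,
    "doppler_measurement": 15.0,
}


def update_scene(current_scene, target_parameter):
    """Determine the target functional scene from the adjusted parameter,
    switch to it if it differs from the current scene (claim 9), and return
    the new scene together with its preset time period (claim 8)."""
    target_scene = PARAMETER_SCENE.get(target_parameter, current_scene)
    if target_scene != current_scene:
        current_scene = target_scene        # scene switch
    return current_scene, SCENE_WINDOW.get(current_scene, 10.0)


scene, window = update_scene("b_mode_imaging", "sample_volume")
print(scene, window)   # doppler_measurement 15.0
```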
CN202111219574.4A 2021-10-20 2021-10-20 Voice control method of ultrasonic equipment and ultrasonic equipment Pending CN115995230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111219574.4A CN115995230A (en) 2021-10-20 2021-10-20 Voice control method of ultrasonic equipment and ultrasonic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111219574.4A CN115995230A (en) 2021-10-20 2021-10-20 Voice control method of ultrasonic equipment and ultrasonic equipment

Publications (1)

Publication Number Publication Date
CN115995230A (en) 2023-04-21

Family

ID=85990790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111219574.4A Pending CN115995230A (en) 2021-10-20 2021-10-20 Voice control method of ultrasonic equipment and ultrasonic equipment

Country Status (1)

Country Link
CN (1) CN115995230A (en)

Similar Documents

Publication Publication Date Title
EP0883877B1 (en) Methods and apparatus for non-acoustic speech characterization and recognition
US7672849B2 (en) Systems and methods for voice control of a medical imaging device
US20200335128A1 (en) Identifying input for speech recognition engine
EP2995259A1 (en) Ultrasound optimization method and ultrasonic medical device therefor
US11432806B2 (en) Information processing apparatus, information processing method, and storage medium
EP3673813A1 (en) Ultrasound diagnosis apparatus and method of operating the same
KR20170060853A (en) Medical imaging apparatus and operating method for the same
KR20160036280A (en) Ultrasound imaging apparatus and method using synthetic aperture focusing
US20200395111A1 (en) Method for generating medical reports and an imaging system carrying out said method
CN115995230A (en) Voice control method of ultrasonic equipment and ultrasonic equipment
CN112168210B (en) Medical image processing terminal, ultrasonic diagnostic apparatus, and fetal image processing method
KR20160098010A (en) Ultrasound diagnosis apparatus, ultrasound probe and controlling method of the same
CN117159031A (en) Ultrasound device and system, beam forming method, electronic equipment and storage medium
US11678866B2 (en) Touchless input ultrasound control
US10265052B2 (en) Method of displaying ultrasound image and ultrasound diagnosis apparatus
EP3517043B1 (en) Ultrasonic imaging device and ultrasonic image display method
Freitas et al. Multimodal silent speech interface based on video, depth, surface electromyography and ultrasonic doppler: Data collection and first recognition results
CN115990028A (en) Voice command switching method of ultrasonic equipment and ultrasonic equipment
KR20010080655A (en) Ultrasonic diagnostic imaging system with voice communication
CN114376614B (en) Auxiliary method for carotid artery ultrasonic measurement and ultrasonic equipment
KR102389866B1 (en) Method for Generating a Ultrasound Image and Image Processing Apparatus
Shariff et al. Silent Speech Interface using Continuous-Wave Radar and Optimized AlexNet
US12016727B2 (en) Touchless input ultrasound control
Lee et al. IR-UWB Radar-Based Contactless Silent Speech Recognition of Vowels, Consonants, Words, and Phrases
CN115607188A (en) Spectral Doppler measurement method of heart and ultrasonic imaging equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination