CN115990028A - Voice command switching method of ultrasonic equipment and ultrasonic equipment - Google Patents

Voice command switching method of ultrasonic equipment and ultrasonic equipment Download PDF

Info

Publication number
CN115990028A
CN115990028A (application number CN202111219582.9A)
Authority
CN
China
Prior art keywords
voice command
scene
ultrasonic
target
operation scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111219582.9A
Other languages
Chinese (zh)
Inventor
雷涛
李海瑞
周述文
刘智光
王武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mindray Bio Medical Electronics Co Ltd
Original Assignee
Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio Medical Electronics Co Ltd filed Critical Shenzhen Mindray Bio Medical Electronics Co Ltd
Priority to CN202111219582.9A priority Critical patent/CN115990028A/en
Publication of CN115990028A publication Critical patent/CN115990028A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a voice command switching method for an ultrasonic device, and an ultrasonic device. The ultrasonic device includes a plurality of voice command sets, and the voice command switching method includes the following steps: when the ultrasonic device runs in a target operation scene, invoking from the plurality of voice command sets a target voice command set corresponding to the target operation scene, where the target voice command set contains only voice commands applicable to the target operation scene; receiving a voice command issued by a user and matching the voice command against the target voice command set; and, when the matching succeeds, controlling the ultrasonic device to execute the voice command in the target operation scene. When the ultrasonic device runs in a given scene, the voice command set corresponding to that scene is invoked, and this set supports recognizing only the functions of the current scene, which effectively reduces the number of command words that must be distinguished for a single voice command, improves the speech recognition rate of the ultrasonic device, and improves the user experience.

Description

Voice command switching method of ultrasonic equipment and ultrasonic equipment
Technical Field
The present invention relates to the field of ultrasonic medical technology, and in particular, to a method for switching voice commands of an ultrasonic device and an ultrasonic device.
Background
As a visual, convenient and noninvasive examination tool, ultrasonic equipment is widely used in the medical field. On current ultrasonic equipment, examination items, examination indexes and the like are set through input devices such as keys and a mouse on a control panel; during an ultrasonic examination the doctor must operate the ultrasonic probe while also setting the examination items and examination indexes, which is cumbersome.
In the related art, the ultrasonic apparatus can be controlled by voice commands, which reduces the doctor's manual operations. However, the voice engine of the ultrasonic equipment must be able to recognize all voice commands across the different operation scenes. As the functions of the ultrasonic equipment increase, the number and complexity of voice commands increase with them, the vocabulary the voice engine must recognize grows, and the speech recognition rate drops sharply.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the invention provide a voice command switching method for an ultrasonic device, and an ultrasonic device, which can improve the accuracy with which the ultrasonic device recognizes a user's voice commands and improve the user experience.
In a first aspect, an embodiment of the present invention provides a voice command switching method of an ultrasonic device, where the ultrasonic device includes a plurality of voice command sets, and the voice command switching method includes:
when the ultrasonic equipment runs in a target operation scene, a target voice command set corresponding to the target operation scene is invoked from the plurality of voice command sets, wherein the target voice command set only contains voice commands applicable to the target operation scene;
receiving a voice command sent by a user, and matching the voice command in the target voice command set;
and when the matching is successful, controlling the ultrasonic equipment to execute the voice command under the target operation scene.
In a second aspect, an embodiment of the present invention provides an ultrasonic apparatus, including:
An ultrasonic probe;
the transmitting/receiving circuit is used for controlling the ultrasonic probe to transmit ultrasonic waves to an ultrasonic detection object and receive ultrasonic echoes to obtain ultrasonic echo signals;
the processor is used for processing the ultrasonic echo signals and obtaining an ultrasonic image of the ultrasonic detection object;
the display is used for displaying the ultrasonic image and/or a measurement result obtained based on the ultrasonic image;
the processor is further configured to execute the voice command switching method of the ultrasonic device according to the first aspect.
In a third aspect, an embodiment of the present invention provides a voice command switching apparatus of an ultrasound device, including at least one processor and a memory for communicatively connecting with the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the voice command switching method of the first aspect.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the voice command switching method according to the first aspect.
The voice command switching method of the ultrasonic equipment provided by the embodiment of the invention has at least the following beneficial effects. When the ultrasonic equipment runs in a given scene, the voice command set corresponding to that scene is invoked; this set only supports recognizing the functions of the current scene and, compared with a traditional global voice command set, contains far fewer command words, which effectively reduces the number of command words that must be distinguished for a single voice command. Therefore, by invoking the command set that corresponds to the scene the equipment is running in, the recognition load of the voice engine is reduced, the speech recognition rate of the ultrasonic equipment is improved, and the user experience is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the technical solutions of the present application and are incorporated in and constitute a part of this specification. Together with the embodiments of the present application, they serve to explain the technical solutions and do not constitute a limitation of them.
FIG. 1 is a block diagram of an ultrasound device provided in one embodiment of the present invention;
FIG. 2 is a general flow chart of a voice command switching method according to one embodiment of the present invention;
FIG. 3 is a flow chart of an exit target operational scenario provided by one embodiment of the present invention;
FIG. 4 is a flow chart of an entry into a target operational scenario provided by one embodiment of the present invention;
FIG. 5 is a flow chart of a voice command matching voice command library provided by one embodiment of the present invention;
FIG. 6 is a flow chart of a context switch provided by one embodiment of the present invention;
FIG. 7 is a flow chart of a voice command set merge for a context function scenario provided by one embodiment of the present invention;
fig. 8 is a structural connection diagram of an ultrasonic apparatus according to an embodiment of the present invention.
Detailed Description
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It should be understood that in the description of the embodiments of the present application, the meaning of a plurality (or multiple) is two or more, and that greater than, less than, exceeding, etc. is understood to not include the present number, and that greater than, less than, within, etc. is understood to include the present number.
An ultrasonic scanning device (hereinafter referred to as an ultrasonic device) transmits ultrasonic pulses to tissues in the human body based on the ultrasonic pulse imaging principle; by exploiting the reflection of ultrasonic waves at tissue interfaces and receiving and processing the echoes carrying characteristic information of the tissue, it obtains a visible ultrasonic image of the human tissue. As a visual, convenient and noninvasive inspection apparatus, ultrasound equipment has therefore become increasingly used in clinical practice.
To make it easier for the doctor to control the ultrasonic equipment and to reduce the amount of manual operation during an ultrasonic scan of a patient, some current ultrasonic equipment provides a voice recognition control function: when the equipment recognizes a voice instruction issued by the doctor, it executes the function corresponding to that instruction. As the functions of ultrasonic equipment increase, equipment that adopts voice control needs a huge voice command set to correctly identify the doctor's voice commands. This easily leads to overly long matching times and a reduced recognition rate when matching a voice command within the command set, resulting in a poor user experience and, in turn, limiting the growth of the equipment's functions.
In view of the above, embodiments of the present invention provide a voice command switching method for ultrasonic equipment and an ultrasonic device: for the current operation scene of the equipment, the voice command set corresponding to that scene is invoked. Because a command set for a single operation scene is usually small, the speech recognition rate can be improved and the user's voice control experience is improved.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of an ultrasonic apparatus according to an embodiment of the present invention. The ultrasonic apparatus 1000 may include an ultrasound probe 1001, a transmitting circuit 1002, a transmit/receive selection switch 1003, a receiving circuit 1004, a beam combining circuit 1005, a processor 1006, a display 1007, and a memory 1008.
The ultrasonic probe 1001 includes a transducer (not shown in the figure) composed of a plurality of array elements arranged in an array; the elements may be arranged in a row to form a linear array, arranged in a two-dimensional matrix to form an area array, or form a convex array. The array elements are used to transmit ultrasonic beams according to excitation electric signals or to convert received ultrasonic beams into electric signals. Each array element can therefore convert between electrical pulse signals and ultrasonic beams, transmitting ultrasonic waves to a target region of human tissue (for example, the target heart in this embodiment) and receiving the echoes of ultrasonic waves reflected back through the tissue. During ultrasonic detection, the transmit/receive selection switch 1003 can control which array elements are used to transmit ultrasonic beams and which are used to receive them, or control the time slots in which array elements transmit ultrasonic beams or receive their echoes. The array elements participating in transmission can be excited by electric signals simultaneously so that ultrasonic waves are transmitted at the same time, or they can be excited by several electric signals at certain time intervals so that ultrasonic waves are transmitted continuously at those intervals.
The transmitting circuit 1002 is configured to generate a transmit sequence under the control of the processor 1006. The transmit sequence controls some or all of the array elements to transmit ultrasonic waves to the biological tissue, and its parameters include the positions of the transmitting array elements, the number of array elements, and the ultrasonic beam transmission parameters (such as amplitude, frequency, number of transmissions, transmission interval, transmission angle, waveform and focusing position). In some cases, the transmitting circuit 1002 also delays the phases of the transmitted beams, so that different transmitting array elements emit ultrasound at different times and each transmitted ultrasound beam can be focused on a predetermined region of interest. The transmit sequence parameters may differ between operating modes such as B-image mode, C-image mode and D-image mode (Doppler mode); after the echo signals are received by the receiving circuit 1004 and processed by the subsequent modules and corresponding algorithms, a B image reflecting the anatomical structure of the tissue, a C image reflecting both the anatomical structure and blood flow information, and a D image reflecting the Doppler spectrum can be generated.
The reception circuit 1004 is configured to receive an electric signal of an ultrasonic echo from the ultrasonic probe 1001 and process the electric signal of the ultrasonic echo. The receive circuitry 1004 may include one or more amplifiers, analog-to-digital converters (ADCs), and the like. The amplifier is used for amplifying the received electric signal of the ultrasonic echo after proper gain compensation, and the analog-to-digital converter is used for sampling the analog echo signal according to a preset time interval so as to convert the analog echo signal into a digitized signal, and the digitized echo signal still maintains amplitude information, frequency information and phase information. The data output from the reception circuit 1004 may be output to the beam forming circuit 1005 for processing, or may be output to the memory 1008 for storage.
The beam synthesis circuit 1005 is in signal connection with the receiving circuit 1004 and is configured to perform beam synthesis processing, such as delay and weighted summation, on the signals output by the receiving circuit 1004. Because the distances from an ultrasonic receiving point in the measured tissue to the different receiving array elements differ, the channel data of the same receiving point output by different receiving array elements carry delay differences; delay processing is therefore required to align the phases, after which the different channel data of the same receiving point are weighted and summed to obtain beamformed ultrasonic image data. The ultrasonic image data output by the beam synthesis circuit 1005 is also referred to as radio frequency data (RF data). The beam combining circuit 1005 outputs the radio frequency data to the IQ demodulation circuit. In some embodiments, the beam forming circuit 1005 may also output the RF data to the memory 1008 for buffering or storage, or output it directly to the image processing module of the processor 1006 for image processing.
The beam combining circuit 1005 may perform the above-described functions in hardware, firmware or software. For example, the beam combining circuit 1005 may comprise a central controller circuit (CPU), one or more micro-processing chips, or any other electronic component capable of processing input data according to specific logic instructions. When the beam combining circuit 1005 is implemented in software, it may execute instructions stored on a tangible and non-transitory computer-readable medium (e.g., the memory 1008) to perform beam combining calculations using any suitable beam combining method.
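To make the delay-and-sum processing described above concrete, the following minimal sketch (not part of the patent; it assumes per-channel delays in integer samples are already known and uses a simple circular shift for alignment) applies per-channel delays and a weighted sum to channel data:

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples, weights=None):
    """Minimal delay-and-sum beamforming sketch.

    channel_data   : (n_channels, n_samples) array of received echo data
    delays_samples : per-channel delay, in samples, that aligns the
                     contributions of the same receive point across channels
    weights        : optional apodization weights, one per channel
    """
    n_channels, _ = channel_data.shape
    if weights is None:
        weights = np.ones(n_channels)

    aligned = np.zeros_like(channel_data, dtype=float)
    for ch in range(n_channels):
        # Integer-sample circular shift used for simplicity; it stands in
        # for the delay processing that aligns the phases of each channel.
        aligned[ch] = np.roll(channel_data[ch], -int(delays_samples[ch]))

    # Weighted summation across channels yields one beamformed RF line.
    return np.average(aligned, axis=0, weights=weights)
```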
The processor 1006 may be a central controller circuit (CPU), one or more microprocessors, a graphics controller circuit (GPU), or any other electronic component capable of processing input data according to specific logic instructions. It may control peripheral electronic components, or read and/or save data in the memory 1008 according to input or predetermined instructions. It may also process input data by executing programs in the memory 1008, for example by performing one or more processing operations on the acquired ultrasound data according to one or more modes of operation, including but not limited to adjusting or defining the form of the ultrasound emitted by the ultrasound probe 1001, generating various image frames for display by the display 1007 of the subsequent human-machine interaction device, adjusting or defining the content and form displayed on the display 1007, or adjusting one or more image display settings (e.g., ultrasound images, interface components, positioning of regions of interest) shown on the display 1007.
The image processing module of the processor 1006 is configured to process the data output by the beam synthesis circuit 1005 or the data output by the IQ demodulation circuit to generate a gray-scale image of the signal intensity variation in the scanning range, which reflects the anatomical structure inside the tissue, which is called a B-image. The image processing module may output the B-image to the display 1007 of the human-machine interaction device for display.
The human-machine interaction device is used for human-machine interaction, that is, for receiving the user's input and outputting visual information. The user's input may be received through a keyboard, operation buttons, a mouse, a trackball and the like, or through a touch screen integrated with the display; the visual output is presented on the display 1007.
The memory 1008 may be a tangible and non-transitory computer readable medium, such as a flash memory card, a solid state memory, a hard disk, etc., for storing data or programs, for example, the memory 1008 may be used to store acquired ultrasound data or image frames generated by the processor 1006 that are not immediately displayed, or the memory 1008 may store graphical user interfaces, one or more default image display settings, programming instructions for the processor, beam forming circuitry, or IQ demodulation circuitry.
It should be noted that the structure of fig. 1 is only illustrative, and may include more or fewer components than those shown in fig. 1, or have a different configuration than that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware and/or software.
Based on the ultrasonic device shown in fig. 1, the ultrasonic device includes a plurality of voice command sets, and the voice command switching method of the ultrasonic device is shown in fig. 2, and may specifically include, but is not limited to, the following steps S100, S200 and S300.
Step S100, when the ultrasonic equipment runs in a target operation scene, a target voice command set corresponding to the target operation scene is invoked from the plurality of voice command sets, and the target voice command set only contains voice commands applicable to the target operation scene;
step S200, receiving a voice command sent by a user, and matching the voice command in a target voice command set;
and step S300, when the matching is successful, controlling the ultrasonic equipment to execute the voice command under the target operation scene.
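To make steps S100 to S300 concrete, the following minimal Python sketch (illustrative only; the class, scene names and command words are hypothetical and not taken from the patent) keeps one active per-scene command set and matches incoming commands only against it:

```python
# Hypothetical per-scene command sets; scene names and command words are
# examples only, not the patent's vocabulary.
VOICE_COMMAND_SETS = {
    "main_menu":   {"enter b mode gain", "enter measurement", "freeze image"},
    "b_mode_gain": {"gain up", "gain down", "gain reset", "exit"},
}

class VoiceCommandSwitcher:
    def __init__(self, command_sets):
        self.command_sets = command_sets
        self.active_scene = None
        self.active_set = set()

    def enter_scene(self, scene):
        # Step S100: when the device runs in a target operation scene,
        # invoke only the command set that belongs to that scene.
        self.active_scene = scene
        self.active_set = self.command_sets[scene]

    def handle(self, spoken_text, execute):
        # Steps S200/S300: match the user's command against the active
        # (scene-specific) set and execute it only on a successful match.
        if spoken_text in self.active_set:
            execute(self.active_scene, spoken_text)
            return True
        return False
```

Keeping the active set as a small in-memory collection makes a scene switch a simple lookup, which mirrors the point above that only the commands of the current scene need to be distinguished.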
At present, an ultrasonic device recognizes a user's voice command through a speech recognition engine. The engine generally builds voice models based on the principle of pattern matching, and the actual content of a voice command is determined by comparing the user's command with the voice models in the engine. A typical pattern recognition pipeline comprises basic modules such as preprocessing, feature extraction and template matching. First, the input speech is preprocessed, which includes framing, windowing, pre-emphasis and so on. The next step is feature extraction, in which choosing appropriate feature parameters is particularly important; commonly used feature parameters include the pitch period, formants, short-time average energy or amplitude, linear prediction coefficients, perceptual weighting prediction coefficients and the short-time average zero-crossing rate. During actual recognition, templates are generated for the test speech following the same procedure as in training, and recognition is finally performed according to a distortion criterion; common distortion criteria include the Euclidean distance, the covariance matrix and the Bayesian distance.
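As an illustration of the basic modules just listed, the following sketch (a simplified stand-in, not the patent's implementation) chains pre-emphasis, framing with a Hamming window, a short-time energy feature, and nearest-template matching under a Euclidean distortion criterion:

```python
import numpy as np

def preprocess(signal, frame_len=400, hop=160, alpha=0.97):
    # Pre-emphasis followed by framing and Hamming windowing.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    frames = [emphasized[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(emphasized) - frame_len + 1, hop)]
    return np.array(frames)

def short_time_energy(frames):
    # One of the simple feature parameters mentioned above.
    return np.sum(frames ** 2, axis=1)

def recognize(signal, templates):
    # templates: {command_name: reference feature vector}; the command with
    # the smallest Euclidean distortion to the test features is returned.
    features = short_time_energy(preprocess(signal))

    def distance(ref):
        n = min(len(ref), len(features))  # crude length alignment for the sketch
        return np.linalg.norm(features[:n] - ref[:n])

    return min(templates, key=lambda name: distance(templates[name]))
```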
It follows that, in order to recognize more voice commands, the number of voice models must increase correspondingly, and a newly input utterance then has to be matched against a large number of voice models, which easily leads to long matching times and low matching accuracy. Given the complexity of current ultrasonic equipment in both functions and operations, a globally responsive voice command set able to identify all of a user's voice commands is necessarily huge, which harms recognition accuracy; conversely, guaranteeing recognition accuracy limits the addition of new voice control functions to the equipment.
The embodiment of the invention provides the voice command switching method of steps S100 to S300 to address the problems of a huge global voice command set. Since the ultrasonic equipment is only ever in one working state at a time, the voice commands usable in that working state are limited; if the speech recognition engine only matches against the limited voice models of that working state, the corresponding voice command set is greatly reduced and, compared with a global voice command set, the voice commands of the current working state can be recognized more accurately. Specifically, when the ultrasonic device runs in one of its operation scenes (i.e., the target operation scene), the target voice command set corresponding to that scene is invoked, and this set is only used to identify voice commands for the functions of the target operation scene. For example, when the device runs in a gain adjustment scene of the B-image mode (such as an adjustment interface for adjusting parameters), the target voice command set related to gain adjustment in the B-image mode is invoked; it may include commands such as gain increase, gain decrease and gain reset, while voice commands unrelated to gain adjustment, such as volume increase or ultrasonic mode switching, cannot be recognized in this scene. It should be noted that, to allow the user to exit the current target operation scene by voice, the target voice command set includes a scene exit command for exiting the target operation scene; accordingly, when the target operation scene needs to be exited, the voice command switching method further includes:
step S400, when a scene exit command is received, the target operation scene is switched to a preset operation scene of the ultrasonic equipment.
The preset operation scene may be the function main interface of the ultrasonic equipment, or another preset interface. From the function main interface, the user can enter other operation scenes according to his or her functional needs; for example, the function main interface includes function keys (such as the A mode, B mode and C mode of the ultrasonic probe), which may be touch keys on a display screen or corresponding virtual function keys selectable with a mouse, the enter key and the like.
It is noted that, since the ultrasound device has multiple operation scenes and, accordingly, multiple voice command sets, the individual command sets, although differing in their voice commands, may share some identical commands, such as the scene exit command described above; commands that are the same across the various command sets of course also belong to the voice commands applicable to the target operation scene.
It can be understood that after exiting the target operation scene and entering the preset operation scene through the scene exit command, the preset operation scene also includes a voice command for reentering the target operation scene. Specifically, the voice command set corresponding to the preset operation scene includes a preset voice command for switching to the target operation scene;
the voice command switching method further comprises the following steps:
step S500, under the condition that the ultrasonic equipment is in a preset operation scene, controlling the ultrasonic equipment to switch from the preset operation scene to a target operation scene according to a preset voice command sent by a user.
For example, the adjustment interface used by the ultrasonic probe to adjust the gain parameter in the B-image mode is the target operation scene, and exiting this adjustment interface returns directly to the function main interface. When the target operation scene is exited to the preset operation scene through the scene exit command, the preset operation scene invokes its corresponding voice command set, which includes the preset voice command; when the user then issues the preset voice command and it is recognized, the device switches from the preset operation scene back into the target operation scene.
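Continuing the hypothetical switcher sketched earlier, the exit/re-enter behaviour of steps S400 and S500 might look as follows (scene names and command words are examples only, not the patent's vocabulary):

```python
switcher = VoiceCommandSwitcher(VOICE_COMMAND_SETS)
PRESET_SCENE = "main_menu"  # hypothetical stand-in for the function main interface

def execute(scene, command):
    # Step S400: a scene exit command returns to the preset operation scene.
    if command == "exit":
        switcher.enter_scene(PRESET_SCENE)
    # Step S500: in the preset scene, the preset voice command re-enters the
    # target operation scene and swaps in its command set.
    elif scene == PRESET_SCENE and command == "enter b mode gain":
        switcher.enter_scene("b_mode_gain")
    else:
        print(f"[{scene}] executing: {command}")

switcher.enter_scene("b_mode_gain")
switcher.handle("gain up", execute)            # matched in the B-mode gain set
switcher.handle("exit", execute)               # back to the main interface set
switcher.handle("enter b mode gain", execute)  # re-enter the target scene
```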
Accordingly, before entering the target operation scene, the user needs to control the ultrasonic equipment so that it enters the corresponding working state, and thus the target voice command set corresponding to the target operation scene is invoked. Specifically, before the ultrasonic device runs in the target operation scene, one of the following occurs:
(1) Receiving a scene switching voice command sent by a user, wherein the scene switching voice command is used for controlling the ultrasonic equipment to switch to a target operation scene;
(2) Receiving a scene switching instruction manually input by a user, wherein the scene switching instruction is used for controlling the ultrasonic equipment to switch to a target operation scene;
(3) And determining a previous operation scene of the ultrasonic equipment in the target operation scene according to the preset workflow of the ultrasonic equipment.
For case (1), the user controls the ultrasonic device by voice to execute a certain function or operation and thereby enter the target operation scene; for example, when the user issues the scene switching voice command "enter B image mode gain adjustment" on the function main interface, the device switches to the gain adjustment interface of the B-image mode and invokes the voice command set related to B-image-mode gain adjustment. For case (2), the user controls the device to enter the target operation scene through a contact input device (such as a mouse, keyboard or touch screen); for example, when the user clicks a key on the function main interface to enter the gain adjustment interface of the B-image mode, the device invokes the voice command set related to B-image-mode gain adjustment. For case (3), the ultrasonic equipment switches functional scenes automatically according to a preset workflow: each functional scene lasts for a certain time, and when that time is over, the equipment automatically switches to the next functional scene and automatically invokes the voice command set corresponding to it.
Corresponding to these three situations, the ultrasonic equipment runs into the target operation scene by one of the following methods (a sketch of the workflow-driven case follows this list):
switching to a target operation scene according to the scene switching voice command, and calling a target voice command set corresponding to the target operation scene;
switching to a target operation scene according to a scene switching instruction, and calling a target voice command set corresponding to the target operation scene;
and ending the previous operation scene of the target operation scene, switching to the target operation scene, and calling a target voice command set corresponding to the target operation scene.
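For the workflow-driven case, a minimal sketch built on the hypothetical VoiceCommandSwitcher shown earlier (the scene names, durations and the listen() callback are assumptions, not from the patent): each functional scene lasts a preset time, and the next scene's command set is invoked automatically when that time elapses.

```python
import time

# Hypothetical preset workflow: (scene name, duration in seconds).
# Assumes every scene listed here has an entry in switcher.command_sets.
PRESET_WORKFLOW = [("b_mode_scan", 30), ("gain_adjust", 10), ("measurement", 20)]

def run_workflow(switcher, workflow, execute, listen):
    for scene, duration in workflow:
        # Automatic switch: end the previous operation scene, enter the next
        # one and invoke its voice command set without any user action.
        switcher.enter_scene(scene)
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            # listen() is a placeholder microphone callback returning the
            # recognized text or None when the timeout expires.
            spoken = listen(timeout=deadline - time.monotonic())
            if spoken:
                switcher.handle(spoken, execute)
```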
It may be understood that, in the embodiment of the present invention, the plurality of voice command sets are stored in the ultrasonic device, or in a memory connected to the ultrasonic device, in a preset manner, and only one voice command set is in effect in any operation scene. When a voice command issued by the user is received, the command is matched against the currently effective voice command set; the matching process may specifically be performed according to the following steps:
step S210, extracting acoustic characteristics of a voice command;
step S220, matching a target voice command set according to the acoustic characteristics, wherein the target voice command set comprises a plurality of acoustic models;
in step S230, when the acoustic feature hits one of the acoustic models in the target voice command set, it is determined that the voice command matches successfully in the target voice command set.
It will be appreciated that speech recognition techniques typically recognize speech based on acoustic models, which are one of the most important parts of a speech recognition system; most current mainstream speech recognition systems are modeled using hidden Markov models (HMMs), statistical models that describe a Markov process with hidden unknown parameters. In a hidden Markov model, the states are not directly visible, but some variables affected by the states are visible. The acoustic model describes the correspondence probabilities between speech and phonemes. Phonemes are the smallest phonetic units, partitioned according to the natural properties of speech: acoustically, a phoneme is the smallest unit of speech divided from the perspective of sound quality; physiologically, one pronunciation action forms one phoneme. The acoustic model training in the embodiment of the invention may adopt existing mature training methods; for example, the tools and processes of HTK (the Hidden Markov Model Toolkit) can be used to train acoustic models on speech and obtain the corresponding acoustic models, which is not limited here.
In steps S210 to S230 above, when a voice command is received, the acoustic features of the voice command are extracted and matched against a voice command set that contains a plurality of trained acoustic models corresponding to the different executable commands of the ultrasonic device. If, during matching, the acoustic features match a certain acoustic model (e.g., the confidence exceeds a certain value), the current voice command is considered to correspond to the executable command of the matched acoustic model, and the ultrasonic device is controlled to perform the corresponding operation.
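A hedged sketch of the matching logic of steps S210 to S230: the extracted acoustic features are scored only against the acoustic models of the currently active set, and a hit is declared only when the best score clears a confidence threshold (the scoring function and the threshold value are placeholders, not taken from the patent).

```python
def match_in_active_set(acoustic_features, acoustic_models, score_fn,
                        confidence_threshold=0.8):
    """Return the best-matching command in the active set, or None.

    acoustic_models : {command_name: trained acoustic model} for the
                      currently active (scene-specific) command set only
    score_fn        : returns a confidence in [0, 1] for features vs. model
    """
    best_command, best_score = None, 0.0
    for command, model in acoustic_models.items():
        score = score_fn(acoustic_features, model)
        if score > best_score:
            best_command, best_score = command, score
    # Step S230: the command "hits" a model only above the threshold;
    # otherwise matching fails and the device takes no action.
    return best_command if best_score >= confidence_threshold else None
```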
In some cases, the target operation scene serves as an upper-level interface. When a lower-level interface is entered, the lower-level interface does not have to return to the preset function interface each time; it can instead return to the target operation scene, or the target voice command set of the target operation scene can be recognized directly within the lower-level interface at the same time. The specific implementations of these two cases are as follows:
for the case that the lower interface returns to the target operation scene, the target voice command set of the target operation scene includes the lower function voice command of the lower function scene entering the target operation scene, and then the voice command switching method in the embodiment of the invention further includes:
step S601, when the ultrasonic equipment is in a target operation scene, receiving a lower-level function voice command sent by a user;
step S602, switching to a lower-level function scene according to a lower-level function voice command, and calling a lower-level function voice command set corresponding to the lower-level function scene, wherein the lower-level function voice command set comprises a lower-level function exit command;
in step S603, when a lower function exit command sent by the user is received, the lower function scene is switched to the target operation scene, and the lower function voice command set is switched to the target voice command set.
In this way, after entering the lower-level interface, the user does not have to return directly to the preset operation scene when leaving it, but can return to the upper-level interface through the lower-level function exit command. This makes it convenient for the user to switch between the upper- and lower-level interfaces of one set of functions and improves the efficiency of voice control.
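The return-to-upper-interface behaviour of steps S601 to S603 can be sketched with a simple scene stack (a hypothetical helper, not a data structure described in the patent): entering a lower-level scene pushes it on top of the target scene, and the lower-level exit command pops back to the target scene's command set.

```python
class SceneCommandStack:
    """Hypothetical helper: tracks the active scene plus the scenes above it."""

    def __init__(self, command_sets, initial_scene):
        self.command_sets = command_sets
        self.stack = [initial_scene]          # top of the stack = active scene

    @property
    def active_set(self):
        return self.command_sets[self.stack[-1]]

    def enter_lower_scene(self, scene):
        # Steps S601/S602: a lower-level function voice command switches to
        # the lower-level scene and invokes its command set.
        self.stack.append(scene)

    def exit_lower_scene(self):
        # Step S603: the lower-level exit command returns to the target
        # operation scene and restores its command set.
        if len(self.stack) > 1:
            self.stack.pop()
```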
For the case in which the lower-level interface recognizes the target voice command set at the same time, no voice command for returning to the target operation scene is required; instead, the target voice command set includes the lower-level function voice command for entering a lower-level function scene of the target operation scene, and the voice command switching method of the embodiment of the invention further includes the following steps:
step S604, when the ultrasonic equipment is in a target operation scene, receiving a subordinate function voice command sent by a user;
step S605, switching to a lower-level functional scene according to the lower-level functional voice command, and merging the lower-level functional voice command set corresponding to the lower-level functional scene with the target voice command set.
In this way, after entering the lower-level interface, the voice command set of the upper-level interface can still be used directly without exiting; in practice, the two voice command sets of the upper- and lower-level interfaces are merged, so the volume of the voice library does not become too large, the speech recognition rate is guaranteed, and the efficiency of voice control is improved.
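For the merging variant of steps S604 and S605, a one-function sketch (the set names are hypothetical): the lower-level scene's command set is unioned with the target scene's set so that both remain recognizable, while the combined set stays far smaller than a global one.

```python
def merged_command_set(command_sets, target_scene, lower_scene):
    # Steps S604/S605: after entering the lower-level functional scene,
    # recognize the union of its set and the target scene's set, so the
    # upper-level commands stay available without exiting the lower interface.
    return command_sets[target_scene] | command_sets[lower_scene]
```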
Through the above steps, the ultrasonic equipment invokes the corresponding voice command set according to its different running states. Compared with the existing global voice command set, each voice command set in the embodiment of the invention is smaller and can accurately identify the user's voice commands, thereby improving the accuracy of speech recognition and the user experience.
The embodiment of the invention also provides ultrasonic equipment, which comprises:
an ultrasonic probe;
the transmitting/receiving circuit is used for controlling the ultrasonic probe to transmit ultrasonic waves to the ultrasonic detection object and receive ultrasonic echoes to obtain ultrasonic echo signals;
the processor is used for processing the ultrasonic echo signals and obtaining an ultrasonic image of the ultrasonic detection object;
a display for displaying the ultrasound image and/or a measurement result based on the ultrasound image;
the processor is also used for executing the voice command switching method of the ultrasonic equipment.
The embodiment of the invention also provides an ultrasonic device, which comprises at least one processor and a memory for communication connection with the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the voice command switching method of the ultrasonic device described previously.
Referring to fig. 8, the control processor 2001 and the memory 2002 in the ultrasonic device 2000 may, for example, be connected by a bus. The memory 2002 is a non-transitory computer-readable storage medium that can be used to store non-transitory software programs as well as non-transitory computer-executable programs. In addition, the memory 2002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one disk memory, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 2002 optionally includes memory located remotely relative to the control processor 2001, which may be connected to the ultrasound device 2000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Those skilled in the art will appreciate that the apparatus structure shown in fig. 8 is not limiting of the ultrasound device 2000 and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions that are executed by one or more control processors, for example, by one control processor 2001 in fig. 8, which may cause the one or more control processors to perform the voice command switching method in the above-described method embodiment, for example, to perform the method steps S100 to S300 in fig. 2, the method step S400 in fig. 3, the method step S500 in fig. 4, the method step S210 to S230 in fig. 5, the method step S601 to S603 in fig. 6, and the method step S604 to S605 in fig. 7 described above.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should also be appreciated that the various embodiments provided in the embodiments of the present application may be arbitrarily combined to achieve different technical effects.
While the preferred embodiments of the present application have been described in detail, the present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit and scope of the present application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (10)

1. A voice command switching method of an ultrasonic apparatus, wherein the ultrasonic apparatus includes a plurality of voice command sets, the voice command switching method comprising:
when the ultrasonic equipment runs in a target operation scene, a target voice command set corresponding to the target operation scene is invoked from the plurality of voice command sets, wherein the target voice command set only contains voice commands applicable to the target operation scene;
receiving a voice command sent by a user, and matching the voice command in the target voice command set;
and when the matching is successful, controlling the ultrasonic equipment to execute the voice command under the target operation scene.
2. The voice command switching method of an ultrasonic device according to claim 1, wherein the target voice command set includes a scene exit command for exiting the target operation scene;
the voice command switching method further comprises the following steps:
and when the scene exit command is received, switching the target operation scene to a preset operation scene of the ultrasonic equipment.
3. The voice command switching method of an ultrasonic device according to claim 2, wherein the preset operation scenario is a function main interface of the ultrasonic device.
4. The voice command switching method of an ultrasonic device according to claim 2 or 3, wherein the voice command set corresponding to the preset operation scene includes a preset voice command to switch to the target operation scene;
the voice command switching method further comprises the following steps:
and under the condition that the ultrasonic equipment is in the preset operation scene, controlling the ultrasonic equipment to switch from the preset operation scene to the target operation scene according to the preset voice command sent by the user.
5. The voice command switching method of an ultrasonic device according to claim 1, comprising one of the following before the ultrasonic device is operated to a target operation scenario:
receiving a scene switching voice command sent by a user, wherein the scene switching voice command is used for controlling the ultrasonic equipment to switch to the target operation scene;
receiving a scene switching instruction manually input by a user, wherein the scene switching instruction is used for controlling the ultrasonic equipment to switch to the target operation scene;
and determining the previous operation scene of the ultrasonic equipment in the target operation scene according to the preset workflow of the ultrasonic equipment.
6. The voice command switching method of an ultrasonic device according to claim 5, wherein the ultrasonic device is operated to the target operation scene by one of:
switching to the target operation scene according to the scene switching voice command, and calling a target voice command set corresponding to the target operation scene;
switching to the target operation scene according to the scene switching instruction, and calling a target voice command set corresponding to the target operation scene;
and ending the previous operation scene of the target operation scene, switching to the target operation scene, and calling a target voice command set corresponding to the target operation scene.
7. The voice command switching method of an ultrasonic device according to claim 1, wherein said matching the voice command in the target voice command set comprises:
extracting acoustic features of the voice command;
matching the target voice command set according to the acoustic features, wherein the target voice command set comprises a plurality of acoustic models;
and when the acoustic feature hits one of the acoustic models in the target voice command set, determining that the voice command is successfully matched in the target voice command set.
8. The voice command switching method of an ultrasonic device according to claim 1, wherein the target voice command set includes a lower-level function voice command into a lower-level function scene of the target operation scene, the voice command switching method further comprising:
when the ultrasonic equipment is in the target operation scene, receiving the subordinate function voice command sent by a user;
switching to the lower function scene according to the lower function voice command, and calling a lower function voice command set corresponding to the lower function scene, wherein the lower function voice command set comprises a lower function exit command;
and when the lower-level function exit command sent by the user is received, switching the lower-level function scene to the target operation scene, and switching the lower-level function voice command set to the target voice command set.
9. The voice command switching method of an ultrasonic device according to claim 1, wherein the target voice command set includes a lower-level function voice command into a lower-level function scene of the target operation scene, the voice command switching method further comprising:
when the ultrasonic equipment is in the target operation scene, receiving the subordinate function voice command sent by a user;
switching to the lower-level functional scene according to the lower-level functional voice command, and merging the lower-level functional voice command set corresponding to the lower-level functional scene with the target voice command set.
10. An ultrasound device, comprising:
an ultrasonic probe;
the transmitting/receiving circuit is used for controlling the ultrasonic probe to transmit ultrasonic waves to an ultrasonic detection object and receive ultrasonic echoes to obtain ultrasonic echo signals;
the processor is used for processing the ultrasonic echo signals and obtaining an ultrasonic image of the ultrasonic detection object;
the display is used for displaying the ultrasonic image and/or a measurement result obtained based on the ultrasonic image;
the processor is further configured to perform the voice command switching method of the ultrasound apparatus of any one of the preceding claims 1 to 9.
CN202111219582.9A 2021-10-20 2021-10-20 Voice command switching method of ultrasonic equipment and ultrasonic equipment Pending CN115990028A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111219582.9A CN115990028A (en) 2021-10-20 2021-10-20 Voice command switching method of ultrasonic equipment and ultrasonic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111219582.9A CN115990028A (en) 2021-10-20 2021-10-20 Voice command switching method of ultrasonic equipment and ultrasonic equipment

Publications (1)

Publication Number Publication Date
CN115990028A true CN115990028A (en) 2023-04-21

Family

ID=85992968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111219582.9A Pending CN115990028A (en) 2021-10-20 2021-10-20 Voice command switching method of ultrasonic equipment and ultrasonic equipment

Country Status (1)

Country Link
CN (1) CN115990028A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination