WO2020215295A1 - Voice interaction method when multiple medical devices coexist, medical system, and medical device - Google Patents

Voice interaction method when multiple medical devices coexist, medical system, and medical device

Info

Publication number
WO2020215295A1
WO2020215295A1 (PCT/CN2019/084442)
Authority
WO
WIPO (PCT)
Prior art keywords
interaction
user
feature
machine
voice
Prior art date
Application number
PCT/CN2019/084442
Other languages
English (en)
French (fr)
Inventor
Zhao Liang (赵亮)
Chen Wei (陈巍)
Tan Lin (谈琳)
Luo Jun (罗军)
Original Assignee
Shenzhen Mindray Bio-Medical Electronics Co., Ltd. (深圳迈瑞生物医疗电子股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mindray Bio-Medical Electronics Co., Ltd.
Priority to CN201980092353.XA (published as CN113454732B)
Priority to PCT/CN2019/084442 (published as WO2020215295A1)
Publication of WO2020215295A1

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61GTRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G12/00Accommodation for nursing, e.g. in hospitals, not covered by groups A61G1/00 - A61G11/00, e.g. trolleys for transport of medicaments or food; Prescription lists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs

Definitions

  • This application relates to the technical field of medical equipment, and in particular to a method for voice interaction when multiple medical devices coexist, a medical device that executes this method, and a medical system that executes this method.
  • Such scenarios cause the user's voice interaction to lose directionality: it becomes impossible to direct voice interaction, and the medical functions it triggers, at one device or at a subset of the devices. If some of the medical equipment in the spatial environment is in the monitoring stage while other equipment is idle, executing the same voice command can easily throw some of the equipment into a confused logical state. Further, if multiple medical devices provide voice feedback to the user at the same time, the feedback messages may interfere with one another, preventing the user from receiving the information effectively.
  • This application proposes a voice interaction method when multiple medical devices coexist, to clarify the directivity of voice commands when multiple medical devices exist in the same spatial environment.
  • This application also relates to a medical device that implements this method, and a medical system that implements this method.
  • This application specifically includes the following technical solutions:
  • the voice interaction method when multiple medical devices coexist in this application includes:
  • the identity feature includes a single-machine feature and a multi-machine feature
  • the determining that the identity feature exists in the voice information includes:
  • the voice information includes a single-machine feature or a multi-machine feature
  • the activation of the local interaction system to interact with the user includes:
  • the identity feature is a stand-alone feature
  • the native interactive system is activated and interacts with the user based on the interaction sequence.
  • the determining that the identity feature exists in the voice information includes:
  • the identity feature is a multi-machine feature
  • the voice information also includes timing information
  • the activation of the local interactive system and interaction with the user based on the interaction sequence includes:
  • the multi-machine feature includes a ranking feature
  • the determining that the identity feature exists in the voice information includes:
  • the identity feature is a multi-machine feature
  • the activation of the local interactive system and interaction with the user based on the interaction sequence includes:
  • the determining that the identity feature exists in the voice information includes:
  • the identity feature is a multi-machine feature
  • the voice information also includes host information
  • the activation of the local interactive system and interaction with the user based on the interaction sequence includes:
  • the multi-machine feature includes a ranking feature
  • the determining that the identity feature exists in the voice information includes:
  • the identity feature is a multi-machine feature
  • the activation of the local interactive system and interaction with the user based on the interaction sequence includes:
  • the determining that the identity feature exists in the voice information includes:
  • the voice information includes timing information
  • the activation of the local interaction system to interact with the user includes:
  • the identity feature includes a ranking feature
  • the determining that the identity feature exists in the voice information includes:
  • the activation of the local interaction system to interact with the user includes:
  • the determining that the identity feature exists in the voice information includes:
  • the voice information includes host information
  • the activation of the local interaction system to interact with the user includes:
  • the identity feature includes a ranking feature, and when it is determined that the identity feature exists in the voice information, it includes:
  • the activation of the local interaction system to interact with the user includes:
  • interaction sequence determined by the host includes:
  • the local interactive system is started to interact with the user indirectly through the host.
  • the identity feature includes a sound source distance condition
  • the acquisition of voice information in the environment includes:
  • the determining that the identity feature exists in the voice information includes:
  • the identity feature includes a volume condition
  • the acquisition of voice information in the environment includes:
  • the determining that the identity feature exists in the voice information includes:
  • the sound source distance condition includes a sound source distance threshold, and/or
  • the sound source distance value of the local machine is greater than the sound source distance value of any other medical equipment in the environment.
  • the volume condition includes a volume threshold, and/or
  • the volume value of the local machine is greater than the volume value of any other medical equipment in the environment.
  • after starting the native interactive system to interact with the user, the method further includes:
  • controlling the local interaction system to interact with the user based on the interaction sequence of the networking information.
  • the feedback information includes visual feedback information and/or auditory feedback information.
  • the voice information in the environment is obtained to analyze and determine whether the identity feature exists in the voice information.
  • These medical devices may be medical devices of the same type or model, or medical devices of different types or models.
  • the introduction of identity features allows each medical device to obtain independent voice start instructions in the space environment.
  • the medical device activates the interactive system of the machine to interact with the user.
  • the interactive system of the medical device may include a voice interaction function, or various functions such as a visual interaction function, communication interaction, etc., to interact with the user.
  • The voice interaction method of the present application for multiple coexisting medical devices uses distinct identity features to make interaction with each medical device in the spatial environment unambiguous: the user addresses a device simply by calling out that device's identity feature. This avoids the defect that medical equipment of the same model or type, having similar functions and therefore the same or similar voice commands, leaves the target of a voice command unclear and throws some of the equipment into a confused logical state.
  • this application also relates to a medical device, including a processor, an input device, an output device, and a storage device.
  • the processor, input device, output device, and storage device are connected to each other, wherein the storage device is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions, Perform the above-mentioned voice interaction method when multiple medical devices coexist.
  • this application also relates to a medical system, including:
  • the acquisition module is used to acquire voice information in the environment
  • An analysis module configured to determine that the identity feature exists in the voice information, and the identity feature is obtained through pre-allocation
  • the control module is used to start the local interactive system to interact with the user.
  • the medical system further includes a pairing module, and the pairing module is used to obtain pre-allocated identity features.
  • the medical system further includes a sorting module, and the sorting module is used to determine the sequence of interaction between the local interactive system and the user.
  • the sorting module determines the timing of interaction between the local interactive system and the user based on the timing information acquired by the acquiring module.
  • the sorting module determines the sequence of interaction between the local interactive system and the user based on the sorting characteristics acquired by the pairing module.
  • the medical system further includes a judgment module, and the judgment module is used to determine whether the machine acts as a host to interact with the user.
  • the judgment module determines whether the machine is used as a host to interact with the user based on the host information acquired by the acquisition module.
  • the sorting module determines whether the machine is used as a host to interact with the user based on the sorting characteristics acquired by the pairing module.
  • the medical system further includes a sound source ranging module, and the sound source ranging module is used to detect the sound source distance value.
  • the medical system further includes a volume detection module, and the volume detection module is used to detect a volume value.
  • the medical system further includes a networking module, and the networking module is used to start an interactive system of other medical devices in the environment that matches the networking information.
  • the medical system further includes a feedback module, and the feedback module is used to generate and display feedback information to show that the local interactive system has been activated.
  • the pre-allocated identity characteristics obtained by the pairing module include single-machine characteristics and multi-machine characteristics
  • when the analysis module determines that the voice information has the identity feature, it determines whether the voice information includes a single-machine feature or a multi-machine feature;
  • control module starts the local interactive system to interact with the user
  • analysis module determines that the identity feature is a stand-alone feature
  • control module starts the local interactive system to interact with the user
  • the control module activates the local interaction system and interacts with the user based on the interaction sequence.
  • the analysis module is configured to analyze the multi-machine feature and time sequence information included in the voice information obtained when it is determined that the identity feature exists in the voice information;
  • the sorting module is used to control the local interactive system to interact with the user based on the interaction timing of the timing information.
  • the pairing module includes a sorting feature in the pre-allocated multi-machine features.
  • when the analysis module determines that the identity feature exists in the voice information, it determines that the identity feature is a multi-machine feature;
  • the ranking module is also used to analyze the ranking of the ranking features in the identity features
  • the control module is configured to interact with the user based on the interaction sequence of the ranking feature when the interactive system is started to interact with the user based on the interaction sequence.
  • the analysis module analyzes that the voice information includes the multi-machine feature and host information
  • the judgment module is used for judging whether the machine is the host based on the host information
  • the control module is configured to interact with the user based on the interaction timing determined by the host.
  • the pre-allocated multi-machine features obtained by the pairing module include a sorting feature
  • the analysis module is used to determine that the identity feature is a multi-machine feature
  • the sorting module is also used to analyze and compare the sorting features of the machine
  • the judgment module is used to judge whether the machine is the host according to the comparison result
  • the control module is configured to interact with the user based on the interaction timing determined by the host.
  • the analysis module is configured to determine that the voice information further includes time sequence information when the identity feature exists in the voice information
  • the control module is configured to interact with the user based on the interaction sequence of the sequence information when starting the local interactive system to interact with the user.
  • the identity features distributed and obtained by the pairing module include a ranking feature
  • the ranking module is configured to compare the ranking of the ranking feature when the identity feature exists in the voice information
  • the control module is configured to interact with the user based on the interaction sequence of the sorting feature when starting the local interactive system to interact with the user.
  • the analysis module is configured to determine that the voice information also includes host information when the identity feature exists in the voice information
  • the judgment module is used for judging whether the machine is the host based on the host information
  • the control module is configured to interact with the user based on the interaction timing determined by the host when starting the local interactive system to interact with the user.
  • the identity features allocated and obtained by the pairing module include a ranking feature
  • the analysis module is used to determine that the identity feature exists in the voice information
  • the ranking module compares the ranking of the machine's ranking feature
  • the judgment module is used for judging whether the machine is the host based on the comparison result
  • the control module is used to interact with the user based on the interaction sequence determined by the host when starting the local interactive system to interact with the user.
  • interaction sequence determined by the host includes:
  • the control module activates the local interactive system to interact with the user;
  • the control module activates the local interactive system to indirectly interact with the user through the host.
  • the pre-allocated identity feature obtained by the pairing module includes a sound source distance condition
  • the sound source ranging module is used to acquire the sound source distance value
  • the analysis module is used to determine that the distance to the sound source satisfies the sound source distance condition
  • the analysis module is used to determine that the identity feature exists in the voice information.
  • the pre-allocated identity features obtained by the pairing module include volume conditions
  • the volume detection module is used to acquire the volume value of the environmental voice information
  • the analysis module is used to determine that the volume value meets the volume condition
  • the analysis module is also used to determine the presence of the identity feature in the voice information.
  • the pre-allocated sound source distance condition obtained by the pairing module includes a sound source distance threshold, and/or
  • the sound source distance value of the local machine is greater than the sound source distance value of any other medical equipment in the environment.
  • the pre-allocated volume condition obtained by the pairing module includes a volume threshold, and/or
  • the volume value of the local machine is greater than the volume value of any other medical equipment in the environment.
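The sound-source distance and volume conditions above can be sketched as two simple predicates. This is an illustrative reading rather than the patent's implementation: the function names, thresholds, and the assumption that the device nearest to (and loudest for) the speaker is the one addressed are our own.

```python
def meets_distance_condition(own_distance, other_distances, threshold=None):
    """Sound source distance condition: within the threshold (if one is set)
    and, under one plausible reading, the smallest distance among all
    devices that heard the utterance."""
    if threshold is not None and own_distance > threshold:
        return False
    return all(own_distance <= d for d in other_distances)


def meets_volume_condition(own_volume, other_volumes, threshold=None):
    """Volume condition: above the threshold (if one is set) and louder on
    this machine than on any other medical device in the environment."""
    if threshold is not None and own_volume < threshold:
        return False
    return all(own_volume >= v for v in other_volumes)
```

Using either half of each predicate alone (threshold only, or cross-device comparison only) matches the "and/or" structure of the clauses above.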
  • the acquisition module is used to acquire voice information in the environment
  • the analysis module is used to determine that there is exit information in the voice information
  • the control module is also used to exit the native interactive system and stop interacting with the user.
  • control module starts the local interactive system to interact with the user, it further includes:
  • the acquisition module is used to acquire voice information in the environment
  • the analysis module is used to determine that there is networking information in the voice information
  • the networking module is used to start the interactive system of the remaining medical equipment in the environment that matches the networking information based on the networking information;
  • the control module is also used to control the local interaction system to interact with the user based on the interaction timing of the networking information.
  • the feedback information generated and displayed by the feedback module includes visual feedback information and/or auditory feedback information.
  • Figure 1 is a flowchart of a voice interaction method when multiple medical devices coexist in an embodiment of the present application.
  • Figure 2 is a schematic diagram of a medical device in an embodiment of the present application.
  • Figure 3 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 4 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 4a is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 5 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 5a is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 6 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 6a is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 7 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 8 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 8a is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 9 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 10 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 11 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 12 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 13 is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 13a is a flowchart of a voice interaction method when multiple medical devices coexist in another embodiment of the present application.
  • Figure 14 is a schematic diagram of a medical system in an embodiment of the present application.
  • Figure 15 is a schematic diagram of a medical system in another embodiment of the present application.
  • the voice interaction method when multiple medical devices coexist in this application includes:
  • Each of the multiple medical devices 100 includes an interactive system 110, so that the medical device 100 implements its interactive function with the user through the interactive system 110.
  • the interaction system 110 of the medical device 100 may include a voice interaction function, or various functions such as a visual interaction function, a communication interaction, and the like to interact with the user.
  • the interactive system 110 of the medical device 100 has a voice interactive function, the user's voice information can be directly obtained through the interactive system 110 in real time.
  • the medical device 100 can also monitor the user's voice information in the spatial environment through a dedicated audio collection device.
  • The description below expands on the case in which the interactive system 110 of the medical device 100 itself has a voice interaction function.
  • A medical device 100 that instead uses a dedicated audio collection device follows similar principles when implementing the solution of the present application, and is not hindered by the lack of a voice interaction function.
  • the interactive systems 110 of multiple medical devices 100 obtain the voice information of users in the environment in real time. Because the medical device 100 has an interactive system 110, and the interactive system 110 can interact with the user, the interactive system 110 has the function of acquiring user voice information.
  • After the medical device 100 obtains the voice information of users in the spatial environment, it needs to analyze whether the voice information contains an identity feature corresponding to this machine.
  • the voice information includes all the conversations of all users in the space environment.
  • By analyzing the voice information, the medical device 100 can determine whether the voice information contains the identity feature corresponding to this machine.
  • Identity features need to be obtained through pre-allocation. Before interacting with the multiple medical devices 100 that have interactive functions in the spatial environment, the user needs to set a specific identity feature for each medical device 100. Each identity feature carries a unique identifier that distinguishes it from every other identity feature, so that when the user calls out the identity feature bearing that unique identifier, the call matches the medical device 100 corresponding to that identity feature, and the user interacts with that medical device 100.
  • the assignment of identity characteristics can be done in a spatial environment. After multiple medical devices 100 are placed in the same spatial environment, the user can assign identity features to each medical device 100 according to the specific number and category of the medical devices 100 in the spatial environment.
  • the assignment of the identity feature can be done by using various methods such as buttons, touch input, or voice input on each medical device 100.
  • The medical device 100 confirms its identity feature either when the user calls the feature out, or when the user makes a selection on the medical device 100 or inputs text and numbers, which the medical device 100 then converts into the identity feature corresponding to this machine.
  • The simplest implementation is to number the medical devices 100 in the spatial environment one by one: if there are 9 medical devices 100 with a voice interaction function in the spatial environment, the numbers 1 to 9 are entered on the 9 medical devices 100 in turn,
  • and each number serves as the identity feature of one of the nine medical devices 100. The number obtained by any given medical device 100 is the identity feature corresponding to that device, and the number is also the unique identifier that distinguishes its identity feature from those of the other medical devices 100. If the user's spoken message contains the number "4", the user has called out the voice information whose identity feature is "4".
  • Among the nine medical devices 100 in the spatial environment, the call therefore corresponds to the device whose identity feature is defined as "4"; the 8 medical devices 100 numbered "1, 2, 3, 5, 6, 7, 8, 9" did not receive an identity feature matching the call and are considered not selected.
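The numbering example above can be sketched as follows. Speech recognition is assumed to have already produced a word list, and all names here are illustrative, not part of the patent.

```python
def addressed_features(utterance_tokens, identity_features):
    """Return the identity features (device numbers) called out in the utterance."""
    return [f for f in identity_features if f in utterance_tokens]


features = [str(n) for n in range(1, 10)]        # devices numbered "1" .. "9"
tokens = ["device", "4", "start", "monitoring"]  # recognised utterance
addressed_features(tokens, features)             # only device "4" is selected
```

The devices whose numbers are absent from the utterance simply see an empty match and remain unselected.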
  • The assignment of identity features can also rely on original factory settings. Each medical device 100 receives an identity ID when it leaves the factory; this ID serves as an identification code that distinguishes the device from any other medical device 100 on the market, and can therefore distinguish the device 100 from any other medical device 100 in any spatial environment. By similar logic, the identity ID can be used as the unique identifier of the medical device 100, or a coding mechanism can be built independently of the identity ID to serve as the device's unique identifier; either way, the medical device 100 obtains its pre-allocated identity feature as early as the factory stage. Because each medical device 100 is pre-allocated an identity feature at the factory that differs from that of any other medical device 100, in any spatial environment all of the medical devices 100 already hold pre-allocated identity features.
  • the medical device 100 starts the interaction system 110 of the machine to interact with the user.
  • the activation of the native interactive system or the interaction between the medical device and the user mentioned in this application refers to the activation of the interactive system or the interaction function between the medical device and the user.
  • starting the local interactive system 110 to interact with the user mentioned in this embodiment refers to starting the interactive function of the interactive system 110 with the user.
  • the medical device 100 may obtain the voice information in the spatial environment through the interactive system 110. Therefore, the interactive system 110 may also perform the task of acquiring voice information before interacting with the user.
  • the activation of the interactive system 110 does not depend on whether it is determined that the voice information has an identity feature corresponding to the machine.
  • the interactive function between the interactive system 110 and the user is activated by determining that the identity feature exists in the voice information.
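The relationship described above — voice acquisition always running, the user-facing interaction function gated by the identity feature — can be sketched as a small state machine. The class and method names are invented for illustration.

```python
class InteractiveSystem:
    """Minimal sketch: acquisition runs continuously, while the user-facing
    interaction function only switches on once an utterance containing this
    machine's identity feature is detected."""

    def __init__(self, identity_feature):
        self.identity_feature = identity_feature
        self.interaction_active = False

    def on_voice_info(self, utterance):
        # Acquisition itself does not depend on the identity feature;
        # only activation of the interaction function does.
        if self.identity_feature in utterance.split():
            self.interaction_active = True
        return self.interaction_active


device4 = InteractiveSystem("4")
device4.on_voice_info("device 7 start")  # addressed elsewhere: stays inactive
device4.on_voice_info("device 4 start")  # own feature heard: interaction activates
```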
  • The voice interaction method of the present application for multiple coexisting medical devices distinguishes each medical device 100 in the spatial environment by assigning it a corresponding identity feature, so that by calling out the identity feature of a target medical device 100 the user starts the interactive function of that device's interactive system 110, achieving directional interactive operation of the medical devices 100 in the spatial environment.
  • Because the identity feature corresponding to the local device is obtained through pre-allocation, the device can interact with the user once it determines that the voice information called out by the user contains that identity feature.
  • When not activated, the medical device 100 remains in a state in which its interactive function is off. This prevents the machine from responding to, or executing, voice commands that the user directs at other medical devices 100 in the spatial environment, which would otherwise affect the normal operation of this machine.
  • the user calls out the voice information that includes the identity feature, he also defines the operation to be performed by the medical device 100 corresponding to the identity feature, or describes the specific function that needs to be activated, and the operation or specific function is It can be controlled by the interactive system 110 of the medical device 100.
  • the medical device 100 determines that the voice information contains the identity feature, it also receives the instruction including the startup operation of the corresponding function of the machine.
  • the medical device 100 can directly activate the interactive system 110 and activate the corresponding functions of the machine at the same time through the user's voice information.
  • The medical device 100 determines the identity feature corresponding to this machine and directly activates its data collection function, or feeds the collected data back to the user through the display screen, voice broadcast, or the like.
  • the medical device 100 determines that the user's start operation instruction is directed to the local machine after determining the identity feature of the corresponding machine, and then starts the function corresponding to the machine.
  • This situation can also be viewed as the user first activating the interactive system 110 through the identity feature, and then activating the corresponding function of the medical device 100 through the interactive system 110. The interaction between the interactive system 110 and the user is then the activation of the corresponding function, completed without any spoken response from the voice interactive function.
  • the interactive system 110 correspondingly completes the activation of the corresponding function without responding to the user, which is also a way of completing the interaction with the user after the interactive system 110 is activated by the identity feature.
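The pattern above — one utterance carrying both the identity feature and the operation, so that the matching device activates its interactive system and the named function in one step — can be sketched as below. The command vocabulary is invented for the example.

```python
KNOWN_COMMANDS = {"start-monitoring", "report-data", "stop"}


def command_for_machine(tokens, my_feature, known_commands=KNOWN_COMMANDS):
    """Return the function to activate on this machine, or None."""
    if my_feature not in tokens:
        return None              # utterance addressed to another device
    for token in tokens:
        if token in known_commands:
            return token         # feature and function in a single utterance
    return None                  # feature only: just activate the interaction
```

A device that gets a command back can start the function immediately, with no intermediate spoken response; a device whose feature is absent ignores the utterance entirely.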
  • the user's interaction with multiple medical devices 100 in the spatial environment can be one-to-one for only one medical device 100, or multiple medical devices 100 can be activated at a time to perform batch interaction.
  • the voice interaction method when multiple medical devices coexist in the present application is proposed, it is relatively convenient for the user to start the target medical device 100. Therefore, when the user activates the medical device 100 through the identity feature, he can arbitrarily extract and interact with one or more medical devices 100 he needs according to actual needs.
• the current medical device 100 with voice interaction function is competent for any one-to-one stand-alone voice interaction work.
  • the interaction system 110 of most medical equipment 100 is also designed based on one-to-one interaction logic, and the instructions are relatively simple and easy to execute.
  • the medical device 100 in order to realize batch operation of multiple medical devices 100 through voice commands, it is necessary to design a set of relatively complex and detailed interaction logic in terms of interaction timing and function correspondence.
  • the medical device 100 also needs to make certain adaptive settings in the process of responding to the user's batch operation, so as to meet the user's demand for simultaneously interacting with multiple medical devices 100 in a spatial environment.
  • the interactive system 110 of the medical device 100 is provided with a single-machine interactive mode and a multi-machine interactive mode at the same time.
  • voice interaction methods when multiple medical devices coexist include:
  • the voice information includes a single-machine feature or a multi-machine feature, and the identity feature is obtained through pre-allocation.
  • the identity features obtained by the medical device 100 through pre-allocation may include a single-machine feature only for a single-machine interaction mode, and a multi-machine feature for a multi-machine interaction mode. That is, the identity feature obtained by pre-allocation of the medical device 100 is not limited to one, and may be multiple. At least one of the multiple identity features is a single-machine feature, and the remaining identity features are multi-machine features.
  • the multi-machine feature can be understood as a variety of different groups corresponding to the same medical device 100 and different other medical devices 100. When the user calls out different multi-machine features, the multi-machine features as identity features can all correspond to the same medical device 100, so that the interactive system 110 of the medical device 100 is activated.
  • S30a When it is determined that the identity feature is a stand-alone feature, start the local interaction system 110 to interact with the user;
  • the local interaction system 110 is activated and interacts with the user based on the interaction sequence.
  • the interactive system 110 of the medical device 100 is set to have both a single-machine interaction mode and a multi-machine interaction mode.
• the corresponding startup or mode switching is performed. That is, after acquiring the user's voice information, the medical device 100 determines whether the voice information includes the identity feature, and also analyzes whether the identity feature is a single-machine feature or a multi-machine feature.
  • the native interactive system 110 can be directly started to enter the stand-alone interactive mode to interact with the user; when the identity feature is determined to be a multi-machine feature, the native interactive system 110 can be started and based on The interaction sequence interacts with the user.
• the division of interaction modes may not be performed. That is, when the medical device 100 starts the local interaction system 110 to interact with the user, it interacts with the user only according to whether the identity feature is determined to be a single-machine feature or a multi-machine feature. It can be understood that when the identity feature is determined to be a stand-alone feature, the medical device 100 can interact with the user one-to-one, and the instructions are relatively concise and easy to execute. When the identity feature is determined to be a multi-machine feature, the interaction between the medical device 100 and the user needs to follow a certain interaction sequence, to avoid the user having difficulty receiving information when multiple medical devices 100 activated at the same time all interact with the user simultaneously.
  • the medical device 100 does not need to be divided into a single-machine interaction mode or a multi-machine interaction mode.
  • the only difference is whether the interaction sequence is followed.
  • the division of interaction modes in this embodiment can facilitate understanding and clearly express the applicant's solution.
  • the stand-alone interaction mode of the medical device 100 is relatively straightforward. After the medical device 100 determines the stand-alone features corresponding to the local device, it directly starts the local interactive system 110 to interact with the user.
  • the scenario of multi-machine batch interaction is relatively complicated.
• Based on the interaction sequence, the medical device 100 performs a certain timing allocation with the same group of medical devices 100 that are started together, to avoid multiple medical devices 100 responding by voice at the same time.
• Otherwise the user cannot hear clearly, or multiple medical devices 100 feed back monitoring data at the same time, which is inconvenient for the user to receive. Therefore, the embodiment of FIG. 3 provides convenience for the user to perform batch interaction with multiple medical devices 100 in a space environment.
  • This application scenario can correspond to a ward, where three beds are placed at the same time, and each bed is provided with a medical device 100, that is, there are three medical devices 100 for medical devices 1, 2, and 3.
  • the distribution of identity characteristics corresponding to the three medical devices 100 is shown in Table 1:
• the identity features obtained by the medical device 1 through pre-allocation include four kinds: "A", "1", "3", and "4".
• the letter "A" is the single-machine feature in the identity features of the medical device 1
• the numbers "1", "3", and "4" are the three multi-machine features of the medical device 1.
  • the voice information that the user calls out includes any one of the identity features of “A, 1, 3, and 4”
  • the interactive system 110 of the medical device 1 can be activated correspondingly, and interact with the medical device 1.
  • the medical device 1 determines that the user's identity feature is a stand-alone feature.
  • the interactive system 110 of the medical device 1 enters the stand-alone interactive mode to interact with the user.
  • the letter “B” and the letter “C” correspond to the medical device 2 and the medical device 3, respectively. That is, the letter B is the stand-alone feature of the medical device 2, and the letter C is the stand-alone feature of the medical device 3. Since the medical device 2 and the medical device 3 have not determined the identity characteristics of the corresponding machine, the interactive system 110 will not activate the interactive function to ensure that the user interacts with the medical device 1 alone. In the same way, when the identity feature called by the user is the letter "B” or the letter "C", the interactive system 110 of the medical device 2 or the medical device 3 will be activated correspondingly.
• the multi-machine features obtained through pre-allocation include the numbers "2", "3", and "4".
• the number "3" starts the interactive systems 110 of the medical device 1 and the medical device 2 at the same time; after the user calls out the number "3", the medical device 1 and the medical device 2 interact in batches through their respective interactive systems 110. That is to say, the number "3" groups the medical device 1 and the medical device 2 together, so that both the medical device 1 and the medical device 2 have the number "3" as a multi-machine feature. Under this grouping, the medical device 1 and the medical device 2 can interact with the user independently of the medical device 3.
• its multi-machine features obtained through pre-allocation include the numbers "1", "2", and "4".
  • the interactive function of the interactive system 110 of the medical device 3 and the medical device 1 is activated at the same time, which is convenient for the user to interact with the medical device 1 and the medical device 3 in batch.
• the interactive systems 110 of all medical devices 100 in the ward are activated at this time, and the user can interact with all the medical devices 100 in the ward in batches.
  • the user can be a medical staff.
• medical staff can arbitrarily combine and interact with the three medical devices 100 by calling out the corresponding identity features. For example, when only one bed in this ward has a patient, by calling out the identity feature corresponding to the medical device 100 set next to that bed, the medical staff can carry out voice interaction with the interactive system 110 of that medical device 100 and perform on the patient operations such as physical sign data collection, physical sign data submission, and regular physical sign data collection.
  • medical staff can activate the interactive system 110 of two or three medical devices 100 at the same time by calling out the multi-machine feature of the corresponding medical device 100
  • the above operations are performed through batch interaction with the interactive system 110.
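The Table 1 allocation described above can be sketched as a simple feature lookup. The device names and feature values mirror the example (letters as single-machine features, digits as multi-machine features); the code itself is an illustrative sketch, not the claimed implementation:

```python
# Hypothetical sketch of the Table-1 identity-feature allocation.
FEATURES = {
    "device1": {"single": "A", "multi": {"1", "3", "4"}},
    "device2": {"single": "B", "multi": {"2", "3", "4"}},
    "device3": {"single": "C", "multi": {"1", "2", "4"}},
}

def activated_devices(called_feature: str) -> list:
    """Return the devices whose interactive system wakes up for a called-out feature."""
    out = []
    for name, f in FEATURES.items():
        if called_feature == f["single"] or called_feature in f["multi"]:
            out.append(name)
    return sorted(out)
```

Calling out "A" wakes only device 1, "3" wakes devices 1 and 2 as a group, and "4" wakes all three, matching the groupings in the example.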
• the allocation method of identity features in Table 1 is mostly applicable to situations where the number of medical devices 100 in a space environment is small. In that case, a simple number or letter assignment can satisfy the single-machine feature and multi-machine feature settings. When there are more medical devices 100 in the space environment, reference may be made to the identity feature allocation method in Table 2:
  • the identity feature allocation method in Table 2 groups the multiple medical devices 100 in the spatial environment into major categories, and then corresponds to each medical device 100 in the group by the number in the group.
  • the medical device 100 used to collect physical sign data of the same patient in the ward has "A” as the multi-machine feature
  • the medical device 100 used to collect the physical sign data of another patient has "B” as the multi-machine feature.
• within each group, a digital number is set to distinguish and define the medical devices 100 with different functions, serving as the stand-alone feature of each medical device 100, such as medical device A1, medical device B2, etc.
  • two medical devices 100 with the same functions of the hospital beds are set with a unified digital multi-machine feature.
  • the medical device A1 and the medical device B1 that collect ECG data are both set with the number "1" as the multi-machine feature
• the medical device A2 and the medical device B2 that collect blood pressure both set the number "2" as the multi-machine feature, and so on, until the medical device AN and the medical device BN both set the number "N" as the multi-machine feature. The numbers "1-N" thus define the collection functions of the different physical sign data.
• When there is only one patient in the ward, the medical staff only needs to call out the single-machine feature corresponding to each medical device 100 to interact with the multiple medical devices 100 corresponding to the bed one by one. Or, by calling out the multi-machine feature "A" or "B" corresponding to the hospital bed, the medical staff simultaneously activates the interactive functions of the interactive systems 110 of the multiple medical devices 100 corresponding to that bed to carry out batch interaction.
• the medical staff can activate two medical devices 100 with the same function on the two beds at the same time by calling out the numbers, and collect the same physical sign data of the two patients in batches through voice interaction.
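Under the stated assumptions about the naming (the letter groups devices by bed, the number groups devices by function across beds, e.g. "A2"), the Table 2 scheme reduces to a simple match — again a sketch, not the claimed implementation:

```python
# Hypothetical sketch of the Table-2 scheme: "A2" = bed A, function 2.
# Calling "A2" wakes one device, "A" wakes a whole bed, and "2" wakes the
# same function across all beds.
def matches(device_id: str, called: str) -> bool:
    bed, func = device_id[0], device_id[1:]
    return called in (device_id, bed, func)

def activated(devices: list, called: str) -> list:
    """All devices whose interactive system wakes for the called-out feature."""
    return [d for d in devices if matches(d, called)]
```

For example, calling "A" activates every device at bed A, while calling "2" activates the blood-pressure devices at both beds for batch collection.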
• In the voice interaction method of the present application when multiple medical devices coexist, after each medical device 100 in the space environment is reasonably grouped, simple identity feature assignment not only allows the user to precisely start the interactive function of a single medical device 100's interactive system 110, but also provides the convenience of precisely activating the interactive functions of the interactive systems 110 of multiple medical devices 100 at the same time for batch interaction.
• the voice interaction method when multiple medical devices coexist in this application can cope with single-machine or multi-machine interaction scenarios for any number of medical devices 100 in the spatial environment, while avoiding confusion in the medical devices' 100 reception of instructions. It ensures that the user can effectively receive the feedback of the medical devices 100, which simplifies the complicated and repetitive interaction process the user would otherwise face when using the interactive function.
  • the allocation method of Table 1 and Table 2 uses a combination of numbers and letters.
  • the voice interaction method when multiple medical devices coexist in this application does not limit the specific setting content of the identity feature.
  • the user can also use functional words to assign identity features to the medical device 100.
  • words such as "blood pressure”, “cardiograph”, and "heart rate” correspond to functions and simplify the allocation logic, so that the identity features can correspond to various functions of the medical device 100, which is convenient for users to remember.
  • the user can also complete the identity feature setting by setting any words, letters, numbers, or any combination of words that are convenient for their own memory. As long as the effect of distinguishing the identity feature from one or more other medical devices 100 is achieved, so that the user can activate the medical device 100 by calling out the identity feature, it belongs to the allocation method of the identity feature involved in this application.
  • FIG. 4 is a flowchart of another embodiment of a voice interaction method when multiple medical devices coexist in this application.
  • the identity features include single-machine features and multi-machine features. This method includes the following steps:
  • the voice information includes the multi-machine feature, and the identity feature is obtained through pre-allocation.
  • the voice information also includes timing information.
  • the identity feature contained in the voice information is a multi-machine feature.
  • the voice information that the user calls out also includes timing information.
  • the timing information is acquired by the medical device 100 together with the multi-machine feature.
• the medical device 100 interacts with the user based on the interaction sequence after determining the multi-machine feature corresponding to the local machine. Because the multi-machine feature activates the interactive systems 110 of multiple medical devices 100 to interact with the user at the same time, if multiple interactive systems 110 interacted with the user simultaneously, multiple interactive systems 110 would also provide voice feedback to the user simultaneously, resulting in interference in the user's information reception. In order to avoid this phenomenon, the user can call out the sequence information while calling out the multi-machine feature, so as to order the multiple medical devices 100 corresponding to the multi-machine feature; the multiple medical devices 100 then interact with the user in sequence based on the sequence information.
• the timing information can be understood as a response order set in real time by the user for the multiple medical devices 100 that need to interact simultaneously.
  • the user can sort the medical devices 1, 2, and 3 by voice when calling out the multi-machine feature "4", for example, "according to the medical device 3, medical device 2, medical device 1 Respond in order".
  • the "response in the order of medical equipment 3, medical equipment 2, and medical equipment 1" can be analyzed and determined as sequence information. Because the timing information obtained by the multiple medical devices 100 is the same, the multiple medical devices 100 interact with the user according to the same sorting standard.
  • the medical device 100 interacts with the user in sequence based on time sequence information does not limit that every information interaction of the interactive system 110 of the medical device 100 must be performed in sequence. It can be set that the multiple medical devices 100 only interact with the user based on the time sequence information only when they report information related to the physical sign data measured by the machine, or when the information needs to be reported in order to prevent the user from receiving interference. Such a setting can further save interaction time and improve efficiency.
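The timing-information behaviour above can be sketched as follows, assuming speech recognition has already turned the called-out order ("respond in the order of medical device 3, medical device 2, medical device 1") into a list of device names; the names and report strings are illustrative only:

```python
# Hypothetical sketch: activated devices report in the user-specified order,
# so only one voice response is produced at a time.
def batch_respond(order: list, reports: dict) -> list:
    """Yield one feedback line per device, in the called-out sequence."""
    return ["%s: %s" % (dev, reports[dev]) for dev in order if dev in reports]
```

Because every activated device parses the same timing information, they all apply the same ordering, which is what prevents overlapping voice feedback.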
  • the identity feature includes a single-machine feature and a multi-machine feature
  • the multi-machine feature also includes a sorting feature. This method includes the following steps:
  • the voice information includes the multi-machine feature, and the multi-machine feature is obtained through pre-allocation.
  • the identity feature also presets the sorting feature corresponding to the multi-machine feature at the same time.
  • This sorting feature is similar to the timing information in the embodiment of FIG. 4, with the difference that the timing information is set by the user in the outgoing voice information, while the sorting feature is preset at the stage of assigning identity features.
  • the sorting feature can simplify the user's operation of additionally setting the response sequence of the medical device 100 after starting multiple medical devices 100 through the multi-machine feature, which is relatively more convenient for the user to use.
  • the identity feature included is the feature of multiple machines.
• Based on the multi-machine feature and the priority ranking of the local machine's sorting feature within the multi-machine feature, the medical device 100 determines the order of priority of the local machine when multiple medical devices 100 interact with the user.
  • the implementation of setting the sorting feature after the medical device 100 starts the local interactive system 110 is similar to the embodiment of FIG. 4. Because the order of interaction between each medical device 100 and the user has been determined by the sorting feature, this embodiment can also avoid the defect that multiple interactive systems 110 simultaneously give voice feedback to the user, which causes interference in user information reception.
  • the number "1-N" in the medical equipment A1-AN can be defined as the ranking feature.
  • the interactive systems 110 of the medical devices A1-AN are all activated to interact with the user.
• the medical devices A1-AN each analyze the local machine's "1-N" digital number to determine the local machine's interaction sequence in the multi-machine mode corresponding to the multi-machine feature "A".
• the medical device A1, the medical device A2, ... and the medical device AN then interact with the user in sequence.
• the medical device A2, as the latter medical device 100, needs to receive the signal of completion of the interaction of the medical device A1 in order to determine that the current interaction sequence is the turn of the local machine.
• After the interaction of the medical device A1 is completed, it can hand over the interaction sequence to the medical device A2 by issuing a signal instruction for the completion of the interaction; alternatively, the user can send the signal instruction for the completion of the interaction to hand over the interaction sequence.
  • the signal instruction for the completion of the interaction can be the same instruction.
• After receiving one signal instruction for the completion of the interaction, the medical device A2 determines that the current interaction sequence is the turn of the local machine.
• After counting and receiving two signal instructions for the completion of the interaction, the medical device A3 can determine that the current interaction sequence is the turn of the local machine, and so on.
• the signal instructions for the completion of the interaction can also be set separately for different medical devices 100. For example, after the interaction of the medical device A1 is completed, the hand-over can be achieved by it sending out information that includes the identity feature or name of the medical device A2, such as "medical device A2, please start responding".
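The completion-signal counting described above — device Ak takes its turn after hearing k−1 identical "interaction complete" signals — can be sketched as follows; the class name and interface are assumptions, not the patent's:

```python
# Hypothetical sketch of the shared-signal variant: every device counts the
# identical "interaction complete" signals it hears, and device Ak speaks
# after counting (k - 1) of them.
class RankedDevice:
    def __init__(self, rank: int):       # rank 1..N from the sorting feature
        self.rank = rank
        self.completions_heard = 0

    def on_completion_signal(self) -> bool:
        """Count one completion signal; return True when it is our turn."""
        self.completions_heard += 1
        return self.my_turn()

    def my_turn(self) -> bool:
        return self.completions_heard == self.rank - 1
```

Device A1 (rank 1) needs no signal and speaks first; A3 stays silent until it has counted two completions, which realizes the "and so on" chain in the text.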
  • the related control can be accomplished through the deployment of the server.
  • the identity features include a single-machine feature and a multi-machine feature.
  • the voice interaction method when multiple medical devices coexist in this application includes the following steps:
  • the identity feature is a multi-machine feature, and the identity feature is obtained through pre-allocation.
• S23d Determine whether the local machine is the host based on the host information.
  • the user calls out the content including the host information while calling out the multi-machine feature for batch interaction, so that the medical device 100 obtains the host information through analysis while obtaining the multi-machine feature, and performs corresponding operations and operations. response.
  • any one of the medical devices 100 corresponding to the multi-machine feature can be set as the host.
  • This application does not limit the specific content of the host information.
  • One medical device 100 among the multiple medical devices 100 is defined as the host in any way, so that it can be used as host information different from the other medical devices 100. For example, when the multi-device feature is called out, the single-device feature of one of the medical devices 100 is also called out, and then the medical device 100 corresponding to the single-device feature is defined as the host.
  • the medical device 100 defined as the host interacts with the user through the interactive system 110 of the machine based on the interaction sequence.
  • all medical devices 100 corresponding to the multi-machine feature can interact with the user only through the medical device 100 defined as the host.
  • the user By setting the host, the user only needs to interact with one medical device 100, which realizes the convenience of batch interaction with multiple medical devices 100 at the same time.
  • the user's voice command input or the voice feedback actions of multiple medical devices 100 are all completed by the same medical device 100.
  • Such an implementation also avoids defects such as poor information acceptance and logical confusion that users are likely to encounter when interacting with multiple medical devices 100 simultaneously in the same space environment.
  • Setting the multi-machine interaction mode of the host can be understood as a single-machine interaction between the user and the medical device 100 set as the host to complete the effects of batch command input or information feedback to multiple medical devices 100.
• the commands that the user needs to operate in batches can be received by the medical device 100 serving as the host and then allocated to the other medical devices 100 one by one; alternatively, the other medical devices 100 can directly obtain the user's voice information and accept the instructions themselves.
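A minimal sketch of the host mechanism above, assuming the host information is the single-machine feature (or name) of one activated device; the fallback to the first activated device and the return structure are this sketch's assumptions, not the patent's:

```python
# Hypothetical sketch of host-based batch interaction: the host alone voices
# feedback, while every activated device receives the batch command.
def pick_host(activated: list, host_info: str) -> str:
    """Choose the host named by the host information (fallback: first device)."""
    return host_info if host_info in activated else activated[0]

def relay_command(activated: list, host: str, command: str) -> dict:
    """The host receives the batch command and allocates it to each device."""
    return {"speaker": host, "assignments": {dev: command for dev in activated}}
```

In this sketch the user's single point of contact is the `speaker`; the non-host devices execute their assignments without producing voice output.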
• When interacting with the user based on the interaction timing determined by the host, there are also cases where the host information does not correspond to the local machine. That is, the determination result of step S23d may correspond to the case where the local machine is not the host.
  • the interaction between the medical device 100 and the user that is activated by the same multi-machine feature but not set as the host can be completed by the following steps:
  • the identity feature is a multi-machine feature, and the identity feature is obtained through pre-allocation.
  • the voice information also includes host information.
• S23e Determine based on the host information that the local machine is not the host.
  • the medical device 100 in the embodiment in FIG. 6 is activated by the multi-machine feature, but is not set as the host.
  • the medical device 100 that is not set as the host indirectly interacts with the user through the host in the subsequent interaction process. That is, the local interactive system 110 is only used to receive the instructions in the voice information that the user calls, and does not directly respond to the user.
  • the multiple medical devices 100 corresponding to the multi-machine feature except for the medical device 100 that is set as the host, will not send out sound feedback through devices such as speakers, so that the user can only Receive the sound feedback of the medical device 100 set as the host to ensure that the interaction between the user and the medical device 100 set as the host is not interfered.
• Because the non-host medical device 100 does not directly respond to the user's voice in the multi-machine interaction mode, when the user needs the non-host medical device 100 to provide feedback, that medical device 100 can feed the required content back to the user through its communication connection with the host medical device 100.
  • the non-host medical device 100 can also provide feedback to the user through other feedback devices of the machine, such as a display, an indicator light and the like.
• interacting with the user based on the interaction sequence determined by the host therefore includes two situations: either the local machine serves as the host and interacts with the user directly, or the local interactive system 110 is activated to interact with the user indirectly through the host.
  • the identity feature includes a single-machine feature and a multi-machine feature
  • the multi-machine feature includes a sorting feature.
  • the voice interaction method when multiple medical devices coexist in this application includes the following steps:
  • the voice information includes a multi-machine feature, and the identity feature is obtained through pre-allocation.
  • S30f Start the local interaction system 110 to interact with the user based on the interaction timing determined by the host.
  • the multi-machine interaction between multiple medical devices 100 and the user is also carried out by setting a host.
• the user does not need to specifically set the host while calling out the multi-machine feature. Instead, because the sorting feature is set when the identity features are assigned, the multiple medical devices 100 corresponding to the same multi-machine feature compare their priority rankings with each other through the sorting feature, and the medical device 100 whose sorting feature has the highest priority automatically serves as the host to interact with the user.
  • the sorting feature in this embodiment can be the same as the sorting feature in the embodiment of FIG. 5, and the method of using the sorting feature is determined according to the preset multi-computer interaction mode or the specific implementation of the interaction sequence. Do not conflict.
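The automatic host selection above can be sketched as picking the highest-priority sorting feature; here the lowest number is assumed to be the highest priority, as in the A1…AN example, and the mapping structure is this sketch's assumption:

```python
# Hypothetical sketch: among devices woken by the same multi-machine feature,
# the one whose sorting feature ranks highest serves as host automatically.
def auto_host(sort_features: dict) -> str:
    """sort_features maps device name -> sorting-feature rank (1 = highest)."""
    return min(sort_features, key=sort_features.get)
```

No extra utterance is needed from the user: the host falls out of the pre-allocated sorting features alone.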
  • the identity feature includes a single-machine feature and a multi-machine feature
  • the multi-machine feature includes a sorting feature. If the ranking feature of the medical device 100 is not the highest ranking feature among the multi-machine features, the method may include the following steps:
  • the identity feature is a multi-machine feature, and the identity feature is obtained through pre-allocation.
• the priority of the local machine's sorting feature is lower than the priority of at least one sorting feature among the medical devices 100 that are activated at the same time; that is, the local machine is considered not to be set as the host.
  • the medical device 100 that is not set as the host interacts with the user indirectly through the host in the subsequent interaction process, and is only used to receive the instructions in the voice information called by the user, and does not direct the user. Reply to ensure that the interaction between the user and the medical device 100 set as the host is not interfered.
  • the medical device 100 when the user needs the non-host medical device 100 to provide feedback, the medical device 100 can communicate with the host medical device 100, and send the content that needs feedback to the host medical device 100. User feedback.
  • the non-host medical device 100 can also provide feedback to the user through other feedback devices of the machine, such as a display, an indicator light and the like.
• step S20a of the method of the present application distinguishes between single-machine features and multi-machine features, which simplifies to a certain extent the decision logic in step S30a of whether to interact with the user based on the interaction sequence. That is, when it is determined in step S20a that the identity feature is a stand-alone feature, the step of determining the interaction sequence can be skipped and the device can interact with the user directly. When it is determined in step S20a that the identity feature is a multi-machine feature, the interaction sequence is determined based on the pre-allocated settings or the related settings defined by the user. Of course, there are still some embodiments that do not distinguish identity features into single-machine features and multi-machine features; after receiving voice information containing an identity feature, they directly determine the interaction sequence and interact with the user based on the result. For example, the embodiment in Figure 4a:
  • S200b Determine that the voice information includes timing information.
  • the embodiment of Fig. 4a differs from the embodiment of Fig. 4 in that, in step S200b, the judgment of the identity feature is realized by determining that the voice information called by the user contains timing information.
  • the medical device 100 determines the timing information, it can be considered that the timing information includes both the identity feature and the interaction timing.
  • the function of the timing information also includes two aspects: as an identity feature, the interactive system 110 of the multiple medical devices 100 is activated, and the interactive timing of the multiple medical devices 100 is determined. Therefore, when the voice information called by the user includes timing information, the medical device 100 can directly start the local interactive system 110 to interact with the user based on the interaction timing of the timing information.
  • the user will call out the timing information together when the interactive systems 110 of multiple medical devices 100 are activated at the same time.
• the medical device 100 can interact with the user directly based on the interaction timing of the timing information, and an effect similar to that of FIG. 4 can also be achieved.
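In the FIG. 4a variant the timing information doubles as the identity feature, which can be sketched as follows: a device wakes only if it appears in the called-out ordering, and its position gives its interaction turn (device names are illustrative):

```python
# Hypothetical sketch of FIG. 4a: the called-out ordering itself both
# activates the devices and sequences their interaction.
def role_from_timing(order: list, device: str):
    """Return the 1-based interaction turn, or None if the device stays idle."""
    return order.index(device) + 1 if device in order else None
```

A device absent from the ordering receives no identity feature at all, so its interactive system 110 is simply not activated.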
  • the identity feature includes a ranking feature, and this embodiment specifically includes:
  • the ranking feature is directly included in the identity feature.
• the medical device 100 directly determines the interaction sequence through the sorting feature. It is understandable that, corresponding to the embodiment in FIG. 5, although this embodiment does not separate the single-machine feature from the multi-machine feature, an identity feature that includes a sorting feature can be assumed to correspondingly activate the interactive systems 110 of two or more medical devices 100, and therefore to include the sorting feature. If an identity feature does not include a sorting feature, that identity feature should only correspond to activating the interactive system 110 of one medical device 100.
• Although this embodiment does not define the difference between an identity feature corresponding to a single machine and one corresponding to multiple machines, it is still possible, through whether the identity feature includes a sorting feature, to activate the interactive systems 110 of multiple medical devices 100 at the same time and to enable the interactive systems 110 of the multiple medical devices 100 to interact in an orderly manner.
• The embodiment of FIG. 5a can also be described as follows: when the identity feature corresponds to starting the interactive system 110 of only one medical device 100 and includes a ranking feature, the interactive system 110 of the medical device 100 corresponding to that ranking feature interacts with the user directly. That is, the ranking feature covers only one interactive system 110, so there is no need to wait for the responses of other interactive systems 110 to end.
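The distinction drawn above can be sketched as follows. This is a hedged sketch of the FIG. 5/5a logic, with hypothetical field names: an identity feature carrying a ranking feature corresponds to two or more devices, while one without a ranking feature corresponds to exactly one device, which interacts immediately.

```python
# Assumed representation: "ranking" is an ordered list of device ids when the
# identity feature covers multiple machines, and absent for a single machine.

def interpret_identity(identity, local_id):
    ranking = identity.get("ranking")  # absent => single-machine identity feature
    if ranking is None:
        return "interact_now" if identity.get("device") == local_id else "ignore"
    if local_id not in ranking:
        return "ignore"
    position = ranking.index(local_id)
    # the first-ranked device answers at once; the others wait their turn
    return "interact_now" if position == 0 else f"wait_for_{position}_predecessors"
```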
  • S202d Determine whether the machine is the host according to the host information.
• The host information is used to determine the host from among the multiple medical devices 100. The subsequent steps, such as determining the host based on the host information and interacting with the user based on the interaction sequence determined by the host, are exactly the same as the steps in FIG. 6. It can be understood that this embodiment also omits the determination of single-machine and multi-machine features; the host information in the user's call directly indicates that the interactive systems 110 of multiple medical devices 100 need to be activated at the same time. Compared with the embodiment in FIG. 6, this embodiment is more concise, reduces the amount of information in the user's voice information, and still achieves an implementation effect similar to that of FIG. 6.
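A minimal sketch of this host-information mechanism, under the assumption that the host's id is spoken as part of the voice information: the named host interacts with the user directly and relays the interaction to the remaining activated devices. All identifiers are illustrative.

```python
# Each device decides its own role from the called host information.

def resolve_role(local_id, activated_ids, host_id):
    if local_id not in activated_ids:
        return "inactive"
    return "host" if local_id == host_id else "follower"

def host_interaction_sequence(activated_ids, host_id):
    # the host goes first; followers keep their original relative order
    return [host_id] + [d for d in activated_ids if d != host_id]
```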
  • the identity feature includes a ranking feature. Specific steps are as follows:
  • S203f Determine whether the machine is the host according to the comparison result.
  • the identity feature directly includes the ranking feature.
  • This allows the user to activate the interactive system 110 of multiple medical devices 100 at the same time by calling out the identity feature in the voice message.
  • a host has been selected from the multiple interactive systems 110 that have been activated through the comparison of ranking features.
• Then, similarly to FIG. 8, the interaction sequence determined by the host is used to interact with the user. That is, the medical device 100 selected as the host based on the ranking feature interacts with the user directly, while the remaining medical devices 100 interact with the user indirectly through the host.
  • This embodiment also saves the amount of information in the user's voice information, and can obtain a more intelligent interaction effect.
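Host election by comparing ranking features can be sketched in one line. The assumption here, which the text leaves to the implementation, is that a lower ranking value means higher priority:

```python
# Sketch: each activated device reports its ranking feature value; the device
# with the highest priority (assumed here: lowest value) becomes the host.

def elect_host(ranking_by_device):
    """ranking_by_device maps device id -> ranking feature value."""
    return min(ranking_by_device, key=ranking_by_device.get)
```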
  • Fig. 10 is a flowchart of another embodiment of the voice interaction method when multiple medical devices coexist in this application.
  • the identity features include single-machine features, multi-machine features, and sound source distance conditions. This method includes:
  • this embodiment also presets the sound source distance condition when pre-assigning identity features.
  • the sound source distance condition may include a comparison of the sound source distance threshold and the sound source distance.
  • the sound source distance threshold is a numerical constant related to the distance.
  • the sound source distance comparison requires comparison between the sound source distance values obtained by the local device and other medical devices 100 in the environment.
  • the medical device 100 acquires the voice information in the space environment, it also detects the distance of the user through the sensor, and measures the distance value of the sound source of the user making the sound.
• Then the medical device 100 compares the detected sound source distance value with a preset sound source distance threshold; or the medical device 100 compares the sound source distance value it detected with the sound source distance values detected by the other medical devices 100 in the environment. Because comparing the sound source distance value with the sound source distance threshold is a comparison between two distance-related numerical values, the relative magnitude of the two can be obtained quickly and directly, yielding the comparison result. The comparison of sound source distance values among multiple medical devices 100 can likewise yield its result quickly.
• When the comparison succeeds, it can be determined that the sound source distance value obtained by the local machine meets the sound source distance condition, and the subsequent steps proceed.
  • the medical device 100 determines that the distance between the user and the machine is within a sufficiently close range.
• When the user moves within a sufficiently close range of a certain medical device 100 in the spatial environment, it can be considered that the user's voice information is directed at that medical device 100. Alternatively, the medical device 100 closest to the user among the multiple medical devices 100 can be identified.
  • the medical device 100 automatically determines that there are identity features in the voice information, and activates the interactive system 110 to enter the stand-alone interactive mode to interact with the user.
• This embodiment provides convenience for users interacting with this method in a spatial environment. When there are a large number of medical devices 100 in the space, the user would otherwise have to remember details such as the single-machine feature, multiple multi-machine features, and the ranking feature corresponding to each medical device 100, which inevitably leads to mistakes. Although these can be prompted by a list, a sticker on the medical device 100, and the like, when the user only needs to interact with one specific medical device 100 in the environment, the implementation of FIG. 10 offers a more concise and fast start-up operation. It is understandable that, when the user moves within a sufficiently close range of the medical device 100, the interactive system 110 of the medical device 100 can be started very conveniently simply by calling out voice information, which need not even be the identity feature but may be any voice information; this eliminates tedious operations such as extensive memorization or consulting tables, icons, and labels.
  • the sound source distance condition may include both the sound source distance threshold and the comparison of the sound source distance.
  • the medical device 100 determines that the sound source distance value of the machine is the smallest, but the smallest sound source distance value is not less than the sound source distance threshold, it can be considered that the user is not starting the medical device 100. Introducing two judgment conditions at the same time can improve the accuracy of user directivity and reduce the incidence of misoperation.
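The two sound source distance conditions described above, and their conjunction, can be sketched as one predicate. The parameter names and units are illustrative assumptions:

```python
# Sketch of the sound source distance condition: a threshold test, a
# nearest-device test (local distance smaller than every other device's), and
# optionally both at once to reduce misoperation, as suggested in the text.

def meets_distance_condition(own_dist, other_dists, threshold=None,
                             require_nearest=False):
    within = threshold is None or own_dist < threshold
    nearest = (not require_nearest) or all(own_dist < d for d in other_dists)
    return within and nearest
```

With both checks enabled, a device whose distance is the minimum but still above the threshold correctly refuses to start, matching the two-condition behaviour described above.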
• Alternatively, the method can be performed through the embodiment of FIG. 11.
  • the identity feature also includes volume conditions.
  • the voice interaction method when multiple medical devices coexist in this application includes:
  • S10i Acquire environmental voice information and volume value.
• The idea of the method in this embodiment is similar to that of FIG. 10; the difference is that it does not measure the distance to the user but directly determines, through the volume of the voice information the user calls out, whether the user is within a sufficiently close range. When the volume of the voice information is determined to exceed the volume threshold, it can usually be concluded that the user is within a sufficiently close range of the medical device 100; from this it is determined that the user's voice information carries the identity feature, and the device enters the stand-alone interaction mode to interact with the user.
• When the medical device 100 determines that the volume of the user's voice information obtained by the local machine is the highest in the environment, it can likewise determine that the user's voice information carries the identity feature, and activate the local interactive system 110 to interact with the user. It can be understood that the effect obtained by the method of FIG. 11 is similar to that of FIG. 10. Further, because the method of FIG. 11 does not need to measure the distance to the user, the interactive system 110 inherent in the medical device 100 can obtain the volume value of the user's voice information; the distance sensor is omitted, the number of sensors used in the medical device 100 is reduced, and cost is saved.
• The sound source distance condition and the volume condition can also be combined to determine whether the user's voice information carries an identity feature. Because different users interacting with the medical device 100 through this method may differ in how loudly they speak, some loud-voiced users may unconsciously trigger a nearby medical device 100 into the stand-alone interactive mode. Or some users may fail to keep a sufficient distance from the nearest medical device 100 while calling out a multi-machine feature, causing that medical device 100 to enter the single-machine interactive mode while the multiple medical devices 100 corresponding to the multi-machine feature are simultaneously activated into the interaction state. This situation is more prominent when the sound source distance threshold or volume threshold alone is used as the judgment condition.
  • the sound source distance condition and the volume condition can be set to determine whether the user's voice information is an identity feature. That is, the user needs to be within a certain distance and the volume value is high enough to trigger the function of the sound source distance condition and volume condition in the identity feature, so as to avoid false triggering.
  • the single-machine feature and the multi-machine feature are not distinguished.
• The interactive system 110 of the medical device 100 can be activated correspondingly to interact, without distinguishing between a single device and multiple devices. It is understandable that when the sound source distance or volume condition is satisfied as the extreme value after comparison, only the interactive system 110 of that medical device 100 will be activated to interact with the user, and no logical confusion will occur.
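The volume condition, and its combination with the distance condition to reduce false triggering as discussed above, can be sketched as follows. Thresholds and units (dB, metres) are illustrative assumptions:

```python
# Sketch: a device treats the voice information as carrying the identity
# feature when it is the loudest receiver and the volume exceeds a threshold;
# the combined predicate additionally requires the user to be close enough.

def meets_volume_condition(own_db, other_db, volume_threshold):
    return own_db >= volume_threshold and all(own_db > v for v in other_db)

def is_directed_at_local(own_db, other_db, volume_threshold,
                         own_dist, distance_threshold):
    # both conditions must hold: loud enough locally AND close enough
    return (meets_volume_condition(own_db, other_db, volume_threshold)
            and own_dist < distance_threshold)
```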
  • Fig. 12 illustrates a flowchart of another embodiment of the voice interaction method when multiple medical devices coexist as shown in Fig. 1.
  • the medical device 100 starts the native interactive system 110 to interact with the user, it further includes:
• The feedback information can be visual feedback information or auditory feedback information. If multiple interactive systems 110 provide auditory feedback to the user at the same time, it is not easy for the user to quickly confirm in batch, through voice feedback, whether the target medical devices 100 have been activated. In that case, the aforementioned interaction sequence can be used to provide auditory feedback information sequentially, facilitating the user's reception.
• The medical device 100 can generate and display visual feedback information such as feedback images and feedback lights through visual feedback devices provided on the machine, such as display screens and indicator lights, thereby indicating to the user that the local interactive system 110 has activated its interactive function. That is, when there are multiple medical devices 100 in the spatial environment, each medical device 100 whose interactive function is activated can provide visual feedback to the user through patterns, lights, and similar information, giving the user a convenient way to quickly confirm whether the identity feature he issued has been effectively received by the target medical devices 100.
• If the user finds that the identity feature he called out has not been received by all of the target medical devices 100, that is, the user failed to activate some target medical devices 100 through the identity feature, the user can call out the identity feature again to additionally activate the missed target medical devices 100 and then perform batch interaction, improving the reliability of the method of this application.
  • FIG. 13 is also a flowchart of another embodiment of the voice interaction method when multiple medical devices coexist as shown in FIG. 1.
  • the method includes:
  • this embodiment provides a logout mechanism.
• If the user determines that he has activated a non-target medical device 100 due to an error in the identity feature he called out, he can specify one or more medical devices 100 to withdraw from the current interaction session through the group-exit information.
  • it is convenient for the user to modify the details of the target medical device 100 corresponding to his voice operation after starting the interactive system 110 of the medical device 100.
• While performing batch interactions, if the user needs to carry out further interactive operations on only some of the medical devices 100 in the multi-machine interactive mode, he can likewise call out the group-exit information to exclude the medical devices 100 that do not require further interaction from the batch interaction sequence.
  • this embodiment provides the convenience for users to quickly interact in batches, and at the same time to make a second selection of the medical device 100 that is currently interacting. It is understandable that the embodiment of FIG. 13 can also be performed for a stand-alone interaction mode, so as to end the interaction process between the user and the currently interacting medical device 100, and then start the next interaction operation.
• Further, the exit information may include a preset idle time threshold. That is, when the medical device 100 has not interacted with the user for a period of time that exceeds the preset idle threshold, it can be determined that the user is no longer interacting with the medical device 100, and the device automatically exits the current interactive state, saving resources.
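The idle-timeout exit described above can be sketched as a small session object. The injected clock is an implementation choice made here purely for testability; the timeout value is an assumption:

```python
# Sketch: the interactive state is abandoned when no interaction occurs
# within the preset idle period; any user interaction resets the timer.

class InteractionSession:
    def __init__(self, idle_timeout, clock):
        self.idle_timeout = idle_timeout  # preset idle time threshold (seconds)
        self.clock = clock                # monotonic time source
        self.last = clock()

    def on_user_interaction(self):
        self.last = self.clock()          # any interaction resets the idle timer

    def should_exit(self):
        return self.clock() - self.last > self.idle_timeout
```

In production the clock would simply be `time.monotonic`.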
• The steps of Figure 13a include:
  • FIG. 13 is a situation in which the interactive system 110 of the medical device 100 exits the current interactive state after entering the interactive state.
• Correspondingly, this embodiment makes it convenient to supplement, after receiving the user's networking information, the additional medical devices 100 that need to interact.
  • the identity features in step S20k are not limited to single-machine features or multi-machine features.
• After the interactive system 110 of the local machine enters the interactive state, the interactive systems 110 of the remaining medical devices 100 in the environment that match the networking information can be started by obtaining the user's networking information. In this way, while retaining the medical devices 100 currently in the interactive state, the user adds the remaining medical devices 100 he wants to interact with, so as to perform a larger batch of interactions.
  • the interaction time sequence also needs to match the networking information. That is, the medical device 100 currently in the interactive state needs to reallocate the interaction sequence with the supplemented medical device 100.
  • the re-allocated interaction sequence may be preserving the current interaction sequence, and the medical device 100 supplemented with networking information may be sequentially arranged after the current interaction sequence based on priority, user designation, etc., to form the re-allocated interaction sequence. It is also possible to rearrange the sequence of the supplementary medical device 100 and the medical device 100 currently in the interactive state by priority, user designation, etc., without preserving the current interaction sequence, to create a newly allocated interaction sequence. These interaction timings all belong to the interaction timing of networking information.
• The interaction sequence of the networking information can also be established by setting a host. That is, in a scenario where a host already exists, or where the interaction is currently stand-alone, the medical devices 100 supplemented through the networking information are automatically defined as non-hosts, and the user keeps the current host, or recognizes the stand-alone device as the host based on the host information, to establish the interaction timing. In this case, the interaction sequence of the networking information equals the interaction sequence determined by the host. Of course, it is also possible to re-identify the host after the supplement, that is, to compare the ranking features of all medical devices 100 entering the interactive state again and determine the host according to the comparison result; the interaction sequence of the networking information is then still the interaction sequence determined by the host.
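The two reallocation strategies described above (keep the current interaction sequence and append the newly networked devices, or re-sort all devices together) can be sketched as one function. The priority mapping is an illustrative assumption standing in for "priority, user designation, etc.":

```python
# Sketch of interaction-sequence reallocation after networking information
# supplements new devices; lower rank interacts earlier.

def reallocate_sequence(current, joined, priority, preserve_current=True):
    """priority maps device id -> rank; preserve_current keeps the existing order."""
    if preserve_current:
        # current devices keep their slots; newcomers are appended by priority
        return current + sorted(joined, key=priority.get)
    # otherwise all devices are rearranged together
    return sorted(current + joined, key=priority.get)
```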
  • FIG. 2 is a schematic diagram of a medical device 100 involved in this application.
  • the medical device 100 further includes a processor 101, an input device 102, an output device 103 and a storage device 104.
  • the processor 101, the input device 102, the output device 103, and the storage device 104 are connected to each other, wherein the storage device 104 is used to store a computer program, the computer program includes program instructions, and the processor 101 is configured to Call the program instructions to execute the voice interaction method when multiple medical devices coexist.
  • the processor 101 calls the program instructions stored in the storage device 104 to perform the following operations:
• The medical device 100 of the present application can execute the voice interaction method when multiple medical devices coexist because the processor 101 calls the program in the storage device 104. In a scenario where multiple medical devices 100 exist in a spatial environment, this makes it convenient for the user to start the local interactive system 110 through the identity feature corresponding to the local machine. At the same time, it avoids the defect that, when the user interacts with a target medical device 100 in the same spatial environment, a non-target medical device 100 is also triggered by the same or similar voice commands, causing confusion in the interaction logic.
• In a specific implementation, the storage device 104 may include a volatile memory device, such as a random-access memory (RAM); the storage device 104 may also include a non-volatile memory device, such as a flash memory or a solid-state drive (SSD); the storage device 104 may also include a combination of the foregoing types of storage devices.
  • the processor 101 may be a central processing unit (CPU).
  • the processor 101 may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), Application Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA) Or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
• The program can be stored in a computer-readable storage medium, and when executed, the program may include the procedures of the above-mentioned method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • FIG. 14 is a schematic diagram of a medical system 200 provided by this application.
  • the medical system 200 includes:
  • the obtaining module 201 is used to obtain voice information in the environment
  • the analysis module 202 is configured to determine that the identity feature exists in the voice information, and the identity feature is obtained through pre-allocation;
  • the control module 203 is used to start the local interaction system 110 to interact with the user.
  • the medical system 200 of the present application is also used to implement the voice interaction method when multiple medical devices of the present application coexist.
  • the acquisition module 201 starts to monitor the voice information in the space environment.
  • the analysis module 202 analyzes the voice information monitored by the acquisition module 201.
• When the analysis module 202 determines that the identity feature exists, it sends instructions to the control module 203 so that the control module 203 activates the local interactive system to interact with the user. Therefore, when there are multiple medical systems 200 in the spatial environment, the medical system 200 of the present application enables the function of interacting with the user by determining whether the voice information called by the user contains the pre-allocated identity feature corresponding to the local machine, avoiding mistaking the user's voice commands that are not directed at the local machine for instructions to the local machine and activating its functions, which would render the user's voice instructions ambiguous.
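The acquisition → analysis → control flow of these modules can be sketched minimally. The matching rule (a simple substring test) and the device names are simplifying assumptions, not the patent's method:

```python
# Sketch of the module pipeline: acquisition module 201 obtains the voice
# information, analysis module 202 checks it for the local identity feature,
# and control module 203 starts the interactive state accordingly.

class MedicalSystem:
    def __init__(self, local_identity):
        self.local_identity = local_identity
        self.interactive = False

    def acquire(self, audio):            # acquisition module 201
        return audio

    def analyze(self, voice_info):       # analysis module 202
        return self.local_identity in voice_info

    def control(self, has_identity):     # control module 203
        self.interactive = bool(has_identity)
        return self.interactive
```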
  • the medical system 200 further includes a pairing module 204 for obtaining pre-allocated identity features.
• A pairing module 204 is used to obtain the pre-allocated identity features. It is understandable that, as in the above method embodiments, the pre-allocation of identity features in the pairing module 204 can be completed at the factory stage; alternatively, after the medical system 200 is placed in a spatial environment containing multiple medical systems 200, the pairing module 204 can complete the pre-allocation of identity features based on the specific distribution and functions of the medical systems 200 in that environment.
  • the pairing module 204 obtains the pre-allocated identity features including single-machine features and multi-machine features.
• When the analysis module 202 determines that the voice information carries an identity feature, it also needs to determine whether that identity feature is a single-machine feature or a multi-machine feature.
• Before the control module 203 starts the local interaction system 110 to interact with the user, the analysis module 202 determines whether the identity feature is a stand-alone feature. If it is, the control module 203 starts the local interaction system 110 to interact with the user directly; if it is a multi-machine feature, the control module 203 activates the local interaction system 110 and interacts with the user based on the interaction sequence.
  • the medical system 200 further includes a sorting module 205.
  • the sorting module 205 is used to determine the time sequence of the interaction between the local interactive system 110 and the user.
  • the sorting module 205 determines the timing of the interaction between the local interactive system 110 and the user based on the timing information acquired by the acquiring module 201.
  • the analysis module 202 is configured to analyze and obtain multiple machine features and time sequence information included in the voice information when it is determined that there are identity features in the voice information;
  • the control module 203 is configured to start the local interaction system 110 to interact with the user based on the interaction sequence
  • the sorting module 205 is configured to control the local interaction system 110 to sequentially interact with the user based on the interaction sequence of the timing information.
  • the sorting module 205 determines the sequence of interaction between the local interactive system 110 and the user based on the sorting features acquired by the pairing module 204. Specifically, the pairing module 204 obtains the pre-allocated multi-machine features including sorting features;
  • the obtaining module 201 obtains the voice information in the environment, when the analysis module 202 determines that there is an identity feature in the voice information, it determines that the identity feature is a multi-machine feature;
  • the ranking module 205 is also used to analyze the priority ranking of the ranking features in the multi-machine features
• When the control module 203 starts the interactive system 110 to interact with the user based on the interaction sequence, it controls the multiple medical systems 200 to interact with the user sequentially, based on the priority order of the ranking features.
  • the medical system 200 further includes a judgment module 206.
  • the judging module 206 is used to determine whether the local machine acts as a host to interact with the user.
  • the judging module 206 determines whether the machine is used as a host to interact with the user based on the host information obtained by the obtaining module 201. Specifically, when the acquiring module 201 acquires voice information in the environment, the voice information called by the user includes multi-machine features and host information;
  • the analysis module 202 is used to determine that the identity feature is a multi-machine feature, and the judgment module 206 is used to determine whether the machine is a host based on the host information;
  • the control module 203 is configured to start the local interactive system 110 to interact with the user based on the interaction timing determined by the host.
• The sorting module 205 determines whether the local machine acts as a host to interact with the user based on the ranking features in the multi-machine features acquired by the pairing module 204. Specifically, the pairing module 204 obtains the pre-allocated multi-machine features, which include ranking features;
  • the analysis module 202 is used to determine that the identity feature is a multi-machine feature when it is determined that the voice information has an identity feature;
  • the sorting module 205 is also used to analyze the sorting of the priority of the sorting features in the identity features; the judging module 206 is used to determine the sorting of the priorities corresponding to the sorting features of the machine, and determine whether the machine is the host;
• the control module 203 is configured to start the local interactive system 110 and interact with the user based on the interaction timing determined by the host.
  • the analysis module 202 is configured to determine that the voice information further includes timing information when the identity feature is found in the voice information;
  • the control module 203 is used to start the local interaction system 110 to interact with the user, and interact with the user based on the interaction timing of the timing information.
  • the identity features allocated by the matching module 204 include a ranking feature
  • the ranking module 205 is configured to analyze the ranking of the ranking feature when the identity feature is determined to exist in the voice information
  • the control module 203 is configured to interact with the user based on the interaction sequence of the sorting feature when starting the local interaction system 110 to interact with the user.
  • the analysis module 202 is configured to determine that the voice information also includes host information when the identity feature is found in the voice information;
  • the judging module 206 is configured to judge whether the machine is the host based on the host information
  • the control module 203 is configured to interact with the user based on the interaction timing determined by the host when starting the local interaction system 110 to interact with the user.
  • the identity characteristics obtained by the matching module 204 include ranking characteristics
  • the sorting module 205 is configured to compare the sorting of the sorting features of the local machine when it is determined that the identity feature exists in the voice information;
  • the judging module 206 is configured to judge whether the local machine is the host based on the comparison result
  • the control module 203 is configured to interact with the user based on the interaction timing determined by the host when starting the local interaction system 110 to interact with the user.
• Starting the local interaction system 110 to interact with the user based on the interaction timing determined by the host includes:
• when the local machine is the host, the control module 203 activates the local interactive system 110 to interact with the user directly;
• when the local machine is not the host, the control module 203 activates the local interactive system 110 to interact with the user indirectly through the host.
  • the medical system 200 further includes a sound source ranging module 207, and the sound source ranging module 207 is used to detect the sound source distance value.
  • the pairing module 204 obtains the pre-allocated identity features including the sound source distance condition;
  • the sound source ranging module 207 is used to acquire the sound source distance value
  • the analysis module 202 is configured to determine that the distance to the sound source satisfies the sound source distance condition
  • the analysis module 202 is used to determine the presence of identity features in the voice information. It is understandable that the control module 203 subsequently activates the local interaction system 110 to perform voice interaction with the user based on the identity feature.
  • the medical system 200 further includes a volume detection module 208, and the volume detection module 208 is configured to detect a volume value.
  • the pairing module 204 obtains the pre-allocated identity features including volume conditions;
  • the volume detection module 208 is used to obtain the volume value of the environmental voice information
  • the analysis module 202 is configured to determine that the volume value in the voice information meets the volume condition
  • the analysis module 202 is also used to determine the presence of the identity feature in the voice information. It is understandable that the control module 203 subsequently activates the local interaction system 110 to perform voice interaction with the user based on the identity feature.
• The pre-allocated sound source distance condition obtained by the pairing module 204 includes a sound source distance threshold, and/or
• the analysis module 202 determines that the sound source distance value of the local machine is smaller than the sound source distance value of any other medical device in the environment.
  • the pre-allocated volume condition obtained by the pairing module 204 includes a volume threshold, or
  • the volume value of the local machine is greater than the volume value of any other medical equipment in the environment.
  • the medical system 200 further includes a feedback module 209.
  • the feedback module 209 is used to generate and display feedback information to show that the local interactive system 110 has been activated. Specifically, after the control module 203 starts the local interaction system 110 to interact with the user, the feedback module 209 generates and displays signals such as feedback images, feedback lights, etc. to the user by connecting with visual feedback devices such as display screens and indicator lights to inform the user The user's native interactive system 110 has entered an interactive state. Or, the feedback module 209 provides auditory feedback to the user through the interactive system 110.
  • the acquisition module 201 is used to acquire voice information in the environment
  • the analysis module 202 is used to determine that there is exit information in the voice information
  • the control module 203 is also used to exit the local interactive system 110 and stop interacting with the user.
  • the medical system 200 further includes a networking module 210.
  • the acquisition module 201 is used to continue to acquire voice information in the environment;
  • the analysis module 202 is configured to determine that there is networking information in the voice information
  • the networking module 210 is configured to activate the interactive system 110 of the remaining medical devices 100 in the environment that matches the networking information based on the networking information;
  • the control module 203 is also used to control the local interaction system 110 to interact with the user based on the interaction timing of the networking information.
  • the medical system 200 of this application can be installed on a monitor or a central station.
  • the monitor and the central station may also include multiple main bodies, such as a host, a front-end device, and a remote server.
  • apart from the acquisition module 201, the distance measurement module 207, and the volume detection module 208, which need to be installed on the front-end device in order to acquire the user's voice information, the sound-source distance, and the volume value, the distribution of the other functional modules of the medical system 200 across the multiple main bodies is not particularly limited; they may be set to run on any front-end, middle-end, or back-end main body.
  • when a monitor or central station uses the medical system 200 of this application to implement the voice interaction method for coexisting medical devices, it gains the ability, when multiple medical systems 200 are present in the same spatial environment, to determine whether the voice information called out by the user contains the identity feature corresponding to the local machine before interacting with the user. This avoids mistakenly treating voice commands that are not directed at the local machine as local instructions and activating the corresponding local functions, which would make the user's voice instructions ambiguous.
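The activation gate these points describe can be sketched as follows. The device names, the feature strings, and the simple substring matching are illustrative assumptions, not the patented implementation:

```python
def should_activate(utterance, identity_features):
    """Return the first pre-allocated identity feature found in the user's
    utterance, or None if the utterance is not directed at this machine."""
    for feature in identity_features:
        if feature in utterance:
            return feature
    return None

# Only the device whose identity feature appears in the utterance activates;
# the others stay silent instead of answering a command meant for a peer.
devices = {"monitor-1": ["A"], "monitor-2": ["B"], "monitor-3": ["C"]}
activated = [name for name, feats in devices.items()
             if should_activate("A, start monitoring", feats)]
```

In this sketch only `monitor-1` ends up in `activated`, which is the directivity the bullet points above aim for.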

Abstract

The present invention provides a voice interaction method for use when multiple medical devices coexist, intended for voice interaction when several medical devices with voice interaction capability are present in the same spatial environment. A medical device monitors voice information in the environment, and only after analysis determines that the voice information contains the identity feature corresponding to the local machine does the device start its own interaction system to interact with the user. The identity features are obtained through pre-allocation, and different identity features correspond to different medical devices in the spatial environment. The method thereby gains directivity toward the multiple medical devices in the same space based on identity features: devices can be addressed individually or in groups, avoiding confusion of voice commands. This application also relates to a medical device and a medical system that execute this interaction method.

Description

Voice interaction method, medical system, and medical device for coexisting medical devices
Technical Field
This application relates to the technical field of medical equipment, and in particular to a voice interaction method for use when multiple medical devices coexist, a medical device executing this method, and a medical system executing this method.
Background
More and more medical devices now have voice interaction capability, making interaction between users and medical devices more convenient. However, because medical devices of the same model or type have similar functions, their voice interaction commands are also largely the same. As a result, in the same spatial environment, such as a ward, an outpatient clinic, or an operating room, if several voice-capable medical devices of the same model or type are present at once, all of them will receive and respond to the same voice command issued in the environment.
Such scenarios deprive the user's voice interaction of directivity: it becomes impossible to address one device, or a subset of devices, by voice and have it execute a medical function. If some devices in the environment are in a monitoring phase while others are idle, executing the same voice command can easily cause logical confusion in some of them. Furthermore, if multiple devices give voice feedback to the user at the same time, the feedback messages may interfere with one another and impair the user's effective reception of the information.
Summary of the Invention
This application proposes a voice interaction method for use when multiple medical devices coexist, to clarify the directivity of voice commands when several medical devices are present in the same spatial environment. This application also relates to a medical device and a medical system that execute this method. The application specifically includes the following technical solutions:
In a first aspect, the voice interaction method for coexisting medical devices of this application includes:
acquiring voice information in the environment;
determining that the identity feature is present in the voice information, the identity feature being obtained through pre-allocation;
starting the local interaction system to interact with the user.
The identity feature includes a single-machine feature and a multi-machine feature, and determining that the identity feature is present in the voice information includes:
determining that the voice information includes a single-machine feature or a multi-machine feature;
starting the local interaction system to interact with the user includes:
when the identity feature is determined to be a single-machine feature, starting the local interaction system to interact with the user;
when the identity feature is determined to be a multi-machine feature, starting the local interaction system and interacting with the user based on an interaction timing.
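The single-machine versus multi-machine branch can be sketched as a small dispatch function. The return strings and feature sets are illustrative assumptions:

```python
def dispatch(feature, single_features, multi_features):
    """Decide how the local interaction system starts, per the branch above:
    one-to-one interaction for a single-machine feature, timed interaction
    for a multi-machine (group) feature, and no reaction otherwise."""
    if feature in single_features:
        return "interact"
    if feature in multi_features:
        return "interact_with_timing"
    return "ignore"
```

A device pre-allocated single feature "A" and multi features "1", "3", "4" would answer "A" directly, answer "3" only in its turn, and ignore "Z".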
Determining that the identity feature is present in the voice information includes:
determining that the identity feature is a multi-machine feature;
determining that the voice information further includes timing information;
starting the local interaction system and interacting with the user based on the interaction timing includes:
interacting with the user according to the interaction timing of the timing information.
The multi-machine feature includes a sorting feature, and determining that the identity feature is present in the voice information includes:
determining that the identity feature is a multi-machine feature;
analyzing the order of the sorting feature within the multi-machine feature;
starting the local interaction system and interacting with the user based on the interaction timing includes:
interacting with the user based on the interaction timing of the sorting feature.
Determining that the identity feature is present in the voice information includes:
determining that the identity feature is a multi-machine feature;
determining that the voice information further includes host information;
judging whether the local machine is the host according to the host information;
starting the local interaction system and interacting with the user based on the interaction timing includes:
interacting with the user based on the interaction timing of the host judgment.
The multi-machine feature includes a sorting feature, and determining that the identity feature is present in the voice information includes:
determining that the identity feature is a multi-machine feature;
analyzing and comparing the order of the local machine's sorting feature;
judging whether the local machine is the host according to the comparison result;
starting the local interaction system and interacting with the user based on the interaction timing includes:
interacting with the user based on the interaction timing of the host judgment.
Determining that the identity feature is present in the voice information includes:
determining that the voice information includes timing information;
starting the local interaction system to interact with the user includes:
interacting with the user based on the interaction timing of the timing information.
The identity feature includes a sorting feature, and determining that the identity feature is present in the voice information includes:
analyzing the order of the sorting feature;
starting the local interaction system to interact with the user includes:
interacting with the user based on the interaction timing of the sorting feature.
Determining that the identity feature is present in the voice information includes:
determining that the voice information includes host information;
judging whether the local machine is the host according to the host information;
starting the local interaction system to interact with the user includes:
interacting with the user based on the interaction timing of the host judgment.
The identity feature includes a sorting feature, and determining that the identity feature is present in the voice information includes:
analyzing and comparing the order of the local machine's sorting feature;
judging whether the local machine is the host according to the comparison result;
starting the local interaction system to interact with the user includes:
interacting with the user based on the interaction timing of the host judgment.
The interaction timing of the host judgment includes:
if the local machine is judged to be the host, starting the local interaction system to interact with the user;
if the local machine is judged not to be the host, starting the local interaction system to interact with the user indirectly through the host.
The identity feature includes a sound-source distance condition, and acquiring voice information in the environment includes:
acquiring the environmental voice information and a sound-source distance value;
determining that the identity feature is present in the voice information includes:
determining that the sound-source distance value satisfies the sound-source distance condition;
judging that the identity feature is present in the voice information.
The identity feature includes a volume condition, and acquiring voice information in the environment includes:
acquiring the environmental voice information and a volume value;
determining that the identity feature is present in the voice information includes:
determining that the volume value satisfies the volume condition;
judging that the identity feature is present in the voice information.
The sound-source distance condition includes a sound-source distance threshold, and/or
judging that the local machine's sound-source distance value is greater than the sound-source distance value of any other medical device in the environment.
The volume condition includes a volume threshold, and/or
judging that the local machine's volume value is greater than the volume value of any other medical device in the environment.
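The two gating conditions can be evaluated together, as sketched below. Function and parameter names are hypothetical, and, following the embodiments later in the description, the relative comparison is read as "nearest sound source" for distance and "loudest" for volume:

```python
def meets_distance_condition(own_dist, other_dists, threshold=None):
    """Distance gate: the speaker must be within the preset threshold (if
    any) and closer to this device than to every coexisting device."""
    if threshold is not None and own_dist >= threshold:
        return False
    return all(own_dist < d for d in other_dists)

def meets_volume_condition(own_vol, other_vols, threshold=None):
    """Volume gate: the measured volume must exceed the preset threshold (if
    any) and be the highest value measured among the coexisting devices."""
    if threshold is not None and own_vol <= threshold:
        return False
    return all(own_vol > v for v in other_vols)
```

Requiring both gates at once, as the description later suggests, reduces accidental activation by a loud user who is far away, or a nearby user who is merely whispering to someone else.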
After starting the local interaction system to interact with the user, the method further includes:
acquiring voice information in the environment;
determining that group-exit information is present in the voice information;
exiting the local interaction system and stopping interaction with the user.
After starting the local interaction system to interact with the user, the method further includes:
acquiring voice information in the environment;
determining that networking information is present in the voice information;
starting, based on the networking information, the interaction systems of the remaining medical devices in the environment that match the networking information;
controlling the local interaction system to interact with the user based on the interaction timing of the networking information.
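Group-exit and networking can be modeled as edits to an ordered device list that stands for the current interaction timing. The class and the append-at-the-end re-allocation strategy are illustrative assumptions (the description also allows re-sorting the whole group):

```python
class InteractionGroup:
    """Interaction timing modeled as an ordered list of device names."""

    def __init__(self, devices):
        self.order = list(devices)

    def exit_group(self, device):
        # Group-exit information removes one device from the current timing.
        if device in self.order:
            self.order.remove(device)

    def network_in(self, newcomers):
        # One re-allocation strategy: keep the current timing and append the
        # devices matched by the networking information at the end.
        self.order += [d for d in newcomers if d not in self.order]
```

With this model, a user can widen or narrow an ongoing batch interaction without restarting it from scratch.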
After starting the local interaction system to interact with the user, the method further includes:
generating and displaying feedback information to show that the local interaction system has been started.
The feedback information includes visual feedback information and/or auditory feedback information.
The voice interaction method of this application for coexisting medical devices acquires voice information in the environment and analyzes whether the identity feature is present in it. The medical devices may be of the same type or model, or of different types or models. The introduction of identity features gives each medical device an independent voice activation command within the spatial environment. Only when the voice information output by the user is determined to contain the identity feature does the medical device start its own interaction system to interact with the user. Understandably, the interaction system of a medical device may interact with the user through a voice interaction function, or through various functions such as visual interaction or communication interaction. The method thereby uses distinct identity features to clarify the directivity of interaction with each medical device in the spatial environment, making it convenient for the user to interact with a given device by calling out its identity feature. This avoids the defect that devices of the same model or type, whose similar functions lead to the same or similar voice commands, leave voice instructions ambiguous in their target and throw some devices into logical confusion.
In a second aspect, this application further relates to a medical device including a processor, an input device, an output device, and a storage device, which are connected to one another. The storage device stores a computer program comprising program instructions, and the processor is configured to invoke the program instructions to execute the above voice interaction method for coexisting medical devices.
In a third aspect, this application further relates to a medical system, including:
an acquisition module for acquiring voice information in the environment;
an analysis module for determining that the identity feature is present in the voice information, the identity feature being obtained through pre-allocation;
a control module for starting the local interaction system to interact with the user.
The medical system further includes a pairing module for obtaining the pre-allocated identity feature.
The medical system further includes a sorting module for determining the timing with which the local interaction system interacts with the user.
The sorting module determines the timing of interaction between the local interaction system and the user based on the timing information acquired by the acquisition module.
The sorting module determines the timing of interaction between the local interaction system and the user based on the sorting feature obtained by the pairing module.
The medical system further includes a judgment module for determining whether the local machine interacts with the user as the host.
The judgment module determines whether the local machine interacts with the user as the host based on the host information acquired by the acquisition module.
The sorting module determines whether the local machine interacts with the user as the host based on the sorting feature obtained by the pairing module.
The medical system further includes a sound-source ranging module for detecting the sound-source distance value.
The medical system further includes a volume detection module for detecting the volume value.
The medical system further includes a networking module for starting the interaction systems of the remaining medical devices in the environment that match the networking information.
The medical system further includes a feedback module for generating and displaying feedback information to show that the local interaction system has been started.
The pre-allocated identity feature obtained by the pairing module includes a single-machine feature and a multi-machine feature;
when determining that the identity feature is present in the voice information, the analysis module determines that the voice information includes a single-machine feature or a multi-machine feature;
when starting the local interaction system to interact with the user, if the analysis module determines that the identity feature is a single-machine feature, the control module starts the local interaction system to interact with the user;
if the analysis module determines that the identity feature is a multi-machine feature, the control module starts the local interaction system and interacts with the user based on the interaction timing.
When determining that the identity feature is present in the voice information, the analysis module analyzes the multi-machine feature and the timing information included in the acquired voice information;
when the control module starts the local interaction system to interact based on the interaction timing, the sorting module controls the local interaction system to interact with the user based on the interaction timing of the timing information.
The pre-allocated multi-machine feature obtained by the pairing module includes a sorting feature. When determining that the identity feature is present in the voice information, the analysis module determines that the identity feature is a multi-machine feature;
the sorting module further analyzes the order of the sorting feature within the identity feature;
when starting the interaction system to interact based on the interaction timing, the control module interacts with the user based on the interaction timing of the sorting feature.
After the acquisition module acquires the voice information in the environment, the analysis module determines that the voice information includes the multi-machine feature and host information;
the judgment module judges whether the local machine is the host based on the host information;
the control module interacts with the user based on the interaction timing of the host judgment.
The pre-allocated multi-machine feature obtained by the pairing module includes a sorting feature; when the acquisition module acquires voice information in the environment, the analysis module determines that the identity feature is a multi-machine feature, and the sorting module analyzes and compares the order of the local machine's sorting feature;
the judgment module judges whether the local machine is the host according to the comparison result;
the control module interacts with the user based on the interaction timing of the host judgment.
When determining that the identity feature is present in the voice information, the analysis module determines that the voice information further includes timing information;
when starting the local interaction system to interact with the user, the control module interacts based on the interaction timing of the timing information.
The identity feature allocated to the pairing module includes a sorting feature; when the identity feature is determined to be present in the voice information, the sorting module compares the order of the sorting feature;
when starting the local interaction system to interact with the user, the control module interacts based on the interaction timing of the sorting feature.
When determining that the identity feature is present in the voice information, the analysis module determines that the voice information further includes host information;
the judgment module judges whether the local machine is the host based on the host information;
when starting the local interaction system to interact with the user, the control module interacts based on the interaction timing of the host judgment.
The identity feature allocated to the pairing module includes a sorting feature; when the analysis module determines that the identity feature is present in the voice information, the sorting module compares the order of the local machine's sorting feature;
the judgment module judges whether the local machine is the host based on the comparison result;
when starting the local interaction system to interact with the user, the control module interacts based on the interaction timing of the host judgment.
The interaction timing of the host judgment includes:
if the judgment module judges that the local machine is the host, the control module starts the local interaction system to interact with the user;
if the judgment module judges that the local machine is not the host, the control module starts the local interaction system to interact with the user indirectly through the host.
The pre-allocated identity feature obtained by the pairing module includes a sound-source distance condition;
when the acquisition module acquires voice information in the environment, the sound-source ranging module acquires the sound-source distance value;
the analysis module determines that the sound-source distance value satisfies the sound-source distance condition;
the analysis module judges that the identity feature is present in the voice information.
The pre-allocated identity feature obtained by the pairing module includes a volume condition;
when the acquisition module acquires voice information in the environment, the volume detection module acquires the volume value of the environmental voice information;
the analysis module determines that the volume value satisfies the volume condition;
the analysis module further judges that the identity feature is present in the voice information.
The pre-allocated sound-source distance condition obtained by the pairing module includes a sound-source distance threshold, and/or
judging that the local machine's sound-source distance value is greater than the sound-source distance value of any other medical device in the environment.
The pre-allocated volume condition obtained by the pairing module includes a volume threshold, and/or
judging that the local machine's volume value is greater than the volume value of any other medical device in the environment.
After the control module starts the local interaction system to interact with the user, the acquisition module acquires voice information in the environment;
the analysis module determines that group-exit information is present in the voice information;
the control module then exits the local interaction system and stops interacting with the user.
After the control module starts the local interaction system to interact with the user, the method further includes:
the acquisition module acquiring voice information in the environment;
the analysis module determining that networking information is present in the voice information;
the networking module starting, based on the networking information, the interaction systems of the remaining medical devices in the environment that match the networking information;
the control module controlling the local interaction system to interact with the user based on the interaction timing of the networking information.
The feedback information generated and displayed by the feedback module includes visual feedback information and/or auditory feedback information.
It can be seen that, in each of the above aspects, in scenarios where multiple medical devices with interaction systems coexist, the user can clarify the directivity of voice information toward each device in the spatial environment by calling out the identity feature corresponding to a given device. This makes it convenient to interact with the corresponding device through its identity feature, and avoids the defect that devices of the same model or type, whose similar functions lead to the same or similar voice commands, leave voice instructions ambiguous in their target and throw some devices into logical confusion.
Brief Description of the Drawings
Fig. 1 is a flowchart of the voice interaction method for coexisting medical devices in an embodiment of this application;
Fig. 2 is a schematic diagram of the medical device in an embodiment of this application;
Fig. 3 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 4 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 4a is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 5 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 5a is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 6 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 6a is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 7 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 8 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 8a is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 9 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 10 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 11 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 12 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 13 is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 13a is a flowchart of the voice interaction method for coexisting medical devices in another embodiment of this application;
Fig. 14 is a schematic diagram of the medical system in an embodiment of this application;
Fig. 15 is a schematic diagram of the medical system in another embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
请参阅图1所示的多医疗设备共存时的语音交互方法流程图,以及图2所示的本申请多医疗设备共存时的语音交互方法所对应的医疗设备100。本申请多医疗设备共存时的语音交互方法包括:
S10、获取环境内语音信息。
具体的,空间环境中存在有多个医疗设备100,且多个医疗设备100包括交互系统110,以使得医疗设备100通过交互系统110来实现与用户的交互功能。可以理解的,医疗设备100的交互系统110可以包括语音交互功能,或诸如视觉交互功能、通信交互等各种功能与用户进行交互。医疗设备100的交互系统110具备语音交互功能时,可以直接通过交互系统110来实时获取用户的语音信息。或者,当医疗设备100不具备语音交互功能,或不便于采 用语音交互功能获取用户的语音信息时,医疗设备100也可以通过专设的音频采集装置来监测空间环境内用户的语音信息。为方便描述,本申请基于医疗设备100的交互系统110具有语音交互功能来展开。采用音频采集装置的医疗设备100在执行本申请方案的过程中原理类似,并不因没有语音交互功能而受到影响。多个医疗设备100的交互系统110都实时获取该环境内用户的语音信息。因为医疗设备100具有交互系统110,而交互系统110可以与用户进行交互,因此交互系统110具备了获取用户语音信息的功能。
S20、确定到所述语音信息中存在所述身份特征,所述身份特征通过预分配获得。
具体的,医疗设备100在获得空间环境内用户的语音信息后,需要分析该语音信息中是否存在与本机对应的身份特征。语音信息包括空间环境内所有用户的所有对话,只要任意一个用户呼出的语音信息中包括有医疗设备100所预分配获得的身份特征,医疗设备100都可以通过分析该语音信息而确定到语音信息中包含对应本机的身份特征。
身份特征需要通过预分配获得。在对空间环境中的多个具有交互功能的医疗设备100进行交互之前,用户需要对每个医疗设备100设定特定的身份特征。每一个身份特征都具有区别于其它身份特征的独特标识,以使得用户在空间环境中呼出对应该独特标识的身份特征时,能够匹配到与该身份特征所对应的医疗设备100,并与该医疗设备100进行交互。
身份特征的分配可以在空间环境中完成。当多个医疗设备100置于同一空间环境之后,用户可以根据该空间环境中的医疗设备100的具体数量和类别,来对各个医疗设备100进行身份特征的分配。身份特征的分配可以通过在各个医疗设备100上采用按键、触摸输入或语音输入等多种方式完成。医疗设备100对身份特征的确认,可以通过用户呼出该身份特征进行输入,也可以通过用户在医疗设备100上进行选择或文字、数字的输入,再由医疗设备100将用户的输入转化为与本机对应的身份特征。最简单的实施方式可以为对空间环境中的各个医疗设备100逐一编号:假如空间环境中同时存在9个具有语音交互功能的医疗设备100,则对9个医疗设备100依次输入1至9的数字编号,作为这9个医疗设备100各自的身份特征。可以理解的,此时 任意一个医疗设备100所获得的数字编号即为该医疗设备100所对应的身份特征,且该数字编号也为该身份特征区别于其它医疗设备100的独特标识。如果用户呼出的语音信息中包含数字“4”,则代表用户呼出了身份特征为“4”的语音信息,此时用户会对应到空间环境内9个医疗设备100中的身份特征被定义为“4”的医疗设备100,而其它数字编号为“1,2,3,5,6,7,8,9”的8个医疗设备100因为没有得到与本机对应的身份特征而被认为没有被选择到。
身份特征的分配还可以采用出厂原始设置的方式来完成。类似于每一个医疗设备100在出厂时都会得到一个对应本机的身份ID,该身份ID作为对应医疗设备100区别于市面上任意一个其它医疗设备100的标识编码,可以在任意空间环境下将医疗设备100区别于任意一个其它医疗设备100。因此采用类似的逻辑,将该身份ID用作该医疗设备100的独特标识,或独立于身份ID之外另建一套编码机制来单独作为医疗设备100的独特标识,可以在医疗设备100出厂阶段即通过预分配获得对应本机的身份特征。可以理解的,因为医疗设备100在出厂阶段即预分配获得区别于其余任意医疗设备100的身份特征,因此在任意空间环境下,多个医疗设备100之间都已经获得了预分配的身份特征。
S30、启动本机交互系统110与用户进行交互。
具体的,在确定到空间环境中的用户呼出了包含对应本机的身份特征之后,医疗设备100启动本机的交互系统110与用户进行交互。需要提出的是,本申请提及的启动本机交互系统或医疗设备与用户进行交互,指的是启动交互系统或医疗设备与用户的交互功能。例如,本实施例提及的启动本机交互系统110与用户进行交互,指的是启动交互系统110与用户的交互功能。因为前述提到,医疗设备100可能会通过交互系统110来获取空间环境中的语音信息。因此,交互系统110在未与用户进行交互之前,也可以执行语音信息获取的任务。由此,在这种情况下,交互系统110的启动并非取决于是否确定到语音信息中存在对应本机的身份特征。但交互系统110与用户的交互功能,是通过确定到语音信息中存在该身份特征才启动的。
由此,本申请多医疗设备共存时的语音交互方法,通过对空间环境中的各个医疗设备100分配其对应的身份特征,而区分开了各个医疗设备100,使得 用户可以通过呼出与目标医疗设备100对应的身份特征的操作,来启动对应的医疗设备100的交互系统110的交互功能,从而实现了对空间环境中的医疗设备100进行有指向性的交互操作的目的。对于空间内的各个医疗设备100,因为通过预分配获得了与本机对应的身份特征,可以在确定到用户呼出的语音信息包含与本机对应的身份特征后,才与用户进行交互。而在没有被启动的情况下,医疗设备100始终处于未启动交互功能的状态,避免了在用户与空间环境内其它医疗设备100进行交互时,因为接收到用户对其它医疗设备100发出的语音指令而通过本机进行应答或执行相应的动作,影响本机的正常工作。
需要提出的是,如果用户在呼出包括身份特征的语音信息时,还同时定义了对应该身份特征的医疗设备100所要执行的操作,或描述为需要启动的具体功能,而该操作或具体功能又能够被医疗设备100的交互系统110所控制。此时,医疗设备100在确定到语音信息中包含身份特征的同时,还接收到了包含本机对应功能的启动操作的指令。医疗设备100可以直接经由用户的语音信息来同时做出启动交互系统110,和启动本机对应功能的操作。例如,医疗设备100在接收用户语音信息后,确定到本机对应的身份特征同时,直接启动本医疗设备100的数据采集功能、或将采集得到的体征数据通过显示屏或者语音播报反馈给用户等。因为医疗设备100是在确定到对应本机的身份特征后,才确定用户的启动操作的指令是指向本机的,进而启动了本机对应的功能。该情况也可以视为用户先通过身份特征启动了交互系统110,然后通过交互系统110来启动了医疗设备100的相应功能。由此,交互系统110与用户的交互是在语音交互功能不应答的情况下完成的相应功能的启动,这种情况也属于本申请多医疗设备共存时的语音交互方法所要求保护的范围。交互系统110在不应答用户的情况下对应完成相应功能的启动,也属于交互系统110被身份特征启动后与用户进行交互的一种完成方式。
可以理解的,用户对空间环境中多个医疗设备100的交互,可以一对一的只针对其中一个医疗设备100进行交互,也可以一次启动多个医疗设备100来进行批量的交互。因为本申请多医疗设备共存时的语音交互方法的提出,使得用户对目标医疗设备100的启动相对便捷。由此,用户在通过身份特征启 动医疗设备100时,可以根据实际需要来任意提取所需要的一个或多个医疗设备100并进行交互。
当前具备语音交互功能的医疗设备100都可以胜任一对一的单机语音交互工作。大多数医疗设备100的交互系统110也是基于一对一的交互逻辑来进行设计的,指令相对简洁,易于执行。而要想实现通过语音指令来对多个医疗设备100进行批量操作,在交互时序、功能对应等方面需要设计一套相对复杂且细化的交互逻辑。医疗设备100在应对用户批量操作的过程中也需要进行一定的适应性设定,才能满足用户在空间环境中同时对多个医疗设备100进行交互的需求。
因此,请参见图3的实施例。在本实施例中,医疗设备100的交互系统110同时设置有单机交互模式和多机交互模式。相对应的,多医疗设备共存时的语音交互方法包括:
S10a、获取环境内语音信息。
S20a、确定到所述语音信息包括单机特征或多机特征,所述身份特征通过预分配获得。
具体的,医疗设备100通过预分配获得的身份特征,可以包括只针对单机交互模式的单机特征,以及包括针对多机交互模式的多机特征。即医疗设备100预分配获得的身份特征并不限定于一个,可以为多个。多个身份特征中至少有一个属于单机特征,其余的身份特征为多机特征。多机特征可以理解为多种对应同一个医疗设备100与不同的其它医疗设备100之间的不同分组。用户呼出不同多机特征时,该多机特征作为身份特征均可以对应到同一个医疗设备100,使得该医疗设备100的交互系统110被启动。但随着多机特征的不同,与该同一个医疗设备100一起与用户同时进行交互的其它医疗设备100可以不同。或描述为,对应同一个医疗设备100,用户通过呼出不同的多机特征来对空间环境内不同预设分组的医疗设备100进行批量操作,且每一个预设分组中都包含有该身份特征所对应到的同一个医疗设备100。
S30a、当确定到所述身份特征为单机特征时,启动本机交互系统110与用户进行交互;
当确定到所述身份特征为多机特征时,启动本机交互系统110并基于交 互时序与用户进行交互。
具体的,在本实施例中,为了便于区分,设定医疗设备100的交互系统110同时具备单机交互模式和多机交互模式。通过预先分配到的单机特征和多机特征来对应启动或进行模式的切换。即医疗设备100在获取到用户呼出的语音信息后,在确定到该身份特征中包括身份特征的同时,还分析该身份特征为单机特征还是多机特征。当确定到所述身份特征为单机特征时,可直接启动本机交互系统110进入单机交互模式与用户进行交互;当确定到所述身份特征为多机特征时,启动本机交互系统110并基于交互时序与用户进行交互。
而在另一些实施例中,可以不进行交互模式的划分。即医疗设备100在启动本机交互系统110与用户进行交互时,只通过判定身份特征具体为单机特征还是多机特征来与用户进行交互。可以理解的,当判定到身份特征为单机特征时,医疗设备100可以一对一的与用户进行交互,指令相对简洁,易于执行。而当判定到身份特征为多机特征时,医疗设备100与用户的交互需要基于一定的交互时序,避免同时被启动的多个医疗设备100同时与用户交互后造成用户信息接收的困难。此时,医疗设备100并不用划分单机交互模式或多机交互模式,区别只在于是否遵循交互时序。当然,本实施例中对于交互模式的划分,可以便于理解,清楚的表达申请人的方案。
前述中提到,医疗设备100的单机交互模式相对直接,医疗设备100在确定到对应本机的单机特征之后,直接启动本机交互系统110与用户进行交互。而多机批量交互的场景相对复杂,医疗设备100在基于交互时序时,会与同组被一同启动的医疗设备100之间进行一定的时序分配,避免多个医疗设备100同时进行语音应答而造成用户听不清、或多个医疗设备100同时反馈监测数据造成用户不便于接收等情况。因此,图3的实施例提供了用户在空间环境下对多个医疗设备100进行批量交互的便利。
一种应用场景请参见表1。该应用场景可以对应病房中,该病房内同时放置有三张病床,每张病床都设有一个医疗设备100,即医疗设备1、2、3共三个医疗设备100。三个医疗设备100对应的身份特征分布如表1所示:
[Table 1 (reconstructed from the surrounding text; the original is an image):]
Device | Single-machine feature | Multi-machine features
医疗设备1 | A | 1, 3, 4
医疗设备2 | B | 2, 3, 4
医疗设备3 | C | 1, 2, 4
Table 1
可以理解的,通过表1的身份特征的分配,医疗设备1通过预分配获得的身份特征包括“A\1\3\4”共四种。其中字母“A”为医疗设备1的身份特征中的单机特征,数字“1\3\4”为医疗设备1的三个多机特征。当用户呼出的语音信息中包括有“A、1、3、4”中任意一个身份特征时,都可以对应启动医疗设备1的交互系统110,并与医疗设备1进行交互。进一步,当用户呼出字母“A”时,医疗设备1确定到用户的身份特征为单机特征,此时医疗设备1的交互系统110会进入到单机交互模式中,来与用户进行交互。可以理解的,此时字母“B”和字母“C”分别对应到医疗设备2和医疗设备3。即字母B为医疗设备2的单机特征,字母C为医疗设备3的单机特征。医疗设备2和医疗设备3因为没有确定到对应本机的身份特征,其交互系统110不会启动交互功能,保证用户单独与医疗设备1进行交互。同理的,当用户呼出的身份特征为字母“B”或字母“C”时,会对应启动到医疗设备2或医疗设备3的交互系统110。
而对于医疗设备2,其通过预分配获得的多机特征包括数字“2\3\4”。当用户呼出数字“3”时,因为数字3同时为医疗设备1和医疗设备2的多机特征,因此数字“3”同时启动了医疗设备1和医疗设备2的交互系统110,用户可以在呼出数字“3”后对医疗设备1和医疗设备2通过其各自的交互系统110进行批量交互。也即数字“3”将医疗设备1和医疗设备2分为了一组,使得医疗设备1和医疗设备2同时具有数字“3”作为多机特征,在该分组下医疗设备1和医疗设备2能够独立于医疗设备3来与用户进行交互。
对于医疗设备3,其通过预分配获得的多机特征包括数字“1\2\4”。当用户呼出数字“1”时,医疗设备3与医疗设备1的交互系统110其交互功能同时被 启动,便于用户对医疗设备1和医疗设备3进行批量交互。而当用户呼出数字“4”时,由于医疗设备1、医疗设备2、医疗设备3都分配到“4”作为多机特征,因此此时病房内的全部医疗设备100的交互系统110均被启动,用户可以对病房内的所有医疗设备100批量进行交互。
在本场景下,用户可以为医护人员。基于本病房内的三张病床实际的病人数量和床位分布情况,医护人员可以通过呼出对应的身份特征来对三个医疗设备100进行任意组合以及交互。例如本病房内只有一张病床有病人的情况下,通过呼出设置于该病床边的医疗设备100所对应的身份特征,来与该医疗设备100的交互系统110进行语音交互,对病人实施体征数据采集、体征数据报送、定时采集体征数据等操作。而当本病房中有两张病床有病人,或三张病床有病人的情况下,医护人员可以通过呼出对应医疗设备100的多机特征来同时启动两个或三个医疗设备100的交互系统110的交互功能,并通过与交互系统110的批量交互来执行上述操作。
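The allocation in Table 1 can be modeled as a lookup from a called-out feature to the set of devices it activates. This is an illustrative sketch, not the patented implementation:

```python
# Identity features per Table 1: one single-machine feature and a set of
# multi-machine (group) features for each of the three ward devices.
FEATURES = {
    "医疗设备1": {"single": "A", "multi": {"1", "3", "4"}},
    "医疗设备2": {"single": "B", "multi": {"2", "3", "4"}},
    "医疗设备3": {"single": "C", "multi": {"1", "2", "4"}},
}

def activated_by(feature):
    """Names of the devices whose interaction systems the feature starts."""
    return sorted(name for name, f in FEATURES.items()
                  if feature == f["single"] or feature in f["multi"])
```

Calling out "A" reaches only 医疗设备1, "3" reaches devices 1 and 2 as a group, and "4" reaches all three, matching the scenarios described above.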
表1的身份特征分配方式,多适用于空间环境内医疗设备100数量较少的情况。此时通过简单的数字或字母分配,就能够满足单机特征和多机特征的设定。而在空间环境内医疗设备100较多时,可以参见表2的身份特征分配方式:
[Table 2 (reconstructed from the surrounding text; the original is an image):]
Device | Single-machine feature | Multi-machine features
医疗设备A1 | A1 | A, 1
医疗设备A2 | A2 | A, 2
... | ... | ...
医疗设备AN | AN | A, N
医疗设备B1 | B1 | B, 1
医疗设备B2 | B2 | B, 2
... | ... | ...
医疗设备BN | BN | B, N
Table 2
区别于表1的分配方式,表2的身份特征分配方式将空间环境内多个医疗设备100先进行大类的分组,然后通过组内的编号来对应到本组中的各个医疗设备100。同样以病房来举例,假设病房中有两张病床,每张病床上都有N个医疗设备100,N个医疗设备100分别对应采集该病床上病人的不同体征数据。例如心电、血压、心率、体温等体征数据,都采用单独的医疗设备100来进行采集。由此,可以定义病房内用于采集同一病人体征数据的医疗设备100统一具有“A”作为多机特征,用于采集另一个病人体征数据的医疗设备100统一具有“B”作为多机特征。然后,根据病床“A”和病床“B”中各个医疗设备100的功能不同,设置数字编号来将两组分组中的不同功能的医疗设备100进行区分定义,作为每一个医疗设备100的单机特征,如医疗设备A1、医疗设备B2等。
进一步,将两张病床功能相同的医疗设备100设置统一的数字多机特征,例如采集心电体征数据的医疗设备A1和医疗设备B1均设置数字“1”为多机特征,采集血压的医疗设备A2和医疗设备B2均设置数字“2”为多机特征......由此,在医疗设备AN和医疗设备BN均设置数字“N”为多机特征之后,可以将数字“1-N”定义到不同的体征数据的采集功能。
根据表2的身份特征分配,当病房内只有一个病人的时候,医护人员只需要呼出每个医疗设备100对应的单机特征,就能够逐一与该病床对应的多个医疗设备100进行交互。或者医护人员通过呼出该病床对应的多机特征“A”或“B”,来同时启动该病床对应的多个医疗设备100的交互系统110的交互功能,开展批量交互。而当病房内同时有两个病人的时候,医护人员可以通过呼出数字,来同时启动两张病床上功能相同的两个医疗设备100,通过语音交互来批量采集两个病人的同一项体征数据。
由此,本申请多医疗设备共存时的语音交互方法,在对空间环境内各个医疗设备100进行合理的分组规划之后,通过简单的身份特征分配,提供了用户精准指向启动医疗设备100交互系统110交互功能的同时,还提供了用户精准的同时启动多个医疗设备100交互系统110的交互功能以进行批量交互的便利。通过科学合理的分组规划,本申请多医疗设备共存时的语音交互方法可以应对空间环境中医疗设备100任意数量的单机交互或多机交互场景,同时避免了医疗设备100出现指令接收混乱的情况,保证用户能够有效接收到医疗设备100的反馈,简化了用户在使用交互功能过程中繁复的重复交互过程。
需要提出的是,表1和表2的分配方式,采用了数字和字母组合的形式。但本申请多医疗设备共存时的语音交互方法并不限定身份特征的具体设定内容。举例来说,用户同样可以采用功能性词语对医疗设备100进行身份特征的分配。例如“血压”、“心电”、“心率”等词语,对应到功能之后还简化了分配逻辑,使得身份特征能够与医疗设备100的各项功能对应,方便用户记忆。此外,用户还可以通过设定任意便于自身记忆的词语、字母、数字,或任意形式的组合来完成身份特征的设定。只要达到身份特征区别于其它一个或多个医疗设备100的效果,使得用户能通过呼出该身份特征来对应启动医疗设备100,都属于本申请所涉及的身份特征的分配方式。
请参见图4的实施例,图4为本申请多医疗设备共存时的语音交互方法另一实施例的流程图。在本实施例中,身份特征包括单机特征和多机特征。本方法包括如下步骤:
S10b、获取环境内语音信息。
S21b、确定到所述语音信息包括所述多机特征,所述身份特征通过预分配获得。
S22b、确定到所述语音信息中还包括时序信息。
具体的,用户呼出的语音信息中,包含的身份特征为多机特征。同时,用户呼出的语音信息中,还包括有时序信息。该时序信息与多机特征一并被医疗设备100所获取。
S30b、启动本机交互系统110基于所述时序信息的交互时序依次与用户 进行交互。
具体的,医疗设备100在确定到对应本机的多机特征后,基于交互时序与用户进行交互。因为多机特征同时启动了多个医疗设备100的交互系统110与用户交互,多个交互系统110如果同时与用户进行交互,同样存在多个交互系统110同时对用户进行语音反馈,造成用户信息接收受到干扰的情况。为避免这一现象的发生,用户可以通过呼出多机特征的同时呼出时序信息,来为该多机特征所对应的多个医疗设备100进行排序,多个医疗设备100基于时序信息的排序来依次与用户进行交互。
时序信息可以描述为用户对同时需要进行交互的多个医疗设备100进行的实时排序。例如对应表1的实施例,用户可以在呼出多机特征“4”的时候,同时通过语音对医疗设备1、2、3进行排序,例如“按照医疗设备3、医疗设备2、医疗设备1的顺序进行应答”。此时该“按照医疗设备3、医疗设备2、医疗设备1的顺序进行应答”就可以被分析确定为时序信息。因为多个医疗设备100所获得的时序信息是相同的,因此多个医疗设备100均按照同一排序标准与用户进行交互。
需要提出的是,医疗设备100与用户基于时序信息依次进行交互的情况,并不限定医疗设备100的交互系统110每一项信息交互都必须依次进行。可以设定多个医疗设备100只有在报送与本机测得的体征数据有关的信息,或需要顺序报送以免用户接收受到干扰的信息时才基于时序信息依次与用户进行交互。这样的设置可以进一步的节约交互时间,提高效率。
请参见图5的实施例,图5为本申请多医疗设备共存时的语音交互方法另一实施例的流程图。在本实施例中,所述身份特征中包括单机特征和多机特征,其中多机特征中还包括排序特征。本方法包括如下步骤:
S10c、获取环境内语音信息。
S21c、确定到所述语音信息包括所述多机特征,所述多机特征通过预分配获得。
S22c、分析所述多机特征中的所述排序特征的排序。
具体的,在本实施例中,身份特征还同时预设了对应到多机特征的排序特征。该排序特征类似图4实施例中的时序信息,其区别在于时序信息是用户在 呼出的语音信息中进行设定的,而排序特征是在分配身份特征的阶段就已经预设。相较于时序信息,排序特征可以简化用户在每次通过多机特征启动多个医疗设备100之后还要追加设定医疗设备100的应答顺序的操作,相对更加便于用户使用。
用户呼出的语音信息中,包含的身份特征为多机特征。医疗设备100在获得该多机特征之后,同时基于该多机特征以及多机特征中对应本机的排序特征的优先级排序,来确定本机在多个医疗设备100与用户进行交互时的优先级顺序。
S30c、启动本机交互系统110并基于所述排序特征的交互时序依次与用户进行交互。
具体的,设定排序特征的实施方式在医疗设备100启动本机交互系统110之后的实施方式与图4的实施例类似。因为已经通过排序特征确定了各个医疗设备100与用户交互的先后顺序,因此本实施例也可以避免多个交互系统110同时对用户进行语音反馈,造成用户信息接收受到干扰的缺陷。
对应本实施例的方法,通过表2举例:在分配身份特征的阶段,可以将医疗设备A1-AN中的数字编号“1-N”定义为排序特征。用户在呼出多机特征“A”后,医疗设备A1-AN的交互系统110均被启动与用户进行交互。此时,在S22c的步骤中,医疗设备A1-AN均基于本机的“1-N”数字编号进行分析,从而确定本机在对应多机特征“A”的多机模式下所处的交互顺序,然后基于该交互顺序先后由医疗设备A1、医疗设备A2、......医疗设备AN依次与用户进行交互。
可以理解的,在交互的过程中,前一医疗设备100和后一医疗设备100之间存在交接的问题。例如医疗设备A2作为后一医疗设备100需要接收到医疗设备A1的交互完成的信号,才能确定到当前交互时序轮到本机的信息。能够解决这一问题的方式很多,例如医疗设备A1在交互完成后,可以通过发出交互完成的信号指令来将交互时序交接给医疗设备A2;或由用户发出交互完成的信号指令来将交互时序交接给医疗设备A2。同时,交互完成的信号指令可以为相同的指令,医疗设备A2在接收到一个交互完成的信号指令之后就确定到当前交互时序轮到本机,医疗设备A3通过计数,在接收到两个交互完成的信号指令之后能够确定到当前交互时序轮到本机,以此类推。交互完成的信 号指令也可以针对不同的医疗设备100来单独设置,例如医疗设备A1在交互完成后,可以通过发出“请医疗设备A2开始应答”这样包含医疗设备A2的身份特征或名称的信息来实现。当然,如果在多个医疗设备100同时连接有服务器的场景下,可以通过服务器的调配来完成相关控制。
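The completion-signal counting scheme described above can be sketched as follows: a device at position n in the sorting order takes its turn after hearing n-1 identical "interaction complete" signals. The class and attribute names are illustrative assumptions:

```python
class TurnTracker:
    """Turn-taking by counting identical 'interaction complete' signals.

    A device whose sorting feature places it at 1-based position n knows the
    timing has reached it once n-1 completion signals have been heard.
    """

    def __init__(self, position):
        self.position = position
        self.signals_heard = 0

    def on_complete_signal(self):
        self.signals_heard += 1

    @property
    def my_turn(self):
        return self.signals_heard == self.position - 1
```

Device A1 (position 1) speaks immediately; A3 (position 3) waits silently until it has counted the two completion signals from A1 and A2.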
请参见图6的实施例,在本实施例中,所述身份特征中包括单机特征和多机特征。本申请多医疗设备共存时的语音交互方法包括如下步骤:
S10d、获取环境内语音信息。
S21d、确定到所述身份特征为多机特征,所述身份特征通过预分配获得。
S22d、确定到所述语音信息中还包括主机信息。
S23d、基于所述主机信息判定本机是否为主机。
具体的,用户在呼出包含多机特征进行批量交互的同时,还呼出包括主机信息的内容,从而使得医疗设备100在获得多机特征的同时,还通过分析得到主机信息,并进行相应的操作和响应。基于主机信息,可以设定该多机特征对应到的医疗设备100中任意一个医疗设备100为主机。本申请不限定主机信息的具体内容,通过任意方式定义出多个医疗设备100中的一台医疗设备100为主机,使其区别于其余医疗设备100都可以作为主机信息。例如,在呼出多机特征的时候还呼出其中一台医疗设备100的单机特征,进而定义到对应该单机特征的医疗设备100为主机。
S30d、启动本机交互系统110基于主机判定的交互时序与用户进行交互。
被定义为主机的医疗设备100基于交互时序,并通过本机的交互系统110与用户进行交互。此时,多机特征所对应到的所有医疗设备100可以只通过被定义为主机的医疗设备100来与用户进行交互。通过设定主机的方式,使得用户只需要与一个医疗设备100进行交互,就实现了同时对多个医疗设备100进行批量交互的便利。用户的语音指令输入,或多个医疗设备100的语音反馈动作,都通过同一个医疗设备100来完成。这样的实施方式同样避免了用户在同一空间环境下同时与多个医疗设备100同时进行交互而容易遇到的信息接受不良、逻辑混乱等缺陷。设定主机的多机交互模式可以理解为用户与设为主机的医疗设备100进行单机交互,即可完成对多个医疗设备100的批量指令输入或信息反馈等效果。
进一步的,通过设定主机,用户与作为主机的医疗设备100在基于交互系统110进行交互的过程中,用户需要批量操作设定的指令可以由作为主机的医疗设备100接收后逐一分配给其它医疗设备100来完成,也可以采取其它医疗设备100直接获取用户的语音信息并接受指令来完成。
具体的,基于主机判定的交互时序与用户进行交互,也存在主机信息未对应到本机为主机的情况。即步骤S23d的判定结果,可能对应到本机为非主机的情况。请参看图7的流程图,在对应到图6实施例中,被同一多机特征所启动,但没有被设定为主机的医疗设备100与用户的交互动作,可以采用以下步骤来完成:
S10e、获取环境内语音信息。
S21e、确定到所述身份特征为多机特征,所述身份特征通过预分配获得。
S22e、确定到所述语音信息中还包括主机信息。
S23e、基于所述主机信息确定到本机不为所述主机。
S30e、启动本机交互系统110基于主机判定的交互时序与用户进行交互,且本机交互系统110通过所述主机与用户间接交互。
具体的,在本实施例中,对应到图6实施例中的医疗设备100被多机特征启动,但并没有被设置为主机的情况。此时,本机交互系统110被启动后,未被设置为主机的医疗设备100在后续交互过程中,通过所述主机与用户间接交互。也即本机交互系统110只用于接收用户呼出的语音信息中的指令,而不对用户进行直接应答。用户在批量交互的过程中,该多机特征所对应到的多个医疗设备100,除被设定为主机的医疗设备100之外,均不会通过扬声器等装置发出声音反馈,以使得用户只接收被设为主机的医疗设备100的声音反馈,保证用户与被设为主机的医疗设备100的交互不受干扰。
在本实施例中,因为非主机的医疗设备100其多机交互模式不对用户的语音进行直接应答,因此在用户需要非主机的医疗设备100进行反馈时,医疗设备100可以通过与主机医疗设备100之间通信连接,将需要反馈的内容通过主机医疗设备100来向用户进行反馈。非主机的医疗设备100还可以通过本机其它的反馈装置,如显示器、指示灯等装置来对用户进行反馈。
另一种描述方式,在图6和图7的实施例中,基于主机判定的交互时序 包括两种情况:
1、若判定到本机为所述主机,则启动本机交互系统110与用户进行交互;
2、若判定到本机不为所述主机,则启动本机交互系统110通过所述主机与用户间接交互。
请参阅图8的实施例,在本实施例中,身份特征包括单机特征和多机特征,且多机特征包括有排序特征。本申请多医疗设备共存时的语音交互方法包括如下步骤:
S10f、获取环境内语音信息。
S21f、确定到所述语音信息中包括多机特征,所述身份特征通过预分配获得。
S22f、分析并比对本机的所述排序特征的排序。
S23f、依据比对结果判定本机是否为主机。
S30f、基于所述主机判定的交互时序启动所述本机交互系统110与用户进行交互。
具体的,在本实施例中,同样通过设置主机的方式来开展多个医疗设备100与用户的多机交互。但此时并不需要用户在呼出多机特征的同时特意设定主机,而是通过分配身份特征的同时设定排序特征,对应同一个多机特征的多个医疗设备100之间通过排序特征来比较相互之间的优先级排序,进而自动推出排序特征中优先级最高的医疗设备100作为主机与用户进行交互。可以理解的,本实施例中的排序特征与图5实施例中的排序特征可以相同,并根据预设的多机交互模式或交互时序的具体实施方式来确定排序特征的使用方法,二者并不冲突。
相似的逻辑,对应图9的实施例,身份特征包括单机特征、多机特征,且多机特征包括排序特征。如果医疗设备100的排序特征在多机特征中并非最高的排序特征时,本方法可包括如下步骤:
S10g、获取环境内语音信息。
S21g、确定到所述身份特征为多机特征,所述身份特征通过预分配获得。
S22g、分析所述多机特征中的所述排序特征的排序,并确定到所述多机特征所对应的多个医疗设备100中至少一个所述医疗设备100的排序特征的 优先级高于本机的排序特征的优先级。
S30g、启动所述本机交互系统110基于主机判定的交互时序通过所述主机间接与用户进行交互。
具体的,在本实施例中,对应到图8中的医疗设备100的排序特征判定,如果本机的排序特征比对后优先级不为最高,即图9的实施例中医疗设备100在接收到的多机特征中,其排序特征的优先级低于同时被启动的医疗设备100中至少一个排序特征的优先级,即认为该医疗设备100不会被设定为主机。同图7的实施例相似,未被设置为主机的医疗设备100在后续交互过程中,通过主机间接与用户进行交互,且只用于接收用户呼出的语音信息中的指令,而不对用户进行直接应答,以保证用户与被设为主机的医疗设备100的交互不受干扰。
可以理解的,在本实施例中,当用户需要非主机的医疗设备100进行反馈时,医疗设备100可以通过与主机医疗设备100之间通信连接,将需要反馈的内容通过主机医疗设备100来向用户进行反馈。非主机的医疗设备100还可以通过本机其它的反馈装置,如显示器、指示灯等装置来对用户进行反馈。
需要提出的是,在图3的实施例中,本申请方法步骤S20a通过单机特征和多机特征的区分,一定程度上简化了S30a步骤中是否需要基于交互时序与用户进行交互的判定逻辑。即S20a步骤中确定到身份特征为单机特征时,可以跳过判定交互时序的步骤,直接与用户进行交互。而当S20a步骤中确定到身份特征为多机特征后,才基于预分配得到的设置,或用户定义的相关设置来判定交互时序。当然,还存在一些实施例,对于身份特征不进行单机特征或多机特征的区分,在接到包含身份特征的语音信息之后,直接进行交互时序的判定,并基于交互时序的判定结果来与用户进行交互。例如图4a中的实施例:
S100b、获取环境内语音信息。
S200b、确定到所述语音信息中包括时序信息。
S300b、启动本机交互系统110基于所述时序信息的交互时序依次与用户进行交互。
具体的,图4a的实施例与图4的实施例区别在于,在步骤S200b中,对身份特征的判断,是通过确定到用户呼出的语音信息中包含有时序信息来实现 的。医疗设备100在确定到时序信息的同时,可以认为时序信息包括了身份特征以及交互时序两方面的内容。此时,时序信息的功能也包括两个方面:作为身份特征启动多个医疗设备100的交互系统110,并确定多个医疗设备100的交互时序。因此,当用户呼出的语音信息中包含时序信息后,医疗设备100可以直接启动本机交互系统110基于时序信息的交互时序与用户进行交互。通常情况下,用户在同时启动多个医疗设备100的交互系统110时才会一同呼出时序信息,此时直接基于时序信息的交互时序与用户进行交互,也可以达到类似于图4的效果。另一种实施例,参见图5a。在本实施例中,身份特征包括有排序特征,本实施例具体包括:
S100c、获取环境内语音信息。
S201c、确定到所述语音信息包括身份特征。
S202c、分析所述身份特征中的所述排序特征的排序。
S300c、启动本机交互系统110并基于所述排序特征的交互时序依次与用户进行交互。
具体的,在本实施例中,因为没有分别设置单机特征或多机特征,排序特征直接包括于身份特征中。当用户呼出包括有排序特征的身份特征之后,医疗设备100直接通过排序特征来确定交互时序。可以理解的,对应到图5的实施例,虽然本实施例没有将单机特征和多机特征分开,但是对于包括排序特征的身份特征,可以默认为该身份特征对应启动了两个或两个以上的医疗设备100的交互系统110,因此该身份特征才会包括排序特征。而如果有一个身份特征中不包括排序特征,则这一个身份特征应该只对应启动一个医疗设备100的交互系统110。本实施例虽然没有定义身份特征对应单机或多机的区别,但通过身份特征中是否包括排序特征,同样可以实现同时启动多个医疗设备100的交互系统110,并使得多个医疗设备100的交互系统110有序交互。
或,图5a的实施例还可以描述为:当身份特征只对应启动一个医疗设备100的交互系统110时,其包括有排序特征,且该排序特征对应的医疗设备100的交互系统110直接与用户进行交互。即该排序特征中只有一个交互系统110的排序,无需等待其它交互系统110的应答结束。
实施例请参见图6a,对于用户呼出的语音信息中包括主机信息的情况, 还可以采用以下的步骤来完成:
S100d、获取环境内语音信息。
S201d、确定到所述语音信息中包括主机信息。
S202d、依据所述主机信息判定本机是否为主机。
S300d、启动本机交互系统110并基于主机判定的交互时序与用户进行交互。
具体的,在本实施例中,主机信息用于从多个医疗设备100中确定出主机。后续的,基于主机信息对主机的判定,以及基于主机判定的交互时序与用户进行交互等步骤,同图6的步骤完全相同。可以理解的,在本实施例中,同样省去了对单机特征和多机特征的判定,而是直接通过主机信息来认定用户需要同时启动多个医疗设备100的交互系统110。相较于图6的实施例,本实施例更简洁,节约了用户语音信息的信息量,且同样较好的达到了类似于图6的实施效果。
相似的逻辑,请参见图8a的实施例。在本实施例中,身份特征包括排序特征。具体步骤如下:
S100f、获取环境内语音信息。
S201f、确定到语音信息中包括所述身份特征。
S202f、分析所述身份特征中的所述排序特征的排序。
S203f、依据比对结果判定本机是否为主机。
S300f、基于所述主机判定的交互时序启动本机交互系统110与用户进行交互。
具体的,图8a的实施例中,身份特征中直接包括排序特征。这使得用户可以通过在语音信息中呼出身份特征,就能同时启动多个医疗设备100的交互系统110。并且,被启动的多个交互系统110中已经通过排序特征的比对推选出一台主机。之后,再采用类似图8的主机判定的交互时序来同用户进行交互。即基于排序特征被推选为主机的医疗设备100直接与用户进行交互,而其余医疗设备100则通过主机来间接与用户进行交互。本实施例同样节约了用户的语音信息中信息量,可以获得更智能的交互效果。
一种实施例请参见图10。图10为本申请多医疗设备共存时的语音交互方 法另一实施例的流程图。在本实施例中,身份特征包括单机特征、多机特征和声源距离条件。本方法包括:
S10h、获取环境语音信息及声源距离值。
具体的,本实施例在预分配身份特征时,还预设了声源距离条件。声源距离条件可以包括声源距离阈值和声源距离的比较。声源距离阈值为一个与距离相关的数值常量,声源距离的比较需要本机同环境内其余医疗设备100获得的声源距离值进行比较。医疗设备100在获取空间环境中的语音信息时,还同时通过传感器检测用户的距离,并测得发出声音用户的声源距离值。能检测用户声源距离值的传感器较多,本申请不具体限定传感器的测距方式,只要能检测到用户的声源距离值,都可以用到实施例多医疗设备共存时的语音交互方法中来。可以理解的,声源距离值也为一个与距离相关的数值常量。
S21h、确定到所述声源距离值满足所述声源距离条件,所述身份特征通过预分配获得。
具体的,在检测得到用户的声源距离值之后,医疗设备100将检测得到的与声源距离值与预设得到的声源距离阈值进行比较;或医疗设备100将检测得到的声源距离值与环境内其余医疗设备100所检测得到的声源距离值进行比对。因为声源距离值和声源距离阈值是两个与距离相关的数值常量之间的比较,因此能够较为迅速和直接的得出两个数值的大小,从而得到比较结果。而多个医疗设备100之间声源距离值的比较也可以迅速得到比较结果。当确定到声源距离值小于声源距离阈值,或本机获得的声源距离值相较于环境内其余医疗设备100的声源距离值都更小时,可以判定本机获得的声源距离值满足声源距离条件,并进入后续步骤。
S22h、判定所述语音信息中存在所述身份特征。
S30h、启动所述本机交互系统110与用户进行交互。
具体的,当确定到声源距离值小于声源距离阈值以后,医疗设备100判定用户与本机的距离在足够近的范围之内。当用户在空间环境中移动至距离某一医疗设备100足够近的范围内时,可以认为用户是有指向性的对该医疗设备100发出的语音信息。或者在多个医疗设备100中寻找距离用户最近的医疗设备100。此时医疗设备100自动判定该语音信息中存在身份特征,并启动 交互系统110进入单机交互模式与用户进行交互。
本实施例提供用户在空间环境内使用本方法进行交互的便利。因为当空间环境内医疗设备100数量较多时,用户要记住各个医疗设备100所对应的单机特征、多个多机特征、排序特征等明细,难免出现差错。虽然可以通过列表、医疗设备100上贴图示等方式进行提示,但当用户只需要对空间环境内的某一个特定医疗设备100进行交互时,可以通过图10的实施方式来获得更简洁、快速的启动操作。可以理解的,当用户移动至该医疗设备100足够近的范围内之后,通过呼出语音信息,且该语音信息可以不是身份特征,而是任意的语音信息,就可以非常方便的启动该医疗设备100的交互系统110,而免去了可能存在的大量的记忆或查表、查图示标记等繁琐的操作。
当然,声源距离条件可以同时包括声源距离阈值以及声源距离的比对两个条件。在同一空间环境下,如果医疗设备100判定本机的声源距离值最小,但该最小的声源距离值没有小于声源距离阈值,可以认为用户并不是针对该医疗设备100进行的启动操作,同时引入两个判定条件可以提高用户指向性的准确度,降低误操作的发生率。
对应这种较为简单的单机交互模式,还可以采用图11的实施例来进行。在图11的实施例中,身份特征还包括音量条件。本申请多医疗设备共存时的语音交互方法包括:
S10i、获取环境语音信息及音量值。
S21i、确定到所述音量值满足所述音量条件,所述身份特征通过预分配获得。
S22i、判定所述语音信息中存在所述身份特征。
S30i、启动所述本机交互系统110与用户进行交互。
具体的,本实施例方法的思路与图10的思路类似,其区别在于不采用对用户进行测距的方式,而是直接通过用户呼出的语音信息的音量大小来确定用户是否处于足够近的范围内。当确定到语音信息的音量大小超过音量阈值时,通常可以确定到用户距离医疗设备100处于足够近的范围内,并由此认定用户的语音信息中存在身份特征,启动本机的单机交互模式与用户进行交互。或医疗设备100确定到本机获得的用户语音信息音量值为环境内最高的情况, 也可以认定用户的语音信息中存在身份特征,启动本机的交互系统110与用户进行交互。可以理解的,图11方法的实施方式,其获得的效果也与图10的方法获得的效果相近。进一步的,图11的方法实施方式,还因为不需要对用户测距,而采用医疗设备100上本就具备的交互系统110来获得用户语音信息的音量值,而省去了传感器的配备,简化了医疗设备100的传感器使用数量,节约成本。
可以理解的,还存在一种实施例,同时结合声源距离条件和音量条件来判定用户的语音信息是否作为身份特征。因为不同用户在基于本方法与医疗设备100进行交互时,可能存在嗓门大小的差异,导致一些声音洪亮的用户在无意识的情况下触发身边的医疗设备100进入单机交互模式。或一些用户在呼出多机特征的同时没有注意与最近的医疗设备100保持足够间隔距离,使得该医疗设备100进入单机交互模式,并且同时启动了多机特征所对应的多个医疗设备100进入交互状态。这种情况在采用声源距离阈值或音量阈值作为判断条件的时候更为突出。因此,可以设定声源距离条件和音量条件结合判断用户的语音信息是否为身份特征。即用户需要在一定距离范围内,且音量值达到足够高的情况下才能触发身份特征中声源距离条件和音量条件的功能,以此避免误触发的现象发生。
当然,如上述的一些实施例,在图10和图11的实施例中,也存在不区分单机特征和多机特征的情况。只要语音信息的声源距离值和/或音量值达到条件,都可以对应启动医疗设备100的交互系统110进行交互,而不用再对单机或多机进行区分。可以理解的,在声源距离条件和音量条件为比对后取最高值的情况下,都只会启动一台医疗设备100的交互系统110与用户进行交互,而不会产生逻辑混乱的情况。
请参见图12的实施例。图12示意了图1所示的多医疗设备共存时的语音交互方法另一实施例的流程图。在图12的实施例中,医疗设备100在启动本机交互系统110与用户进行交互之后,还包括:
S40、生成并展示反馈信息以显示本机交互系统110已启动。
具体的,用户在通过身份特征启动医疗设备100的交互系统110之后,存在检验自己呼出的身份特征是否被医疗设备100有效接收的需求。反馈信 息可以是视觉反馈信息,也可以是听觉反馈信息。因为交互系统110如果同时对用户进行反馈,用户并不容易通过语音反馈来快速、批量的检测到目标医疗设备100是否被启动。此时,可以采用上述的交互时序来依次提供听觉反馈信息,进而方便用户的接收。更便捷的,医疗设备100可以通过设置于本机上的显示屏、指示灯等视觉反馈装置,来生成并展示反馈图像、反馈灯光等视觉反馈信息,对用户启动视觉反馈,进而显示本机的交互系统110已经启动交互功能。即当空间环境内存在多个医疗设备100的情况下,被启动交互功能的医疗设备100可以通过图案、灯光等信息对用户进行视觉反馈,提供用户快速确认自己发出的身份特征是否被目标医疗设备100有效接收的便利。
可以理解的,当用户发现自己呼出的身份特征并没有被全部目标医疗设备100接收到,即用户没能通过身份特征启动到目标医疗设备100时,用户可以通过重新呼出身份特征等方式来增补启动目标医疗设备100,再进行批量的交互,提高本申请方法的可靠性。
请参阅图13的实施例。图13也为图1所示的多医疗设备共存时的语音交互方法另一实施例的流程图。在图13的实施方式中,本方法包括:
S10j、获取环境内语音信息。
S20j、确定到所述语音信息中存在所述身份特征,所述身份特征通过预分配获得。
S30j、启动本机交互系统110与用户进行交互。
S51j、获取环境内语音信息。
S52j、确定到所述语音信息中存在退组信息。
S53j、退出所述本机交互系统并停止与用户进行交互。
具体的,在启动本机交互系统110与用户进行交互之后,本实施例提供一种退出机制。当用户确定到自己因为身份特征呼出错误,启动到非目标的医疗设备100之后,可以通过呼出的退组信息来指定某一个或多个医疗设备100退出当前交互环节。通过本实施例方法的执行,提供了用户在启动医疗设备100的交互系统110之后,修正自己语音操作所对应的目标医疗设备100明细的便捷。或者,用户在进行批量交互的同时,如果需要对处于多机交互模式中的部分医疗设备100进行进一步的交互操作,也可以通过呼出退组信息来 将不需要进行进一步交互的医疗设备100排除到批量交互的序列之外。由此,本实施例提供了用户快捷进行批量交互,同时对当前正在交互的医疗设备100进行二次挑选的便利。可以理解的,图13的实施例还可以针对单机交互模式进行,从而结束用户与当前交互的医疗设备100的交互过程,进而启动下一次的交互操作。
一种实施例,退组信息可以包括预设的闲置时间阈值。即医疗设备100在一定时间段内没有与用户进行任何交互,且该时间段超过预设的闲置时间段,可以认定为用户已经没有与该医疗设备100进行交互,自动退出当前的交互状态,节约资源。
一种实施例,请参见图13a。包括:
S10k、获取环境内语音信息。
S20k、确定到所述语音信息中存在所述身份特征,所述身份特征通过预分配获得。
S30k、启动本机交互系统110与用户进行交互。
S40k、获取环境内语音信息;
S50k、确定到所述语音信息中存在组网信息;
S60k、基于所述组网信息启动所述环境内与所述组网信息匹配的其余医疗设备100的交互系统110;
S70k、控制本机交互系统110基于所述组网信息的交互时序与用户进行交互。
具体的,可以将本实施例理解为图13的反向组网实施例。图13是在医疗设备100的交互系统110进入交互状态之后,退出当前交互状态的情况。而本实施例提供医疗设备100接用户的组网信息后增补需要交互的医疗设备100的便捷。可以理解的,对于S20k步骤中的身份特征,不仅限于单机特征或多机特征。只要本机的交互系统110进入交互状态,就可以通过得到用户的组网信息来启动环境内与组网信息匹配的其余医疗设备100的交互系统110。这样,在用户需要在保留当前进入交互状态的医疗设备100的前提下,增补想要与之交互的其余医疗设备100来进行更大批量的交互。
相应的,对于组网信息所对应到的增补进来的医疗设备100,其交互时序 也需要与组网信息所匹配。即当前处于交互状态的医疗设备100需要与增补进来的医疗设备100重新分配交互时序。重新分配的交互时序可以在保留当前交互时序的前提下,对经组网信息增补进来的医疗设备100基于优先级、用户指定等顺序依次排在当前交互时序之后,形成重新分配的交互时序。还可以不保留当前交互时序的前提下,将增补的医疗设备100与当前正处于交互状态的医疗设备100一起进行优先级、用户指定等顺序的重新排列,来新创重新分配的交互时序。这些交互时序都属于组网信息的交互时序。
还有一种实施例,对于组网信息的交互时序,可以采用设定主机的方式来进行。即当前已经存在主机,或当前为单机交互的场景下,经组网信息补入的医疗设备100被自动定义为非主机,用户依然沿用当前的主机或将单机认定为主机,并基于该主机信息的交互时序进行交互。此时,组网信息的交互时序等同于主机判定的交互时序。当然,也可以在增补之后重新认定主机,即重新对进入交互状态的各个医疗设备100进行排序特征的比对,然后依据比对结果判定主机,此时组网信息的交互时序依然等同于主机判定的交互时序。
请看回图2,图2为本申请涉及的一种医疗设备100的示意图。在图2的实施例中,医疗设备100还包括处理器101、输入装置102、输出装置103和存储装置104。所述处理器101、输入装置102、输出装置103和存储装置104相互连接,其中,所述存储装置104用于存储计算机程序,所述计算机程序包括程序指令,所述处理器101被配置用于调用所述程序指令,执行上述的多医疗设备共存时的语音交互方法。
具体的,处理器101调用存储装置104中存储的程序指令,执行以下操作:
获取环境内语音信息;
确定到所述语音信息中存在所述身份特征,所述身份特征通过预分配获得;
启动本机交互系统110与用户进行交互。
可以理解的,本申请医疗设备100因为通过处理器101调用存储装置104的程序,可以执行上述的多医疗设备共存时的语音交互方法,从而在空间环境中存在多个医疗设备100的场景下,提供用户通过与本机对应的身份特征来 启动本机交互系统110与用户进行交互的便捷。同时避免了因为语音指令相近或相似,使得用户在与目标医疗设备100进行交互的过程中同一空间环境下非目标医疗设备100也激发交互并造成交互逻辑混乱的缺陷。
存储装置104可以包括易失性存储装置(volatile memory),例如随机存取存储装置(random-access memory,RAM);存储装置104也可以包括非易失性存储装置(non-volatile memory),例如快闪存储装置(flash memory),固态硬盘(solid-state drive,SSD)等;存储装置104还可以包括上述种类的存储装置的组合。
处理器101可以是中央处理器(central processing unit,CPU)。该处理器101还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。
请参见图14,图14为本申请提供的一种医疗系统200的示意图。在图14的实施例中,医疗系统200包括:
获取模块201,用于获取环境内语音信息;
分析模块202,用于确定到所述语音信息中存在所述身份特征,所述身份特征通过预分配获得;
控制模块203,用于启动本机交互系统110与用户进行交互。
可以理解的,本申请医疗系统200同样用于执行本申请多医疗设备共存时的语音交互方法。具体的,获取模块201启动监测空间环境内的语音信息。分析模块202对获取模块201监测得到的语音信息进行分析,当确定到语音信息中存在身份特征以后,分析模块202发送指令给控制模块203,以使得控 制模块203启动本机交互系统与用户进行交互。由此,当空间环境中存在多个医疗系统200的情况下,本申请医疗系统200可以通过确定用户呼出的语音信息中是否包含对应本机预分配获得的身份特征来开启与用户进行交互的功能,避免将用户未针对本机的语音指令错认为本机的指令,并启动本机相应的功能,造成用户语音指令的不明确。
一种实施例,所述医疗系统200还包括配对模块204,所述配对模块204用于获得预分配的身份特征。可以理解的,如上述的方法实施例,用户可以在出厂阶段就完成对配对模块204的身份特征的预分配,也可以在医疗系统200被放置于空间环境内,且空间环境内存在多个医疗系统200之后,再基于空间环境内的医疗系统200的具体分布和功能,通过配对模块204来完成身份特征的预分配。
一种实施例,配对模块204在获取预分配的身份特征中包括单机特征和多机特征。分析模块202在确定到语音信息中存在身份特征时,还需要确定语音信息中包括的身份特征为单机特征或多机特征。
随后,控制模块203在启动本机交互系统100与用户进行交互时,当分析模块202确定到身份特征为单机特征时,控制模块203启动本机交互系统110与用户进行交互;
当分析模块202确定到身份特征为多机特征时,则控制模块203启动本机交互系统110,并基于交互时序与用户进行交互。
一种实施例参见图15,医疗系统200还包括排序模块205。排序模块205用于确定本机交互系统110与用户进行交互的时序。
在一种实施例中,排序模块205基于获取模块201获取到的时序信息以确定本机交互系统110与用户进行交互的时序。具体的,分析模块202用于在确定到语音信息中存在身份特征时,分析并获得的语音信息包括的多机特征以及时序信息;
控制模块203用于启动本机交互系统110基于交互时序与用户进行交互,排序模块205用于控制本机交互系统110基于时序信息的交互时序依次与用户进行交互。
一种实施例,排序模块205基于配对模块204获取到的排序特征以确定 本机交互系统110与用户进行交互的时序。具体的,配对模块204在获取预分配的多机特征中包括排序特征;
获取模块201获取环境内语音信息之后,分析模块202确定到语音信息中存在身份特征时,确定该身份特征为多机特征;
排序模块205还用于分析多机特征中的排序特征的优先级排序;
控制模块203在启动交互系统110基于交互时序与用户进行交互时,基于排序特征的交互时序控制多个医疗系统200依次与用户进行交互。
一种实施例,医疗系统200还包括判断模块206。判断模块206用于确定本机是否作为主机与用户进行交互。
在一种实施例中,判断模块206基于获取模块201获取到的主机信息以确定本机是否作为主机与用户进行交互。具体的,获取模块201在获取环境内语音信息时,用户呼出的语音信息包括多机特征以及主机信息;
分析模块202用于确定到身份特征为多机特征后,判断模块206用于基于主机信息判定本机是否为主机;
控制模块203用于基于主机判定的交互时序启动本机交互系统110与用户进行交互。
一种实施例,排序模块205基于配对模块204获取到的多机特征中的排序特征以确定本机是否作为主机与用户进行交互。具体的,配对模块204获取预分配的多机特征中包括排序特征;
分析模块202用于确定到语音信息中存在身份特征时,确定到身份特征为多机特征;
排序模块205还用于分析身份特征中的排序特征优先级的排序;判断模块206用于确定到本机的排序特征对应的优先级的排序,判定本机是否为主机;
控制模块203用于在交互系统110基于交互时序与用户进行交互后,基于主机判定的交互时序启动本机交互系统110与用户进行交互。
一种实施例,分析模块202用于确定到所述语音信息中存在所述身份特征时,确定到所述语音信息中还包括时序信息;
控制模块203用于启动本机交互系统110与用户进行交互时,基于所述 时序信息的交互时序与用户进行交互。
一种实施例,配对模块204分配获得的身份特征包括排序特征,排序模块205用于确定到所述语音信息中存在所述身份特征时,分析所述排序特征的排序;
控制模块203用于启动本机交互系统110与用户进行交互时,基于所述排序特征的交互时序与用户进行交互。
一种实施例,分析模块202用于确定到所述语音信息中存在所述身份特征时,确定到所述语音信息中还包括主机信息;
判断模块206用于基于所述主机信息判定本机是否为主机;
控制模块203用于启动本机交互系统110与用户进行交互时,基于所述主机判定的交互时序与用户进行交互。
Here, the identity feature assigned to the pairing module 204 includes a sorting feature;
when the sorting module 205 determines that the identity feature exists in the voice information, it compares the order of the local machine's sorting feature;
the judgment module 206 judges, based on the comparison result, whether the local machine is the host;
when the control module 203 starts the local interaction system 110 to interact with the user, the interaction with the user follows the interaction sequence given by the host determination.
In one embodiment, the control module 203 starting the local interaction system 110 to interact with the user according to the interaction sequence given by the host determination includes:
if the judgment module 206 judges that the local machine is the host, the control module 203 starts the local interaction system 110 to interact with the user directly;
if the judgment module 206 judges that the local machine is not the host, the control module 203 starts the local interaction system 110 to interact with the user indirectly through the host.
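The host rule above can be sketched as follows. Electing the host as the machine with the highest-priority sorting feature is one of the two options described (the other carries host information in the utterance itself); the numeric representation is an assumption.

```python
# After a multi-machine command, the host interacts with the user directly
# and every other machine interacts indirectly through the host.

def elect_host(devices):
    """devices: {name: sorting_feature_priority}; lowest value wins the host role."""
    return min(devices, key=lambda name: devices[name])

def interaction_channel(devices, me):
    host = elect_host(devices)
    return "direct" if me == host else f"via {host}"
```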
In one embodiment, the medical system 200 further includes a sound-source ranging module 207, which is used to detect a sound-source distance value. Specifically, the pre-assigned identity feature acquired by the pairing module 204 includes a sound-source distance condition;
when the acquisition module 201 acquires the voice information in the environment, the sound-source ranging module 207 acquires the sound-source distance value;
the analysis module 202 determines that the sound-source distance value satisfies the sound-source distance condition,
and accordingly judges that the identity feature exists in the voice information. It can be understood that the control module 203 subsequently starts the local interaction system 110 to interact with the user by voice based on the identity feature.
In one embodiment, the medical system 200 further includes a volume detection module 208, which is used to detect a volume value. Specifically, the pre-assigned identity feature acquired by the pairing module 204 includes a volume condition;
when the acquisition module 201 acquires the voice information in the environment, the volume detection module 208 acquires the volume value of the environmental voice information;
the analysis module 202 determines that the volume value of the voice information satisfies the volume condition,
and further judges that the identity feature exists in the voice information. It can be understood that the control module 203 subsequently starts the local interaction system 110 to interact with the user by voice based on the identity feature.
In one embodiment, the pre-assigned sound-source distance condition obtained by the pairing module 204 includes a sound-source distance threshold, or/and
the analysis module 202 determines that the local sound-source distance value is greater than the sound-source distance value of any other medical device in the environment.
Likewise, the pre-assigned volume condition obtained by the pairing module 204 includes a volume threshold, or
the analysis module 202 determines that the local volume value is greater than the volume value of any other medical device in the environment.
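The two condition forms above (an absolute threshold, or a comparison against the values measured at every other device) can be sketched together. The comparison direction (greater-than) follows the source text, and all numbers and names are invented for illustration.

```python
# A measured value (sound-source distance or volume) passes the pre-assigned
# condition if it meets the threshold and/or exceeds every other device's value.

def satisfies(value, threshold=None, others=None):
    """True if value meets the pre-assigned condition(s)."""
    if threshold is not None and value < threshold:
        return False
    if others is not None and any(value <= v for v in others):
        return False
    return True

def identity_feature_present(distance_ok, volume_ok):
    # The analysis module treats the identity feature as present only when the
    # measured values satisfy the pre-assigned condition(s).
    return distance_ok and volume_ok
```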
In one embodiment, the medical system 200 further includes a feedback module 209. The feedback module 209 is used to generate and present feedback information to indicate that the local interaction system 110 has been started. Specifically, after the control module 203 starts the local interaction system 110 to interact with the user, the feedback module 209 generates and presents signals such as a feedback image or feedback light to the user through a connected visual feedback device such as a display screen or indicator light, informing the user that the local interaction system 110 has entered the interaction state. Alternatively, the feedback module 209 provides auditory feedback to the user through the interaction system 110.
In one embodiment, after the control module 203 starts the local interaction system 110 to interact with the user, the acquisition module 201 continues to acquire voice information in the environment;
when the analysis module 202 determines that group-exit information exists in the voice information,
the control module 203 exits the local interaction system 110 and stops interacting with the user.
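The group-exit behavior can be sketched as a one-step state update: while interacting, the machine keeps listening, and a recognized exit phrase shuts its interaction system down. The phrase "exit group" is an illustrative stand-in for the group-exit information.

```python
# While interacting, an utterance containing the exit phrase ends interaction;
# otherwise the state is unchanged.

EXIT_PHRASE = "exit group"

def step(interacting, utterance):
    """Return the new interaction state after hearing one utterance."""
    if interacting and EXIT_PHRASE in utterance:
        return False  # quit the local interaction system, stop interacting
    return interacting
```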
In one embodiment, the medical system 200 further includes a networking module 210. After the control module 203 starts the local interaction system 110 to interact with the user, the acquisition module 201 continues to acquire voice information in the environment;
when the analysis module 202 determines that networking information exists in the voice information,
the networking module 210 starts, based on the networking information, the interaction systems 110 of the other medical devices 100 in the environment that match the networking information;
and the control module 203 further controls the local interaction system 110 to interact with the user based on the interaction sequence given by the networking information.
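The networking step can be sketched as grouping the devices that match the networking information and assigning them an interaction sequence. The device tags, the matching rule, and the use of sorted order as the sequence are all assumptions for illustration.

```python
# A networking phrase pulls matching devices into a group; each member then
# interacts according to the group's interaction sequence.

def form_group(utterance, devices):
    """devices: {name: tag}; devices whose tag appears in the utterance join."""
    members = sorted(name for name, tag in devices.items() if tag in utterance)
    # The interaction sequence here is simply the sorted member order.
    return {name: rank for rank, name in enumerate(members)}
```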
The medical system 200 of this application may be provided on a monitor or a central station. The monitor and the central station may further comprise multiple bodies, such as a host machine, a front-end device, and a remote server. Apart from the acquisition module 201, the ranging module 207, and the volume detection module 208, which need to be provided on the front-end device in order to acquire the user's voice information, the sound-source distance information, and the volume value, the distribution of the remaining functional modules of the medical system 200 across the multiple bodies is not particularly limited; they may run on any body at the front end, middle end, or back end. Because the monitor or central station executes the voice interaction method for the coexistence of multiple medical devices of this application through the medical system 200, it is able, when multiple medical systems 200 exist in the spatial environment, to enable interaction with the user by determining whether the voice information uttered by the user contains the identity feature corresponding to the local machine, thereby preventing a voice command not directed at the local machine from being mistaken for a local command and triggering the corresponding local function, which would make the user's voice commands ambiguous.
The embodiments described above do not limit the scope of protection of this technical solution. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the above embodiments shall be included within the scope of protection of this technical solution.

Claims (24)

  1. A voice interaction method for the coexistence of multiple medical devices, comprising:
    acquiring voice information in an environment;
    determining that an identity feature exists in the voice information, the identity feature being obtained through pre-assignment; and
    starting a local interaction system to interact with a user.
  2. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the identity feature includes a single-machine feature and a multi-machine feature, and the determining that the identity feature exists in the voice information comprises:
    determining that the voice information includes a single-machine feature or a multi-machine feature;
    and the starting a local interaction system to interact with a user comprises:
    when the identity feature is determined to be a single-machine feature, starting the local interaction system to interact with the user;
    when the identity feature is determined to be a multi-machine feature, starting the local interaction system and interacting with the user based on an interaction sequence.
  3. The voice interaction method for the coexistence of multiple medical devices according to claim 2, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    determining that the voice information further includes sequence information;
    and the starting the local interaction system and interacting with the user based on an interaction sequence comprises:
    interacting with the user based on the interaction sequence given by the sequence information.
  4. The voice interaction method for the coexistence of multiple medical devices according to claim 2, wherein the multi-machine feature includes a sorting feature, and the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    analyzing the order of the sorting feature in the multi-machine feature;
    and the starting the local interaction system and interacting with the user based on an interaction sequence comprises:
    interacting with the user based on the interaction sequence given by the sorting feature.
  5. The voice interaction method for the coexistence of multiple medical devices according to claim 2, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    determining that the voice information further includes host information;
    judging, based on the host information, whether the local machine is the host;
    and the starting the local interaction system and interacting with the user based on an interaction sequence comprises:
    interacting with the user based on the interaction sequence given by the host determination.
  6. The voice interaction method for the coexistence of multiple medical devices according to claim 2, wherein the multi-machine feature includes a sorting feature, and the determining that the identity feature exists in the voice information comprises:
    determining that the identity feature is a multi-machine feature;
    analyzing and comparing the order of the local machine's sorting feature;
    judging, based on the comparison result, whether the local machine is the host;
    and the starting the local interaction system and interacting with the user based on an interaction sequence comprises:
    interacting with the user based on the interaction sequence given by the host determination.
  7. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the voice information includes sequence information;
    and the starting a local interaction system to interact with a user comprises:
    interacting with the user based on the interaction sequence given by the sequence information.
  8. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the identity feature includes a sorting feature, and the determining that the identity feature exists in the voice information comprises:
    analyzing the order of the sorting feature;
    and the starting a local interaction system to interact with a user comprises:
    interacting with the user based on the interaction sequence given by the sorting feature.
  9. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the determining that the identity feature exists in the voice information comprises:
    determining that the voice information further includes host information;
    judging, based on the host information, whether the local machine is the host;
    and the starting a local interaction system to interact with a user comprises:
    interacting with the user based on the interaction sequence given by the host determination.
  10. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the identity feature includes a sorting feature, and the determining that the identity feature exists in the voice information comprises:
    analyzing and comparing the order of the local machine's sorting feature;
    judging, based on the comparison result, whether the local machine is the host;
    and the starting a local interaction system to interact with a user comprises:
    interacting with the user based on the interaction sequence given by the host determination.
  11. The voice interaction method for the coexistence of multiple medical devices according to any one of claims 5, 6, 9 and 10, wherein the starting a local interaction system to interact with a user comprises:
    if the local machine is judged to be the host, starting the local interaction system to interact with the user directly;
    if the local machine is judged not to be the host, starting the local interaction system to interact with the user indirectly through the host.
  12. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein the identity feature includes a sound-source distance condition or a volume condition, and the acquiring voice information in an environment comprises:
    acquiring the environmental voice information and simultaneously acquiring a sound-source distance value or a volume value of the voice information;
    and the determining that the identity feature exists in the voice information comprises:
    determining that the sound-source distance value satisfies the sound-source distance condition, or
    determining that the volume value satisfies the volume condition; and
    judging that the identity feature exists in the voice information.
  13. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein after the starting a local interaction system to interact with a user, the method further comprises:
    acquiring voice information in the environment;
    determining that group-exit information exists in the voice information; and
    exiting the local interaction system and stopping interaction with the user.
  14. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein after the starting a local interaction system to interact with a user, the method further comprises:
    acquiring voice information in the environment;
    determining that networking information exists in the voice information;
    starting, based on the networking information, the interaction systems of the other medical devices in the environment that match the networking information; and
    controlling the local interaction system to interact with the user based on the interaction sequence given by the networking information.
  15. The voice interaction method for the coexistence of multiple medical devices according to claim 1, wherein after the starting a local interaction system to interact with a user, the method further comprises:
    generating and presenting feedback information to indicate that the local interaction system has been started.
  16. A medical device, comprising a processor, an input apparatus, an output apparatus, and a storage apparatus that are connected to one another, wherein the storage apparatus is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to invoke the program instructions to execute the voice interaction method for the coexistence of multiple medical devices according to any one of claims 1 to 15.
  17. A medical system, comprising:
    an acquisition module, configured to acquire voice information in an environment;
    an analysis module, configured to determine that an identity feature exists in the voice information, the identity feature being obtained through pre-assignment; and
    a control module, configured to start a local interaction system to interact with a user.
  18. The medical system according to claim 17, further comprising a pairing module configured to obtain the pre-assigned identity feature.
  19. The medical system according to claim 18, further comprising a sorting module configured to determine the sequence in which the local interaction system interacts with the user.
  20. The medical system according to claim 18, further comprising a judgment module configured to determine whether the local machine interacts with the user as the host.
  21. The medical system according to claim 18, further comprising a sound-source ranging module configured to detect a sound-source distance value.
  22. The medical system according to claim 18, further comprising a volume detection module configured to detect a volume value.
  23. The medical system according to claim 18, further comprising a networking module configured to start the interaction systems of the other medical devices in the environment that match the networking information.
  24. The medical system according to claim 18, further comprising a feedback module configured to generate and present feedback information to indicate that the local interaction system has been started.
PCT/CN2019/084442 2019-04-26 2019-04-26 Voice interaction method for the coexistence of multiple medical devices, medical system, and medical device WO2020215295A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980092353.XA 2019-04-26 2019-04-26 Voice interaction method for the coexistence of multiple medical devices, medical system, and medical device (zh)
PCT/CN2019/084442 2019-04-26 2019-04-26 Voice interaction method for the coexistence of multiple medical devices, medical system, and medical device (zh)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/084442 2019-04-26 2019-04-26 Voice interaction method for the coexistence of multiple medical devices, medical system, and medical device (zh)

Publications (1)

Publication Number Publication Date
WO2020215295A1 true WO2020215295A1 (zh) 2020-10-29

Family

ID=72941277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084442 Voice interaction method for the coexistence of multiple medical devices, medical system, and medical device (zh) 2019-04-26 2019-04-26

Country Status (2)

Country Link
CN (1) CN113454732B (zh)
WO (1) WO2020215295A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104216689A (zh) * 2013-05-29 2014-12-17 上海联影医疗科技有限公司 Medical system, medical device, and method and apparatus for controlling a medical device
CN105206275A (zh) * 2015-08-31 2015-12-30 小米科技有限责任公司 Device control method, apparatus, and terminal
US20160104293A1 (en) * 2014-10-03 2016-04-14 David Thomas Gering System and method of voice activated image segmentation
CN205459559U (zh) * 2015-12-31 2016-08-17 重庆剑涛科技有限公司 Multifunctional medical monitoring system
CN109621194A (zh) * 2019-01-25 2019-04-16 王永利 Electronic acupuncture control method, electronic acupuncture control terminal, and electronic acupuncture device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5970457A (en) * 1995-10-25 1999-10-19 Johns Hopkins University Voice command and control medical care system
WO2011003353A1 (zh) * 2009-07-09 2011-01-13 广州广电运通金融电子股份有限公司 Visual self-service terminal, remote interactive self-service banking system, and service method
US9124694B2 (en) * 2012-08-08 2015-09-01 24/7 Customer, Inc. Method and apparatus for intent prediction and proactive service offering
CN103823967A (zh) * 2013-12-19 2014-05-28 中山大学深圳研究院 IMS-based digital home interactive medical system
JP6402748B2 (ja) * 2016-07-19 2018-10-10 トヨタ自動車株式会社 Voice dialogue device and speech control method
CN109429522A (zh) * 2016-12-06 2019-03-05 吉蒂机器人私人有限公司 Voice interaction method, apparatus, and system


Also Published As

Publication number Publication date
CN113454732B (zh) 2023-11-28
CN113454732A (zh) 2021-09-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19926113

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19926113

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 180322)
