CN108346435B - Voice simulation training system for pension nursing staff

Voice simulation training system for pension nursing staff

Info

Publication number
CN108346435B
Authority
CN
China
Prior art keywords
pin
control
chip
unit
voice data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710051203.7A
Other languages
Chinese (zh)
Other versions
CN108346435A (en)
Inventor
孟宪超
高迟
于耕农
王琳
刘琳琳
徐希晨
徐延华
李俊海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Derun Aged Care Training Co ltd
Original Assignee
Shandong Derun Aged Care Training Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Derun Aged Care Training Co ltd
Priority to CN201710051203.7A
Publication of CN108346435A
Application granted
Publication of CN108346435B

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 specially adapted for particular use
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00
    • G10L 25/75 - Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00 - G10L 21/00 for modelling vocal tract parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The embodiment of the application discloses a voice simulation training system for a pension caregiver, which is arranged in a human body training model. The system comprises a storage unit, a control unit and a playing unit. The storage unit is used for storing a plurality of voice data in advance; the control unit is used for detecting a first control instruction, selecting, based on the first control instruction, first voice data stored in the storage unit and corresponding to the first control instruction, and sending the first voice data to the playing unit; the playing unit is used for playing the first voice data. The plurality of voice data comprise voice data in a plurality of training scenarios, and the content of the plurality of voice data is associated with the attribute parameters set for the human body training model.

Description

Voice simulation training system for pension nursing staff
Technical Field
The application relates to the field of intelligent equipment, and in particular to a voice simulation training system for a pension caregiver.
Background
Since China entered the aging society in 1999, population aging has developed rapidly: the elderly population base is large, the rate of aging is fast, and the trend toward empty-nest households is increasingly obvious. "Empty-nest families", "four-two-one families" and families in which parents and children live apart are emerging in large numbers; family size is increasingly small, nuclear and empty-nested; the home-care function is gradually weakening; and the demand of the elderly for professional services in daily-life care, rehabilitation nursing, medical care, mental and cultural life and the like is increasingly prominent. The number of disabled and semi-disabled elderly people who need care keeps growing, care and nursing problems are increasingly prominent, and the demand for socialized elderly-care services is rising.
At present there is a huge shortage of professional elderly-care nursing personnel, and the contradiction between supply and demand is very prominent. The overall quality of the elderly-care nursing workforce is low, and its professional level, business capability and service quality cannot effectively meet the requirements of the people being served. In particular, caregivers cannot quickly adapt to the characteristics of irritable or self-defensive elderly people and therefore cannot serve them well as soon as possible. Accordingly, there is a need to provide a voice simulation training system for caregivers; no effective solution to this problem exists in the prior art.
Disclosure of Invention
In order to solve the existing technical problems, the embodiment of the application provides a voice simulation training system for a pension caregiver.
In order to achieve the above object, the technical solution of the embodiment of the present application is as follows:
the embodiment of the application provides a voice simulation training system for a pension caregiver, which is arranged in a human body training model; the system comprises: a storage unit, a control unit and a playing unit; wherein,
the storage unit is used for storing various voice data in advance;
the control unit is used for detecting a first control instruction, selecting first voice data stored in the storage unit and corresponding to the first control instruction based on the first control instruction, and sending the first voice data to the playing unit;
the playing unit is used for playing the first voice data;
the plurality of voice data comprise voice data in a plurality of training scenarios, and the content of the plurality of voice data is associated with the attribute parameters of the set human body training model.
In the above scheme, the system further comprises an interface unit, configured to establish a communication connection with a control device and obtain data sent by the control device;
the control unit is used for obtaining, through the interface unit, a plurality of voice data sent by the control device, and sending the plurality of voice data to the storage unit for storage; the interface unit is also used for obtaining the control parameters sent by the control device and sending the control parameters to the storage unit for storage.
In the above scheme, the control unit is configured to obtain the control parameter when obtaining the first voice data, and send the first voice data and the control parameter to the playing unit;
the playing unit is used for playing the first voice data based on the control parameter.
In the above scheme, the control unit comprises at least one sensor and a control chip; the control chip is used for generating and detecting a first control instruction based on a specific state of a specified sensor among the at least one sensor.
In the above scheme, the at least one sensor is embodied as a plurality of switch sensors; one end of each of the plurality of switch sensors is respectively connected with a trigger input pin of the control chip, and the other end of each of the plurality of switch sensors is respectively grounded; the control chip is used for generating and detecting a first control instruction based on the closed state of a specified switch sensor among the plurality of switch sensors.
In the above scheme, the control chip further includes: an audio output pin connected with the playing unit; and the chip selection pin, the data pin and the clock pin are connected with the storage unit.
In the above scheme, the storage unit comprises a storage chip; the first pin of the memory chip is connected with the chip selection pin of the control chip;
the second pin of the memory chip is connected with the data pin of the control chip, and the second pin is connected with the fifth pin through a fourth resistor;
the third pin, the seventh pin and the eighth pin of the memory chip are connected with the power output pin of the control chip;
the fourth pin of the memory chip is grounded;
and a sixth pin of the memory chip is connected with a clock pin of the control chip.
In the above scheme, the playing unit comprises a power amplification chip and a loudspeaker, and a first pin of the power amplification chip is grounded;
the second pin and the third pin of the power amplifier chip are grounded through a ninth capacitor;
the fourth pin of the power amplifier chip is connected with the audio output pin of the control chip through a fifth resistor and an eighth capacitor, and the fourth pin is connected with the fifth pin through a sixth resistor;
the eighth pin and the fifth pin of the power amplification chip are respectively connected with the loudspeaker;
the sixth pin of the power amplifier chip is connected with a power supply; after the tenth capacitor and the eleventh capacitor are connected in parallel, the first ends of the tenth capacitor and the eleventh capacitor are connected with the sixth pin, and the second ends of the tenth capacitor and the eleventh capacitor are grounded;
and the seventh pin of the power amplifier chip is grounded.
In the above scheme, the control chip further comprises a data positive signal pin and a data negative signal pin which are connected with the interface unit.
In the above scheme, the interface unit comprises an interface chip, and a first pin of the interface chip is connected with a power supply through a diode;
the second pin of the interface chip is connected with the data positive signal pin of the control chip;
and a third pin of the interface chip is connected with a data negative signal pin of the control chip.
The voice simulation training system for the pension caregiver provided by the embodiment of the application is arranged in a human body training model; the system comprises: a storage unit, a control unit and a playing unit; the storage unit is used for storing a plurality of voice data in advance; the control unit is used for detecting a first control instruction, selecting, based on the first control instruction, first voice data stored in the storage unit and corresponding to the first control instruction, and sending the first voice data to the playing unit; the playing unit is used for playing the first voice data; the plurality of voice data comprise voice data in a plurality of training scenarios, and the content of the plurality of voice data is associated with the attribute parameters (such as gender) set for the human body training model. By adopting the technical solution of the embodiments of the application, the voice simulation training system is arranged in the human body training model and stores voice data for a plurality of training scenarios, so that different voice data can be played according to the scenario requirements that have been set. When a caregiver trains with the human body training model, scenarios that are closer to real situations can be presented, which effectively improves the caregiver's nursing skills and the efficiency of skill improvement.
Drawings
Fig. 1 is a schematic diagram of a composition structure of a voice simulation training system for a pension caregiver according to an embodiment of the present application;
fig. 2 is a schematic diagram of another composition structure of a voice simulation training system for a pension caregiver according to an embodiment of the present application;
fig. 3 is a schematic diagram of one implementation of a control unit in a pension caregiver voice simulation training system according to an embodiment of the present application;
fig. 4 is a schematic diagram of one implementation of a storage unit in a pension caregiver voice simulation training system according to an embodiment of the present application;
fig. 5 is a schematic diagram of one implementation of a playing unit in a pension caregiver voice simulation training system according to an embodiment of the present application;
fig. 6 is a schematic diagram of one implementation of an interface unit in a pension caregiver voice simulation training system according to an embodiment of the present application.
Detailed Description
The application will be described in further detail with reference to the accompanying drawings and specific examples.
Embodiments of the present application provide a voice simulation training system for a pension caregiver (referred to simply as "the system" in the following embodiments), which is arranged in a human body training model. The human body training model can be used for pre-job training of pension caregivers, for example: training how to use a hot-water bag, how to help the elderly person turn over, lie on one side or move toward the head of the bed, how to pat the back to help expel phlegm, how to replace a urine collecting bag, how to change clothes, how to clean the oral cavity, how to wash hair in bed, how to transfer the elderly person from the bed to a wheelchair, and other actions. It can be understood that the human body training model is similar to a real human body in height and weight and has movable joints, which makes it easier for a pension caregiver to simulate a real scenario during training with the model. On this basis, fig. 1 is a schematic diagram of a composition structure of a voice simulation training system for a pension caregiver according to an embodiment of the present application; as shown in fig. 1, the system includes: a storage unit 12, a control unit 11 and a playing unit 13; wherein the storage unit 12 is configured to store a plurality of voice data in advance;
the control unit 11 is configured to detect a first control instruction, select first voice data corresponding to the first control instruction stored in the storage unit 12 based on the first control instruction, and send the first voice data to the playing unit 13;
the playing unit 13 is configured to play the first voice data;
the plurality of voice data comprise voice data in a plurality of training scenarios, and the content of the plurality of voice data is associated with the attribute parameters of the set human body training model.
In this embodiment, on the one hand, the storage unit 12 stores in advance voice data for a plurality of training scenarios, where the voice data are the words an elderly person might say while a caregiver is taking care of him or her in those scenarios; the voice data cover at least one of the following training scenarios: using a hot-water bag, turning over, lying on one side, moving toward the head of the bed, patting the back to expel phlegm, replacing a urine collecting bag, changing clothes, cleaning the oral cavity, washing hair in bed, transferring from the bed to a wheelchair, and the like. On the other hand, the content of the voice data is associated with the attribute parameters set for the human body training model. The attribute parameters include a gender attribute parameter: for example, when the gender attribute parameter is set to male, the voice data played by the playing unit 13 are voice data within the male audio parameter range, so that the sound heard by the user is a male voice; correspondingly, when the gender attribute parameter is set to female, the voice data played by the playing unit 13 are voice data within the female audio parameter range, so that the sound heard by the user is a female voice, which brings the training closer to a real usage scenario. The attribute parameters of the human body training model may also include personality attribute parameters, such as at least one of the following: a mature personality, a dependent personality, a self-defensive personality, an irritable personality and the like. Voice data matching both the personality attribute parameters and the training scenarios can be prestored in the storage unit 12, so that a caregiver can train for elderly people of different personalities in different scenarios, adapt during training to the characteristics and language habits of different elderly people, shorten the adjustment period once in real contact with the elderly, and serve them better.
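For illustration only, and not as part of the disclosed embodiment, the C sketch below shows one way such clips could be indexed in the storage unit: each clip is tagged with a training scenario, a gender and a personality so that the control logic can pick a clip matching the configured attribute parameters. All type names, field names and table values are assumptions introduced for this sketch.

```c
/* Illustrative sketch: indexing stored voice clips by scenario, gender and
 * personality. Names and values are assumptions, not taken from the patent. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

typedef enum { SCENE_HOT_WATER_BAG, SCENE_TURN_OVER, SCENE_ORAL_CARE,
               SCENE_CHANGE_URINE_BAG, SCENE_BED_TO_WHEELCHAIR } scene_t;
typedef enum { GENDER_MALE, GENDER_FEMALE } gender_t;
typedef enum { PERS_MATURE, PERS_DEPENDENT, PERS_DEFENSIVE, PERS_IRRITABLE } personality_t;

typedef struct {
    scene_t       scene;        /* training scenario the clip belongs to    */
    gender_t      gender;       /* matches the model's gender attribute     */
    personality_t personality;  /* matches the model's personality setting  */
    uint32_t      flash_addr;   /* start address of the clip in SPI flash   */
    uint32_t      length;       /* clip length in bytes                     */
} voice_clip_t;

/* Return the first clip matching the configured attribute parameters, or NULL. */
static const voice_clip_t *select_clip(const voice_clip_t *table, size_t n,
                                       scene_t scene, gender_t g, personality_t p)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].scene == scene && table[i].gender == g && table[i].personality == p)
            return &table[i];
    return NULL;
}

int main(void)
{
    static const voice_clip_t table[] = {
        { SCENE_TURN_OVER, GENDER_FEMALE, PERS_IRRITABLE, 0x00010000u, 48000u },
        { SCENE_ORAL_CARE, GENDER_MALE,   PERS_DEPENDENT, 0x0001C000u, 64000u },
    };
    const voice_clip_t *c = select_clip(table, 2, SCENE_TURN_OVER, GENDER_FEMALE, PERS_IRRITABLE);
    if (c)
        printf("selected clip at flash offset 0x%lx\n", (unsigned long)c->flash_addr);
    return 0;
}
```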
Fig. 2 is a schematic diagram of another composition structure of a voice simulation training system for a pension caregiver according to an embodiment of the present application; as shown in fig. 2, the system further comprises an interface unit 14, configured to establish a communication connection with a control device and obtain data sent by the control device;
the control unit 11 is configured to obtain, through the interface unit 14, a plurality of voice data sent by the control device, and send the plurality of voice data to the storage unit 12 for storage; and is further configured to obtain, via the interface unit 14, control parameters sent by the control device, and send the control parameters to the storage unit 12 for storage.
Specifically, the interface unit 14 establishes a communication connection with the control unit 11 on the one hand, and on the other hand may establish a communication connection with a control device in a wired or wireless manner. The control device may be any electronic device, such as a personal computer (PC, Personal Computer), where the PC may be a desktop computer, a notebook computer, a tablet computer or a mobile phone, or an intelligent wearable device such as a smart watch or smart glasses. As an embodiment, the interface unit 14 may be implemented based on a universal serial bus (USB, Universal Serial Bus), i.e. the system may establish a communication connection with the control device via a USB cable. A user (e.g. a training teacher) may select, through a user interface (UI, User Interface) provided by the control device, the voice data to be written into the storage unit 12 of the system, and write the selected voice data into the storage unit 12 through the interface unit 14; accordingly, when the stored voice data need to be updated, the user (e.g. the training teacher) may also select the voice data to be updated through the UI provided by the control device and write the updated voice data into the storage unit 12, thereby updating the voice data in the storage unit 12. On the other hand, the user (e.g. the training teacher) may set, through the UI provided by the control device, control parameters, which are parameters related to the playing of the voice data by the playing unit 13 of the system, such as a volume parameter for playing the voice data, or attribute parameters such as continuous playing or intermittent playing; the control parameters are stored in the storage unit 12 of the system along with the voice data written through the interface unit 14. As an embodiment, the control unit 11 is configured to obtain the control parameters when obtaining the first voice data, and to send the first voice data and the control parameters to the playing unit 13; the playing unit 13 is configured to play the first voice data based on the control parameters, so that the playing unit 13 plays the first voice data at the volume represented by the control parameters and/or in the continuous or intermittent mode represented by the control parameters.
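As a loose sketch of what such stored control parameters could look like, the C fragment below models a per-clip playback configuration (volume, repeated versus single play, pause between repeats) and a stub routine that applies it. The field names, value ranges and the printing stand-ins for the audio driver are assumptions for this sketch and do not describe the actual parameter format of the system.

```c
/* Illustrative only: a possible layout for the stored control parameters and
 * how the playing unit might apply them. Names, ranges and the stub audio
 * helpers are assumptions, not taken from the patent. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  volume;    /* playback volume step, e.g. 0..30 (assumed range)            */
    bool     repeat;    /* true = keep replaying the clip, false = play it once        */
    uint16_t pause_ms;  /* gap between repeats: 0 = continuous, >0 = intermittent play */
} play_params_t;

/* Printing stand-ins for the playback unit's real audio driver. */
static void audio_set_volume(uint8_t v)    { printf("volume -> %u\n", (unsigned)v); }
static void audio_play_clip(uint32_t addr) { printf("play clip at flash offset 0x%lx\n",
                                                    (unsigned long)addr); }

static void play_with_params(uint32_t clip_addr, const play_params_t *p)
{
    audio_set_volume(p->volume);
    audio_play_clip(clip_addr);
    if (p->repeat)  /* looped replay itself is omitted in this sketch */
        printf("then repeat with a %u ms pause\n", (unsigned)p->pause_ms);
}

int main(void)
{
    play_params_t cfg = { .volume = 20, .repeat = false, .pause_ms = 500 };
    play_with_params(0x00010000u, &cfg);
    return 0;
}
```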
In this embodiment, the control unit 11 detects a first control instruction, and selects, based on the first control instruction, first voice data stored in the storage unit 12 and corresponding to the first control instruction. Specifically, the control unit 11 includes at least one sensor and a control chip; the control chip is configured to generate and detect the first control instruction based on a specific state of a specified sensor among the at least one sensor. As an embodiment, the at least one sensor is embodied as a plurality of switch sensors; different states of the at least one sensor may be represented by the open or closed states of the switch sensors, and the control chip may generate and detect the first control instruction based on the closed state of a specified switch sensor among the plurality of switch sensors. Specifically, the mapping relationship between control instructions and the open/closed states of the corresponding switch sensors may be preconfigured in the control chip; for example, control instruction 1 corresponds to the closed state of switch sensor 1, and control instruction 2 corresponds to the closed state of switch sensor 2. Of course, in other embodiments a control instruction may correspond to the closed states of at least two switch sensors, e.g. control instruction n corresponds to the closed states of switch sensor 1 and switch sensor 2, and so on. In a specific implementation, as one option, the open or closed state of a switch sensor may be triggered by a physical or virtual switch key arranged on the human body training model; during training, the trainee can press the physical or virtual switch key according to the actual training needs, thereby triggering the control chip to select the corresponding voice data and play it through the playing unit 13. As another option, the open or closed state of a switch sensor may be triggered remotely by the control device. In practical applications the switch sensor has a wireless communication function, which may be implemented by wireless communication technologies such as Wi-Fi (Wireless Fidelity) or Bluetooth; the switch sensor can exchange data with the control device through this wireless communication function, obtain an instruction sent by the control device for controlling the open or closed state of the switch sensor, and place the specified switch sensor in the closed state based on that instruction, whereupon the control chip generates the first control instruction based on the specified switch sensor being in the closed state.
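The mapping just described can be pictured with the small C sketch below, which polls hypothetical switch inputs K1..K7 and looks up the control instruction whose configured switch combination is closed. The table contents, the stubbed switch read and the instruction numbering are assumptions for illustration, not the voice chip's actual firmware.

```c
/* Illustrative sketch of the instruction/switch mapping described above;
 * not the actual firmware of the control chip. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SWITCHES 7

/* Stub standing in for the real switch-sensor read: pretend only K2 is closed. */
static bool switch_is_closed(int k) { return k == 2; }

typedef struct {
    uint8_t closed_mask;    /* bit i set => switch K(i+1) must be closed            */
    uint8_t instruction_id; /* control instruction generated for that combination   */
} instruction_map_t;

static const instruction_map_t MAP[] = {
    { 0x01, 1 },  /* K1 closed        -> control instruction 1                     */
    { 0x02, 2 },  /* K2 closed        -> control instruction 2                     */
    { 0x03, 3 },  /* K1 and K2 closed -> control instruction 3 (multi-switch case) */
};

/* Poll the switches and return the matching instruction id, or 0 if none. */
static int detect_instruction(void)
{
    uint8_t state = 0;
    for (int k = 0; k < NUM_SWITCHES; k++)
        if (switch_is_closed(k + 1))
            state |= (uint8_t)(1u << k);

    for (size_t i = 0; i < sizeof MAP / sizeof MAP[0]; i++)
        if (state != 0 && state == MAP[i].closed_mask)
            return MAP[i].instruction_id;
    return 0;
}

int main(void)
{
    printf("detected control instruction %d\n", detect_instruction());
    return 0;
}
```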
In this embodiment, the control chip may be implemented by at least one integrated circuit chip and may have a plurality of pins. Among these pins, trigger input pins are connected to one end of each of the plurality of switch sensors and are used to obtain the open or closed states of the switch sensors. The control chip also includes an audio output pin connected to the playing unit 13, and a chip select pin, a data pin and a clock pin connected to the storage unit 12. Specifically, after the voice data have been written into the storage unit 12 through the interface unit 14, the mapping relationship between the voice data and the corresponding control instructions is configured in the control chip, which can also be understood as the correspondence between the voice data and the closed states of the corresponding switch sensors. Thus, when the control chip detects that one or more particular switch sensors are in the closed state, it obtains the corresponding voice data from the storage unit 12 through the chip select pin, the data pin and the clock pin, and then sends the voice data to the playing unit 13 through the audio output pin for playing. In practical applications, the stored correspondence relates the information of the voice data to the closed states of the corresponding switch sensors, where the information of the voice data may specifically be the storage location of the voice data. The chip select pin is used for data interaction with the storage unit 12 and for pointing to the location of the corresponding voice data; the data pin is used for obtaining the voice data at the storage location pointed to via the chip select pin; the clock pin is used for outputting a clock signal to the storage unit 12. The audio output pins may specifically include a first audio output pin for left-channel output and a second audio output pin for right-channel output.
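For a rough idea of what obtaining voice data over the chip select, data and clock lines can look like at the signal level, the C sketch below issues the standard SPI NOR flash READ (0x03) command at a clip's start address. The pin-level helpers are printing stubs introduced for this sketch; in the described embodiment the voice chip performs this flash access internally.

```c
/* Illustrative sketch of a read over the chip-select / clock / data lines.
 * The pin helpers are stand-ins, not the control chip's real interface. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Printing stand-ins for the real chip-select / clock / data pin operations. */
static void    spi_cs_low(void)          { printf("CS low\n"); }
static void    spi_cs_high(void)         { printf("CS high\n"); }
static uint8_t spi_transfer(uint8_t out) { printf("xfer 0x%02X\n", out); return 0x00; }

/* Read `len` bytes of voice data starting at 24-bit flash address `addr`. */
static void flash_read_clip(uint32_t addr, uint8_t *buf, size_t len)
{
    spi_cs_low();                        /* select the memory chip (pin 1 of the 25Q32)   */
    spi_transfer(0x03);                  /* standard SPI NOR flash READ command           */
    spi_transfer((uint8_t)(addr >> 16)); /* 24-bit start address, most significant first  */
    spi_transfer((uint8_t)(addr >> 8));
    spi_transfer((uint8_t)(addr >> 0));
    for (size_t i = 0; i < len; i++)
        buf[i] = spi_transfer(0xFF);     /* dummy bytes clock the stored data out         */
    spi_cs_high();                       /* deselect the memory chip                      */
}

int main(void)
{
    uint8_t buf[4];
    flash_read_clip(0x00010000u, buf, sizeof buf);
    return 0;
}
```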
Fig. 3 is a schematic diagram of one implementation of a control unit in a pension caregiver voice simulation training system according to an embodiment of the present application. Taking a control chip implemented by the voice chip JQ8900 as an example, as shown in fig. 3, the control unit comprises the control chip and seven switch sensors. The control chip has 24 pins, denoted pin 1, pin 2, ..., pin 24; the switch sensors are denoted K1, K2, ..., K7. Pins 8, 9, 15, 16, 19, 18 and 17 of the control chip serve as trigger input pins and are respectively connected to one end of the switch sensors K1, K2, ..., K7, whose other ends are grounded. In this example, the first capacitor C1 and the second capacitor C2 perform a filtering function. One end of the first capacitor C1 is connected to pin 3 and the other end to pin 5, where pin 3 outputs 3.3 volts (V) at 100 milliamps (mA) and can be understood as the V3 end, and pin 5 is grounded. One end of the second capacitor C2 is connected to pin 4 and the other end is grounded; pin 4 is the power supply pin of the control chip and is connected to a power supply (VMCU) to obtain a voltage of 2.8-5.5 V. The resistor R1 and the resistor R2 are current-limiting resistors connected to pin 6 and pin 7 respectively, where pin 6 is the serial port transmitting pin of the control chip and pin 7 is the serial port receiving pin; they are intended to be connected to the receiving pin (RX) and transmitting pin (TX) of a micro control unit (MCU), respectively. Since no MCU is provided in this example, pin 6 and pin 7 are left unconnected.
One end of the third capacitor C3 is connected to pin 1, and one end of the fourth capacitor C4 is connected to pin 2; pin 1 is the right-channel audio output of the digital-to-analog converter (DAC) of the control chip, and pin 2 is the left-channel DAC audio output. The other end of the third capacitor C3 and the other end of the fourth capacitor C4 are connected together to form the audio output pin of the control chip, which is connected to the playing unit. Pin 24 is grounded as the audio ground, i.e. the analog ground. Pin 23 is the audio decoupling end and is connected to one end of the fifth capacitor C5, i.e. the fifth capacitor C5 is the audio decoupling capacitor. Pin 22 is the real-time clock (RTC) power supply pin; since this example has no clock function, pin 22 is connected to one end of the sixth capacitor C6, and the other ends of the fifth capacitor C5 and the sixth capacitor C6 are grounded. In this example, taking an interface unit implemented through a USB interface as an example, pin 20 is the positive signal (DM) end of the USB, i.e. the D+ end, connected to the USB-DM line of the USB circuit, and pin 21 is the negative signal (DP) end of the USB, i.e. the D- end, connected to the USB-DP line of the USB circuit. Pin 12 is the chip select pin connected to the storage unit, pin 13 is the data pin connected to the storage unit, and pin 14 is the clock pin connected to the storage unit. Pin 11 serves as the busy-signal end and is connected to one end of the third resistor R3, whose other end is connected to one end of the light-emitting diode D1, the other end of which is grounded. The values of the first capacitor C1, the second capacitor C2, the third capacitor C3, the fourth capacitor C4, the fifth capacitor C5 and the sixth capacitor C6 may be 105 picofarads, the values of the first resistor R1 and the second resistor R2 may be 1 kiloohm (1 kΩ), and the value of the third resistor R3 may be 330 Ω; of course, the values are not limited to the above.
Fig. 4 is a schematic diagram of one implementation of a storage unit in a pension caregiver voice simulation training system according to an embodiment of the present application. The storage unit comprises a memory chip and is implemented in this example by a repeatedly readable and writable memory circuit (SPI FLASH); the memory chip stores the voice data commonly spoken by elderly people of various personalities when using a hot-water bag, turning over, lying on one side and moving toward the head of the bed, having the back patted to expel phlegm, having a urine collecting bag replaced, changing clothes, having the oral cavity cleaned, having the hair washed in bed, or being moved from the bed to a wheelchair, and is read or written under the control of the main control circuit. This example is described taking the memory chip U2 of model 25Q32 as an example. Referring to figs. 3 and 4, pin 1 of the memory chip is connected to the chip select pin of the control chip, namely the SPI-CS end (i.e. pin 12) of the control chip; pin 2 is connected to the data pin of the control chip, namely the SPI-DIO end (i.e. pin 13), for the input/output of voice data. Pin 3, pin 7 and pin 8 serve as power supply pins of the memory chip and are connected to the power output pin of the control chip, namely the V3 end (i.e. pin 3) of the control chip, so as to obtain a voltage of 3.3 V. Pin 4 is grounded. Pin 5 is connected to one end of the fourth resistor R4, the other end of which is connected to the SPI-DIO end of the control chip, i.e. the other end of the fourth resistor R4 is connected to pin 2 of the memory chip. Pin 6 is connected to the clock pin of the control chip, namely the SPI-CLK end (i.e. pin 14) of the control chip. Pin 7 and pin 8 are shorted together and connected to one end of the seventh capacitor C7, whose other end is grounded. The value of the fourth resistor R4 may be 100 Ω, and the value of the seventh capacitor C7 may be 105 picofarads; the values are not limited to the above.
Fig. 5 is a schematic diagram of one implementation of a playing unit in a pension caregiver voice simulation training system according to an embodiment of the present application. The playing unit is implemented by a power amplifier circuit comprising a power amplifier chip and a loudspeaker; in this example the power amplifier chip is illustrated by the chip U3 of model 8002A. As shown in figs. 3 and 5, pin 1 of the power amplifier chip is grounded; pin 2 and pin 3 are connected to one end of the ninth capacitor C9, whose other end is grounded. Pin 4 is connected to one end of the fifth resistor R5 and one end of the sixth resistor R6 respectively; the other end of the fifth resistor R5 is connected to one end of the eighth capacitor C8, and the other end of the eighth capacitor C8 is connected to the audio output pin of the control chip and serves as the audio input port, so that the audio data reach pin 4 through the eighth capacitor C8 and the fifth resistor R5. The other end of the sixth resistor R6 is connected to pin 5; pin 5 and pin 8 serve as the power amplifier outputs and are connected to the two ends of the loudspeaker respectively, so that the audio data are output and played through the loudspeaker. Pin 6 is connected to a power supply (VMCU). The tenth capacitor C10 and the eleventh capacitor C11 are connected in parallel between the power supply (VMCU) and pin 7, and pin 7 is grounded. The value of the fifth resistor R5 may be 20 kΩ and the value of the sixth resistor R6 may be 47 kΩ; the values of the eighth capacitor C8 and the ninth capacitor C9 may be 105 picofarads; the value of the tenth capacitor C10 may be 104 picofarads and the value of the eleventh capacitor C11 may be 475 picofarads; the values are not limited to the above.
As one implementation, fig. 6 is a schematic diagram of an implementation of an interface unit in a pension caregiver voice simulation training system according to an embodiment of the present application; in this example the interface unit is implemented as a USB interface, using the USB interface circuit shown in fig. 6. As shown in figs. 3 and 6, in the USB interface circuit, pin 1 is connected to one end of the second diode D2, the other end of which is connected to a power supply (VMCU); pin 2 is connected to the data positive signal pin of the control chip, namely the USB-DM end (i.e. pin 20) of the control chip; pin 3 is connected to the data negative signal pin of the control chip, namely the USB-DP end (i.e. pin 21) of the control chip; pin 4 is left unconnected and the remaining pins are grounded.
By adopting the technical solution of the embodiments of the present application, the voice simulation training system is arranged in the human body training model and stores voice data for a plurality of training scenarios, so that different voice data can be played according to the scenario requirements that have been set. When a caregiver trains with the human body training model, scenarios that are closer to real situations can be presented, which effectively improves the caregiver's nursing skills and the efficiency of skill improvement.
In the embodiments provided in the present application, it should be understood that the disclosed system may be implemented in other manners. The system embodiment described above is merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the functions of the above system embodiments may be implemented by hardware related to program instructions; the above program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above embodiments; and the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
Alternatively, if the above integrated units of the present application are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disk.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A voice simulation training system for a pension caregiver, wherein the system is arranged in a human body training model; the system comprises: a storage unit, a control unit and a playing unit; wherein,
the storage unit is used for storing various voice data in advance;
the control unit is used for detecting a first control instruction, selecting first voice data stored in the storage unit and corresponding to the first control instruction based on the first control instruction, and sending the first voice data to the playing unit;
the playing unit is used for playing the first voice data;
the plurality of voice data comprise voice data in a plurality of training scenarios, and the content of the plurality of voice data is associated with the attribute parameters of the set human body training model;
the system also comprises an interface unit, configured to establish a communication connection with a control device and obtain data sent by the control device;
the control unit is used for obtaining, through the interface unit, a plurality of voice data sent by the control device, and sending the plurality of voice data to the storage unit for storage; the interface unit is also used for obtaining control parameters sent by the control device and sending the control parameters to the storage unit for storage; the control parameters are parameters related to the playing of the voice data by the playing unit of the system;
wherein the control unit comprises at least one sensor and a control chip; the at least one sensor is embodied as a plurality of switch sensors; the control chip is used for generating and detecting a first control instruction based on the closed state of a specified switch sensor among the plurality of switch sensors;
wherein the mapping relationship between control instructions and the open/closed states of the corresponding switch sensors can be pre-configured in the control chip.
2. The system according to claim 1, wherein the control unit is configured to obtain the control parameter when obtaining the first voice data, and send the first voice data and the control parameter to the playback unit;
the playing unit is used for playing the first voice data based on the control parameter.
3. The system of claim 1, wherein one end of each of the plurality of switch sensors is respectively connected to a trigger input pin of the control chip, and the other end of each of the plurality of switch sensors is respectively grounded.
4. The system of claim 3, wherein the control chip further comprises:
an audio output pin connected with the playing unit;
and the chip selection pin, the data pin and the clock pin are connected with the storage unit.
5. The system of claim 3 or 4, wherein the memory unit comprises a memory chip; the first pin of the memory chip is connected with the chip selection pin of the control chip;
the second pin of the memory chip is connected with the data pin of the control chip, and the second pin is connected with the fifth pin through a fourth resistor;
the third pin, the seventh pin and the eighth pin of the memory chip are connected with the power output pin of the control chip;
the fourth pin of the memory chip is grounded;
and a sixth pin of the memory chip is connected with a clock pin of the control chip.
6. The system of claim 3 or 4, wherein the playing unit comprises a power amplification chip and a speaker, and a first pin of the power amplification chip is grounded;
the second pin and the third pin of the power amplifier chip are grounded through a ninth capacitor;
the fourth pin of the power amplifier chip is connected with the audio output pin of the control chip through a fifth resistor and an eighth capacitor, and the fourth pin is connected with the fifth pin through a sixth resistor;
the eighth pin and the fifth pin of the power amplification chip are respectively connected with the loudspeaker;
the sixth pin of the power amplifier chip is connected with a power supply; after the tenth capacitor and the eleventh capacitor are connected in parallel, the first ends of the tenth capacitor and the eleventh capacitor are connected with the sixth pin, and the second ends of the tenth capacitor and the eleventh capacitor are grounded;
and the seventh pin of the power amplifier chip is grounded.
7. The system of claim 3, wherein the control chip further comprises a data positive signal pin and a data negative signal pin connected to the interface unit.
8. The system of claim 7, wherein the interface unit comprises an interface chip, a first pin of the interface chip being connected to a power supply through a diode;
the second pin of the interface chip is connected with the data positive signal pin of the control chip;
and a third pin of the interface chip is connected with a data negative signal pin of the control chip.
CN201710051203.7A 2017-01-23 2017-01-23 Voice simulation training system for pension nursing staff Active CN108346435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710051203.7A CN108346435B (en) 2017-01-23 2017-01-23 Voice simulation training system for pension nursing staff

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710051203.7A CN108346435B (en) 2017-01-23 2017-01-23 Voice simulation training system for pension nursing staff

Publications (2)

Publication Number Publication Date
CN108346435A CN108346435A (en) 2018-07-31
CN108346435B true CN108346435B (en) 2023-10-20

Family

ID=62974790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710051203.7A Active CN108346435B (en) 2017-01-23 2017-01-23 Voice simulation training system for pension nursing staff

Country Status (1)

Country Link
CN (1) CN108346435B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553555A (en) * 2020-03-27 2020-08-18 深圳追一科技有限公司 Training method, training device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203827504U (en) * 2014-03-05 2014-09-10 深圳雷柏科技股份有限公司 Bluetooth audio product capable of switching speech content
CN105702272A (en) * 2016-01-12 2016-06-22 深圳市德赛工业研究院有限公司 Audio play equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080161951A1 (en) * 2007-01-03 2008-07-03 Morris Jeffrey M Portable memory device with dynamically loaded audio content

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203827504U (en) * 2014-03-05 2014-09-10 深圳雷柏科技股份有限公司 Bluetooth audio product capable of switching speech content
CN105702272A (en) * 2016-01-12 2016-06-22 深圳市德赛工业研究院有限公司 Audio play equipment

Also Published As

Publication number Publication date
CN108346435A (en) 2018-07-31

Similar Documents

Publication Publication Date Title
CA2673644C (en) Situated simulation for training, education, and therapy
CN103877727B (en) A kind of by mobile phone control and the electronic pet that interacted by mobile phone
CN102527045B (en) Intelligent learning doll and realizing method and circuit system thereof
Das et al. Using smart phones for context-aware prompting in smart environments
CN104290097A (en) Learning type intelligent home social contact robot system and method
CN104102346A (en) Household information acquisition and user emotion recognition equipment and working method thereof
CN204654979U (en) A kind of Medical teaching bluetooth stethoscope
CN105046477A (en) Intelligent portable apparatus for recording and managing daily life
CN107330418B (en) Robot system
CN106603987A (en) Child companion robot
WO2018192567A1 (en) Method for determining emotional threshold and artificial intelligence device
CN101648079A (en) Emotional doll
CN104158963A (en) Intelligent facial expression expression system of intelligent mobile phone
CN108470567A (en) A kind of voice interactive method, device, storage medium and computer equipment
CN108874130A (en) Control method for playing back and Related product
CN108346435B (en) Voice simulation training system for pension nursing staff
CN204580129U (en) A kind of Intelligent bracelet
CN101360304B (en) Mobile terminal amusing method and corresponding mobile terminal
Zhao et al. Blossom: design of a tangible interface for improving intergenerational communication for the elderly
CN206481399U (en) A kind of children accompany robot
CN206210144U (en) Gesture language-voice converts cap
CN108091335A (en) A kind of real-time voice translation system based on speech recognition
CN110838357A (en) Attention holographic intelligent training system based on face recognition and dynamic capture
Scott, Gesture Use by Chimpanzees (Pan troglodytes): Differences Between Sexes in Inter- and Intra-Sexual Interactions
CN109558853A (en) A kind of audio synthetic method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant