CN111429876A - Disease symptom information acquisition system based on natural voice interaction - Google Patents

Disease symptom information acquisition system based on natural voice interaction

Info

Publication number
CN111429876A
Authority
CN
China
Prior art keywords
component
voice
client subsystem
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911299131.3A
Other languages
Chinese (zh)
Inventor
汤文巍
章智云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vhs Shanghai Health Technology Co ltd
Original Assignee
Vhs Shanghai Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vhs Shanghai Health Technology Co ltd
Priority to CN201911299131.3A
Publication of CN111429876A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G10L 17/22 Interactive procedures; Man-machine interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides a disease symptom information acquisition system based on natural voice interaction, comprising a client subsystem and a service terminal system that work together and are connected through the Internet to communicate with each other and process data. The client subsystem comprises a microphone component, a network communication component B and a loudspeaker component; the service terminal system comprises a network communication component C, a speech-to-text component, a natural semantic information extraction component, a natural language interaction component and a text-to-speech component. In a situation where users are many and doctors are few, the system saves a large amount of doctor-user inquiry and communication time and improves both the doctors' working efficiency and the users' diagnosis efficiency.

Description

Disease symptom information acquisition system based on natural voice interaction
Technical Field
The invention relates to the technical field of speech recognition and processing, and in particular to a disease symptom information acquisition system based on natural voice interaction.
Background
At present, disease symptom information is generally collected through a doctor's inquiry: the doctor asks questions, the user answers, and the doctor records the user's symptoms and disease information. This approach has the following drawbacks: 1. doctors differ in experience, so the collected information is easily incomplete; 2. a doctor's working efficiency is limited and cannot meet the needs of a large number of users.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a disease symptom information acquisition system based on natural voice interaction, so as to solve the problems raised in the background art.
The technical problem is solved by the following technical scheme. The disease symptom information acquisition system based on natural voice interaction comprises a client subsystem and a service terminal system, which work together and are connected through the Internet to communicate with each other and process data. The client subsystem comprises a microphone component, a network communication component B and a loudspeaker component; the service terminal system comprises a network communication component C, a speech-to-text component, a natural semantic information extraction component, a natural language interaction component and a text-to-speech component. A user wakes up the client subsystem with a wake-up phrase and, following the voice guidance played by the client subsystem, describes his or her diseases and symptoms in natural language; the service terminal system analyzes and processes the voice data stream provided by the client subsystem and extracts the disease and symptom information described by the user.
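For orientation only, the following minimal Python sketch (not part of the patent disclosure) models the service terminal system's processing chain with in-process stand-ins; all class, method and field names, the keyword matching and the example prompts are assumptions, and in a real deployment the four component methods would be backed by real ASR, NLU and TTS engines while network communication components B and C carry the byte streams between the two subsystems.

```python
from dataclasses import dataclass, field


@dataclass
class ServiceTerminalSystem:
    """One user utterance in, one guidance voice stream out."""
    collected: dict = field(default_factory=dict)  # information gathered so far

    def handle_voice_stream(self, voice_stream: bytes) -> bytes:
        text = self.speech_to_text(voice_stream)            # speech-to-text component
        self.collected.update(self.extract_info(text))      # natural semantic information extraction component
        prompt = self.next_guidance_prompt(self.collected)  # natural language interaction component
        return self.text_to_speech(prompt)                  # text-to-speech component

    # Trivial placeholders so the sketch runs end to end; real engines go here.
    def speech_to_text(self, voice_stream: bytes) -> str:
        return voice_stream.decode("utf-8", errors="ignore")

    def extract_info(self, text: str) -> dict:
        return {"symptom_name": "headache"} if "headache" in text.lower() else {}

    def next_guidance_prompt(self, collected: dict) -> str:
        if "symptom_name" not in collected:
            return "Please describe your main symptom."
        return "When did the symptom start, and how severe is it?"

    def text_to_speech(self, prompt: str) -> bytes:
        return prompt.encode("utf-8")


server = ServiceTerminalSystem()
reply = server.handle_voice_stream(b"I have had a headache since yesterday")
print(reply.decode("utf-8"))  # -> "When did the symptom start, and how severe is it?"
```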
The microphone component of the client subsystem is in a monitoring state by default. A user wakes up the client subsystem with the wake-up phrase, the client subsystem plays a guidance voice through the loudspeaker component, and the microphone component then enters the monitoring state; the user describes his or her diseases and symptoms in natural language according to the guidance voice prompt. The client subsystem collects the user's voice data stream through the microphone component and submits it to the network communication component B in real time; the network communication component B transmits the voice data stream to the network communication component C of the service terminal system through the Internet, and the client subsystem enters a waiting state; the network communication component C of the service terminal system submits the received voice data stream to the speech-to-text component in real time.
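As a rough illustration of this client-side flow (again not taken from the patent), the sketch below models the monitoring, wake-up, capture, wait and playback cycle with injected stand-ins for the microphone, loudspeaker and network communication component B; the wake-up phrase, turn limit and canned utterances are assumptions.

```python
def client_loop(listen, play, send_to_server, wake_phrase="hello assistant", max_turns=10):
    """Idle monitoring -> wake-up phrase -> guidance playback -> capture,
    upload, wait for reply, play reply -> back to monitoring."""
    while True:
        heard = listen()                              # microphone component, default monitoring state
        if heard is None:
            return                                    # device stopped
        if wake_phrase not in str(heard).lower():
            continue                                  # ignore speech until the wake-up phrase
        play("Please describe your disease and symptoms.")  # loudspeaker component plays guidance
        for _ in range(max_turns):
            voice_stream = listen()                   # microphone re-enters the monitoring state
            if voice_stream is None:
                break
            reply = send_to_server(voice_stream)      # component B -> Internet -> component C, client waits
            play(reply)                               # play the guidance returned by the service terminal


# Toy run with canned utterances instead of real audio devices:
utterances = iter(["hello assistant", "I have a headache", None])
client_loop(listen=lambda: next(utterances, None), play=print,
            send_to_server=lambda stream: f"(guidance prompt for: {stream})")
```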
The speech-to-text component converts the natural voice data stream into natural language text, which it then submits to the natural semantic information extraction component in real time.
Based on natural semantic analysis, the natural semantic information extraction component extracts the information related to the user's symptoms and diseases, namely: personal information (sex, age); chief complaint information (symptom name, disease name, time of symptom onset, symptom severity); and lifestyle information (whether the user smokes regularly, drinks regularly, or often stays up late). The extracted information is finally submitted to the natural language interaction component.
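The slot structure described here can be pictured as a record of optional fields. The hypothetical sketch below, with an intentionally naive keyword extractor standing in for the real natural semantic analysis, shows one way to represent the three groups of information and to track which fields are still missing; the field names, keywords and regular expression are illustrative assumptions.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class CollectedInfo:
    # personal information
    sex: Optional[str] = None
    age: Optional[int] = None
    # chief complaint information
    symptom_name: Optional[str] = None
    disease_name: Optional[str] = None
    symptom_onset: Optional[str] = None
    symptom_severity: Optional[str] = None
    # lifestyle information
    smokes: Optional[bool] = None
    drinks: Optional[bool] = None
    stays_up_late: Optional[bool] = None

    def missing_fields(self) -> list:
        """Slots the natural language interaction component still has to ask about."""
        return [name for name, value in vars(self).items() if value is None]


def extract_info(text: str, info: CollectedInfo) -> CollectedInfo:
    """Naive keyword matching; a real system would use trained NLU models."""
    lowered = text.lower()
    if match := re.search(r"(\d{1,3})\s*years?\s*old", lowered):
        info.age = int(match.group(1))
    for symptom in ("headache", "cough", "fever", "chest pain"):
        if symptom in lowered:
            info.symptom_name = symptom
    if "since" in lowered:
        info.symptom_onset = lowered.split("since", 1)[1].strip()
    return info


info = extract_info("I am 42 years old and have had a headache since yesterday", CollectedInfo())
print(info.age, info.symptom_name, info.symptom_onset)   # 42 headache yesterday
print(info.missing_fields())                             # slots still to be asked about
```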
The natural language interaction component combines the extracted information with the context rules of the inquiry and, according to the information that is still missing, generates a guidance prompt to be returned to the user. The generated guidance prompt is then submitted to the text-to-speech component.
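One common way to realize such a rule, under the assumption of a fixed slot-filling order, is to ask for the first field that is still empty; the question texts and ordering below are illustrative, not taken from the patent.

```python
# Fixed inquiry order and guidance prompt for each slot (illustrative only).
QUESTION_ORDER = [
    ("sex", "Are you male or female?"),
    ("age", "How old are you?"),
    ("symptom_name", "Please describe your main symptom."),
    ("symptom_onset", "When did the symptom start?"),
    ("symptom_severity", "How severe is the symptom?"),
    ("smokes", "Do you smoke regularly?"),
    ("drinks", "Do you drink alcohol regularly?"),
    ("stays_up_late", "Do you often stay up late?"),
]


def next_guidance_prompt(collected: dict) -> str:
    """Return the question for the first missing slot, or a closing message."""
    for slot, question in QUESTION_ORDER:
        if collected.get(slot) is None:
            return question
    return "Thank you, your symptom information has been recorded."


print(next_guidance_prompt({"sex": "female", "age": 42}))
# -> "Please describe your main symptom."
```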
The text-to-speech component converts the received text into a corresponding voice data stream and submits it to the network communication component C, which transmits it to the client subsystem in real time through the Internet.
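Purely as an illustration of this real-time transmission step (the chunk size, audio format and callback are assumptions, not from the patent), the synthesized guidance audio could be pushed to the client in small pieces so playback can begin before the whole prompt has been sent.

```python
from typing import Callable, Iterator


def stream_audio(audio: bytes, chunk_size: int = 3200) -> Iterator[bytes]:
    """Split the synthesized guidance audio into fixed-size chunks
    (3200 bytes is 100 ms of 16 kHz, 16-bit mono PCM)."""
    for offset in range(0, len(audio), chunk_size):
        yield audio[offset:offset + chunk_size]


def send_to_client(audio: bytes, send_chunk: Callable[[bytes], None]) -> None:
    """`send_chunk` stands in for network communication component C's transport."""
    for chunk in stream_audio(audio):
        send_chunk(chunk)


send_to_client(b"\x00" * 8000, send_chunk=lambda c: print(len(c), "bytes sent"))
# -> 3200 bytes sent / 3200 bytes sent / 1600 bytes sent
```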
After receiving the guidance voice data stream returned by the service terminal system, the network communication component B of the client subsystem enters a playing state and the received voice data stream is played to the user through the loudspeaker component; the microphone component then re-enters the monitoring state to capture the user's voice again.
Compared with the prior art, the invention has the following beneficial effects: by adopting a man-machine natural voice interaction mode, the invention collects the user's symptom and disease information more simply and conveniently. In a situation where users are many and doctors are few, the system saves a large amount of doctor-user inquiry and communication time and improves both the doctors' working efficiency and the users' diagnosis efficiency.
Drawings
FIG. 1 is a schematic diagram of the architecture of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless otherwise expressly specified or limited, the terms "mounted", "connected" and "coupled" are to be construed broadly: a connection may be fixed, detachable or integral, and mechanical or electrical; it may be direct, indirect through an intermediate medium, or an internal connection between two elements.
Example 1
As shown in FIG. 1, the disease symptom information acquisition system based on natural voice interaction comprises a client subsystem and a service terminal system, which work together and are connected through the Internet to communicate with each other and process data. The client subsystem comprises a microphone component, a network communication component B and a loudspeaker component; the service terminal system comprises a network communication component C, a speech-to-text component, a natural semantic information extraction component, a natural language interaction component and a text-to-speech component. A user wakes up the client subsystem with a wake-up phrase and, following the voice guidance played by the client subsystem, describes his or her diseases and symptoms in natural language; the service terminal system analyzes and processes the voice data stream provided by the client subsystem and extracts the disease and symptom information described by the user. The microphone component of the client subsystem is in a monitoring state by default. The user wakes up the client subsystem with the wake-up phrase, the client subsystem plays a guidance voice through the loudspeaker component, and the microphone component then enters the monitoring state; the user describes his or her diseases and symptoms in natural language according to the guidance voice prompt. The client subsystem collects the user's voice data stream through the microphone component and submits it to the network communication component B in real time; the network communication component B transmits the voice data stream to the network communication component C of the service terminal system through the Internet, and the client subsystem enters a waiting state; the network communication component C of the service terminal system submits the received voice data stream to the speech-to-text component in real time. The speech-to-text component converts the natural voice data stream into natural language text, which it then submits to the natural semantic information extraction component in real time.
Example 2
As shown in FIG. 1, the disease symptom information acquisition system based on natural voice interaction comprises a client subsystem and a service terminal system, which work together and are connected through the Internet to communicate with each other and process data. The client subsystem comprises a microphone component, a network communication component B and a loudspeaker component; the service terminal system comprises a network communication component C, a speech-to-text component, a natural semantic information extraction component, a natural language interaction component and a text-to-speech component. A user wakes up the client subsystem with a wake-up phrase and, following the voice guidance played by the client subsystem, describes his or her diseases and symptoms in natural language; the service terminal system analyzes and processes the voice data stream provided by the client subsystem and extracts the disease and symptom information described by the user. Based on natural semantic analysis, the natural semantic information extraction component extracts the information related to the user's symptoms and diseases, namely: personal information (sex, age); chief complaint information (symptom name, disease name, time of symptom onset, symptom severity); and lifestyle information (whether the user smokes regularly, drinks regularly, or often stays up late). The extracted information is then submitted to the natural language interaction component. The natural language interaction component combines the extracted information with the context rules of the inquiry and, according to the information that is still missing, generates a guidance prompt to be returned to the user; the generated guidance prompt is then submitted to the text-to-speech component. The text-to-speech component converts the received text into a corresponding voice data stream and submits it to the network communication component C, which transmits it to the client subsystem in real time through the Internet. After receiving the guidance voice data stream returned by the service terminal system, the network communication component B of the client subsystem enters a playing state and the received voice data stream is played to the user through the loudspeaker component; the microphone component then re-enters the monitoring state to capture the user's voice again.
The invention is based on AI technology and collects the user's symptom and disease information through natural language interaction between the user and the AI system. It has the following advantages: 1. the machine works efficiently and can serve a large number of users 24 hours a day; 2. the machine follows a unified standard, so the collected information is comprehensive and accurate; 3. with numerous users, the machine reduces doctors' workload and saves scarce social resources.
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A disease symptom information acquisition system based on natural voice interaction, comprising a client subsystem and a service terminal system, characterized in that: the client subsystem and the service terminal system work together and are connected through the Internet to communicate with each other and process data; the client subsystem comprises a microphone component, a network communication component B and a loudspeaker component; the service terminal system comprises a network communication component C, a speech-to-text component, a natural semantic information extraction component, a natural language interaction component and a text-to-speech component; a user wakes up the client subsystem with a wake-up phrase and, following the voice guidance played by the client subsystem, describes his or her diseases and symptoms in natural language; and the service terminal system analyzes and processes the voice data stream provided by the client subsystem and extracts the disease and symptom information described by the user.
2. The system according to claim 1, characterized in that: the microphone component of the client subsystem is in a monitoring state by default; a user wakes up the client subsystem with the wake-up phrase, the client subsystem plays a guidance voice through the loudspeaker component, and the microphone component then enters the monitoring state; the user describes his or her diseases and symptoms in natural language according to the guidance voice prompt; the client subsystem collects the user's voice data stream through the microphone component and submits it to the network communication component B in real time; the network communication component B transmits the voice data stream to the network communication component C of the service terminal system through the Internet, and the client subsystem enters a waiting state; and the network communication component C of the service terminal system submits the received voice data stream to the speech-to-text component in real time.
3. The system according to claim 1, characterized in that: the speech-to-text component converts the natural voice data stream into natural language text and then submits it to the natural semantic information extraction component in real time.
4. The system according to claim 1, characterized in that: based on natural semantic analysis, the natural semantic information extraction component extracts the information related to the user's symptoms and diseases, namely: personal information (sex, age); chief complaint information (symptom name, disease name, time of symptom onset, symptom severity); and lifestyle information (whether the user smokes regularly, drinks regularly, or often stays up late); and finally submits the extracted information to the natural language interaction component.
5. The system according to claim 1, characterized in that: the natural language interaction component combines the extracted information with the context rules of the inquiry and, according to the information that is still missing, generates a guidance prompt to be returned to the user; the generated guidance prompt is then submitted to the text-to-speech component.
6. The system according to claim 1, characterized in that: the text-to-speech component converts the received text into a corresponding voice data stream and submits it to the network communication component C, and the network communication component C transmits it to the client subsystem in real time through the Internet.
7. The system according to claim 1, characterized in that: after receiving the guidance voice data stream returned by the service terminal system, the network communication component B of the client subsystem enters a playing state and the received voice data stream is played to the user through the loudspeaker component; the microphone component then re-enters the monitoring state to capture the user's voice again.
CN201911299131.3A 2019-12-17 2019-12-17 Disease symptom information acquisition system based on natural voice interaction Pending CN111429876A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911299131.3A CN111429876A (en) 2019-12-17 2019-12-17 Disease symptom information acquisition system based on natural voice interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911299131.3A CN111429876A (en) 2019-12-17 2019-12-17 Disease symptom information acquisition system based on natural voice interaction

Publications (1)

Publication Number Publication Date
CN111429876A true CN111429876A (en) 2020-07-17

Family

ID=71546934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911299131.3A Pending CN111429876A (en) 2019-12-17 2019-12-17 Disease symptom information acquisition system based on natural voice interaction

Country Status (1)

Country Link
CN (1) CN111429876A (en)

Citations (5)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853903A (en) * 2012-12-04 2014-06-11 天津市医学堂科技有限公司 Medical record information acquiring system
CN103839211A (en) * 2014-03-23 2014-06-04 合肥新涛信息科技有限公司 Medical history transferring system based on voice recognition
CN104485105A (en) * 2014-12-31 2015-04-01 中国科学院深圳先进技术研究院 Electronic medical record generating method and electronic medical record system
CN206021239U (en) * 2016-06-27 2017-03-15 好人生(上海)健康科技有限公司 It is specially adapted for the medical science point natural language interactive device that examines in advance
CN106650261A (en) * 2016-12-22 2017-05-10 上海智臻智能网络科技股份有限公司 Intelligent inquiry method, device and system

Similar Documents

Publication Publication Date Title
US20180137250A1 (en) Mobile health intelligent medical guide system and method thereof
CN103137129B (en) Audio recognition method and electronic installation
CN103186663B (en) A kind of network public-opinion monitoring method based on video and system
CN106548788B (en) Intelligent emotion determining method and system
CN107135247A (en) A kind of service system and method for the intelligent coordinated work of person to person's work
CN106504754A (en) A kind of real-time method for generating captions according to audio output
CN106354835A (en) Artificial dialogue auxiliary system based on context semantic understanding
CN111329494B (en) Depression reference data acquisition method and device
CN105427855A (en) Voice broadcast system and voice broadcast method of intelligent software
CN103093752A (en) Sentiment analytical method based on mobile phone voices and sentiment analytical system based on mobile phone voices
CN110489527A (en) Banking intelligent consulting based on interactive voice and handle method and system
CN102609460A (en) Method and system for microblog data acquisition
CN109634994A (en) A kind of the matching method for pushing and computer equipment and storage medium of resume and position
CN107480450A (en) A kind of intelligence point examines method and system
CN102404278A (en) Song request system based on voiceprint recognition and application method thereof
US11176126B2 (en) Generating a reliable response to a query
CN104142936A (en) Audio and video match method and audio and video match device
CN101588322B (en) Mailbox system based on speech recognition
CN109978016A (en) A kind of network user identity recognition methods
CN111681779A (en) Medical diagnosis system
US20230359817A1 (en) Identifying utilization of intellectual property
CN112542156A (en) Civil aviation maintenance worker card system based on voiceprint recognition and voice instruction control
CN111429876A (en) Disease symptom information acquisition system based on natural voice interaction
CN109300478A (en) A kind of auxiliary Interface of person hard of hearing
CN112672120B (en) Projector with voice analysis function and personal health data generation method

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200717)