CN106251872A - Medical record entry method and system - Google Patents
Medical record entry method and system (Download PDF / Info)
- Publication number
- CN106251872A CN106251872A CN201610645221.3A CN201610645221A CN106251872A CN 106251872 A CN106251872 A CN 106251872A CN 201610645221 A CN201610645221 A CN 201610645221A CN 106251872 A CN106251872 A CN 106251872A
- Authority
- CN
- China
- Prior art keywords: voice signal, entry, module, text, content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/26—Speech to text systems
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L15/10—Speech classification or search using distance or distortion measures between unknown speech and reference templates
- G10L15/20—Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
- G10L25/24—Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being the cepstrum
- G10L25/87—Detection of discrete points within a voice signal
Abstract
The present invention relates to the field of medical information and discloses a medical record entry method and system. The entry method comprises: determining the entry item to which the information to be entered corresponds; receiving a voice signal carrying that information; and generating the entry content of the entry item from the voice signal. This greatly simplifies the process of entering a medical record and is easy to operate, thereby improving entry efficiency.
Description
Technical field
The present invention relates to the field of medical information, and in particular to a medical record entry method and system.
Background
In the prior art, electronic medical records are entered mainly with a mouse and keyboard. Selectable content is chosen mostly by clicking, which is very inefficient when an item has many options. Unstructured, natural-language content is typed in by the doctor; this poses no difficulty for young doctors, but older, more senior doctors, limited by their computer skills, may find it laborious and hard to avoid typing errors, which further reduces working efficiency.
Summary of the invention
It is an object of the present invention to provide a medical record entry method and system that simplify the process of entering a medical record, are easy to operate, and improve entry efficiency.
To achieve this goal, the present invention provides a medical record entry method comprising: determining the entry item to which the information to be entered corresponds; receiving a voice signal carrying that information; and generating the entry content of the entry item from the voice signal.
Preferably, generating the entry content of the entry item from the voice signal includes: preprocessing the voice signal; extracting characteristic parameters from the preprocessed voice signal to generate characteristic parameter data; performing speech recognition on the characteristic parameter data to generate text string data corresponding to the voice signal; and generating the entry content from the text string data.
Preferably, generating the entry content from the text string data includes one of the following: taking the text string data itself as the entry content; or comparing the generated text string data with pre-stored text strings and selecting the pre-stored text string most similar to the generated text string data as the entry content.
Preferably, the preprocessing includes: filtering the voice signal; sampling and quantizing the filtered voice signal to obtain a digitized voice signal; pre-emphasizing the digitized voice signal; and performing endpoint detection on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments.
Preferably, the characteristic parameters extracted from the preprocessed voice signal include linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC).
Correspondingly, the present invention also provides a medical record entry system comprising: an entry item selection module for determining the entry item to which the information to be entered corresponds; a speech reception module for receiving the voice signal carrying that information; and a content generation module for generating the entry content of the entry item from the voice signal.
Preferably, the content generation module includes: a preprocessing module for preprocessing the voice signal; a feature extraction module for extracting characteristic parameters from the preprocessed voice signal to generate characteristic parameter data; a speech recognition module for performing speech recognition on the characteristic parameter data to generate text string data corresponding to the voice signal; and an adoption module for generating the entry content from the text string data.
Preferably, the adoption module generates the entry content in one of the following ways: taking the text string data itself as the entry content; or comparing the generated text string data with pre-stored text strings and selecting the pre-stored text string most similar to the generated text string data as the entry content.
Preferably, the preprocessing module includes: a filtering module for filtering the voice signal; a sampling and quantization module for sampling and quantizing the filtered voice signal to obtain a digitized voice signal; a pre-emphasis module for pre-emphasizing the digitized voice signal; and an endpoint detection module for performing endpoint detection on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments.
Preferably, the characteristic parameters that the feature extraction module extracts from the preprocessed voice signal include linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC).
Through the above technical scheme, a voice signal carrying the information to be entered is received and the entry content of the entry item is generated from it. This greatly simplifies the process of entering a medical record and is easy to operate, thereby improving entry efficiency.
Other features and advantages of the present invention are described in detail in the detailed description below.
Brief description of the drawings
The drawings provide a further understanding of the present invention and form part of the description. Together with the detailed description below they serve to explain the present invention, but do not limit it. In the drawings:
Fig. 1 is a flowchart of the medical record entry method provided according to one embodiment of the present invention;
Fig. 2 is a flowchart of generating the entry content of the entry item from the voice signal according to one embodiment of the present invention;
Fig. 3 is a flowchart of preprocessing the voice signal according to one embodiment of the present invention;
Fig. 4 is a flowchart of the medical record entry method provided according to another embodiment of the present invention;
Fig. 5 is a structural block diagram of the medical record entry system provided according to one embodiment of the present invention; and
Fig. 6 is a structural block diagram of the content generation module provided according to one embodiment of the present invention.
Description of reference numerals
51 entry item selection module    52 speech reception module
53 content generation module      531 preprocessing module
532 feature extraction module     533 speech recognition module
534 adoption module
Detailed description of the invention
The detailed embodiments of the present invention are described below with reference to the drawings. It should be understood that the embodiments described here merely illustrate and explain the present invention and do not limit it.
Fig. 1 is a flowchart of the medical record entry method provided according to one embodiment of the present invention. As shown in Fig. 1, the method may include: at step 11, determining the entry item to which the information to be entered corresponds; at step 12, receiving the voice signal carrying that information; and at step 13, generating the entry content of the entry item from the voice signal. This greatly simplifies the process of entering a medical record and is easy to operate, thereby improving entry efficiency.
The entry item corresponding to the information to be entered can be determined in several ways. In one way, it is determined according to an entry order pre-stored in the server: for example, after the doctor presses an entry key on a client (an electronic medical record terminal on a computer, tablet, mobile phone, etc.), the client sends the entry item information corresponding to that key to the server, and the server prepares to receive according to the pre-stored order of entry items, for example, but not limited to, the order of patient condition, examination result, and doctor's advice. Alternatively, the entry item can be determined from the doctor's instruction: for example, the doctor may press a button on the client corresponding to a certain kind of information (such as the patient's symptoms), or give the corresponding instruction by voice (such as speaking "patient's symptoms"), thereby determining the entry item. After the entry item is determined, the information corresponding to it can be received so that the corresponding content is entered. The above sends each entry from the client to the server to complete the entry of the whole record; alternatively, the whole record can be completed at the client first and then saved and sent to the server.
As shown in Fig. 2, generating the entry content of the entry item from the voice signal may include the following steps.
At step 21, the voice signal is preprocessed to remove interference, so that more accurate voice information can be obtained.
At step 22, characteristic parameters are extracted from the preprocessed voice signal to generate characteristic parameter data corresponding to the voice signal. The extracted characteristic parameters may include, but are not limited to, linear prediction cepstral coefficients (LPCC) and mel (Mel) frequency cepstral coefficients (MFCC).
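Both LPCC and MFCC extraction begin by cutting the digitized signal into short overlapping frames and applying a window function. That shared first stage can be sketched as follows; the frame length, hop size, and choice of a Hamming window are illustrative assumptions, since the text does not specify them.

```python
import math

def frames(samples, frame_len=256, hop=128):
    """Split a digitized voice signal into overlapping, Hamming-windowed
    frames, the common first stage of LPCC and MFCC extraction."""
    # Hamming window: w[n] = 0.54 - 0.46*cos(2*pi*n / (frame_len - 1))
    window = [0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    out = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        out.append([x * w for x, w in zip(frame, window)])
    return out
```

Each windowed frame would then be passed to the cepstral analysis proper (linear prediction for LPCC, or a mel filterbank for MFCC).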
At step 23, speech recognition is performed on the characteristic parameter data to generate text string data S0 corresponding to the voice signal. The recognition can be performed by the server or by the electronic medical record system: for example, the electronic medical record system may send the extracted characteristic parameter data to the server, which performs the recognition; or the electronic medical record system itself may have a speech recognition function.
At step 24, the entry content is generated from the text string data S0. If the recognition was performed at the server, the server can send the generated text string data S0 to the electronic medical record system, which then generates the entry content.
Generating the entry content from the text string data S0 includes one of the following: taking the text string data S0 itself as the entry content; or comparing the generated text string data S0 with pre-stored text strings and selecting the pre-stored text string most similar to S0 as the entry content.
For example, the text string data S0 can be used directly as the entry content. For instance, after the electronic medical record terminal receives the text string data S0 corresponding to the voice signal, it can check whether the current entry item is configured as a free-text entry item; if so, the received text string data S0 can be used as the entry content of the current entry item, simplifying the entry process for the electronic medical record, and can be displayed at the corresponding position of the record.
To improve the accuracy of record entry, the current entry item can also be configured as a selective entry item rather than a free-text one. If the current entry item is configured as selective, the received text string data S0 corresponding to the voice signal can be compared in turn with the text strings S1, S2, ..., SN of the N candidate options of the selective entry item, and the candidate text string Sk most similar to S0 is selected as the entry content of the current entry item and can be displayed at the corresponding position of the electronic medical record, where k = 1, 2, ..., N.
To compare the text string data S0 with any candidate text string Sk, one method is to first compute the edit distance LD(s0, sk), i.e. the minimum number of edit operations needed to change S0 into Sk, and then compute the similarity S(s0, sk) between S0 and Sk from the edit distance according to equation (1).
The larger the value of the similarity S(s0, sk), the more similar the text string data S0 and the candidate text string data Sk are.
As shown in Fig. 3, the preprocessing of the voice signal in step 21 may include the following steps.
At step 31, the voice signal is filtered. For example, but not limited to, a band-pass anti-aliasing filter with a pass band of 250 Hz to 3.5 kHz can perform a band-pass filtering operation on the received voice signal.
At step 32, the filtered voice signal is sampled and quantized to obtain a digitized voice signal. For example, but not limited to, an A/D sampler with a sampling frequency of 8 kHz and a quantization precision of 12 bits can sample and quantize the band-pass-filtered voice signal to obtain the digitized voice signal.
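The quantization half of step 32 can be sketched as follows. The 12-bit precision matches the example above; the signed integer range and the clipping behaviour are implementation assumptions, since the text specifies neither.

```python
def quantize(samples, bits=12):
    """Quantize analog samples in [-1.0, 1.0] to signed integers,
    e.g. 12-bit values in [-2048, 2047], as an A/D converter would."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    out = []
    for x in samples:
        q = int(round(x * (1 << (bits - 1))))
        out.append(max(lo, min(hi, q)))  # clip to the representable range
    return out
```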
At step 33, pre-emphasis is applied to the digitized voice signal. For example, but not limited to, a first-order finite impulse response high-pass digital filter can pre-emphasize the digitized voice signal so that the spectrum of the voice signal becomes flatter, where the transfer function of the filter can take the form of equation (2):
H(z) = 1 - 0.98z^(-1)    (2)
where z is the independent variable of the digital signal in the z-transform domain, and the parameter 0.98 is merely exemplary and can be adjusted in the range 0.9 to 1.0 according to the specific application.
At step 34, endpoint detection is performed on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments. For example, speech endpoint detection techniques can be used to detect the start point and end point of the valid signal in the pre-emphasized digital voice signal.
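The text does not name a particular endpoint detection algorithm; a common, minimal choice is short-time energy thresholding, sketched below under that assumption. The frame length and threshold are illustrative.

```python
def detect_endpoints(samples, frame_len=160, threshold=0.01):
    """Return (start, end) sample indices of the speech region, taken as
    the first and last frame whose mean energy exceeds the threshold,
    or None if no frame does (simple energy-based endpoint detection)."""
    voiced = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        if energy > threshold:
            voiced.append(i)
    if not voiced:
        return None
    return voiced[0], voiced[-1] + frame_len
```

Samples outside the returned region would then be discarded as the non-speech noise data the step describes.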
The present invention is described in detail below with reference to Fig. 4; note, however, that the present invention is not restricted to it.
When the doctor is ready to enter a medical record, the electronic medical record entry system can be started. At step 41, the entry item can be determined by a predefined order or by a received doctor's instruction, e.g. the entry item for the patient's condition. At step 42, the corresponding voice signal is received, e.g. voice information from the doctor about the patient's condition. At step 43, the voice signal can be preprocessed, which may include the filtering, sampling and quantization, pre-emphasis and endpoint detection described above. At step 44, characteristic parameters are extracted from the preprocessed voice signal to obtain characteristic parameter data. At step 45, speech recognition is performed on the characteristic parameter data to generate the text string data corresponding to the voice signal, e.g. "throat is red and swollen, runny nose, sneezing". At step 46, it is judged whether the current entry item is configured as a free-text entry item. If so, at step 47, the generated text string data is entered as the content, e.g. "throat is red and swollen, runny nose, sneezing" is entered and can be displayed. If the item is not free-text but selective, then at step 48, the generated text string data can be compared with the text strings of the N candidate options of the selective entry item, and the candidate text string most similar to the generated text string data is entered as the content, e.g. the candidate "pharyngitis, rhinorrhea, sneezing" is entered and can be displayed.
Correspondingly, the present invention also provides a medical record entry system. Fig. 5 is a structural block diagram of the medical record entry system provided according to one embodiment of the present invention. As shown in Fig. 5, the system may include: an entry item selection module 51 for determining the entry item to which the information to be entered corresponds; a speech reception module 52 for receiving the voice signal carrying that information; and a content generation module 53 for generating the entry content of the entry item from the voice signal. This greatly simplifies the process of entering a medical record and is easy to operate, thereby improving entry efficiency.
The entry item can be determined in several ways. In one way, the entry item selection module 51 determines it according to an entry order pre-stored in the server: for example, after the doctor presses an entry key on a client (an electronic medical record terminal on a computer, tablet, mobile phone, etc.), the client sends the entry item information corresponding to that key to the server, i.e. the entry item selection module 51 of the server prepares to receive according to the pre-stored order of entry items, for example, but not limited to, the order of patient condition, examination result, and doctor's advice. Alternatively, the entry item selection module 51 can determine the entry item from the doctor's instruction: for example, the doctor may press a button on the client corresponding to a certain kind of information (such as the patient's symptoms), or the speech reception module 52 (e.g. a microphone) may receive a voice instruction from the doctor (such as speaking "patient's symptoms"), thereby determining the entry item. After the entry item is determined, the speech reception module 52 (e.g. a microphone) can receive the information corresponding to it so that the corresponding content is entered. The above sends each entry from the client to the server to complete the entry of the whole record; alternatively, the whole record can be completed at the client first and then saved and sent to the server.
As shown in Fig. 6, the content generation module 53 may include: a preprocessing module 531 for preprocessing the voice signal to remove interference so that more accurate voice information can be obtained; a feature extraction module 532 for extracting characteristic parameters from the preprocessed voice signal to generate characteristic parameter data, where the extracted characteristic parameters may include, but are not limited to, linear prediction cepstral coefficients (LPCC) and mel (Mel) frequency cepstral coefficients (MFCC); a speech recognition module 533 for performing speech recognition on the characteristic parameter data to generate text string data S0 corresponding to the voice signal (the recognition can be performed by the server or by the electronic medical record system: for example, the electronic medical record system may send the extracted characteristic parameter data to the server, which performs the recognition, or the electronic medical record system itself may have a speech recognition function); and an adoption module 534 for generating the entry content from the text string data S0 (if the recognition was performed at the server, the server can send the generated text string data S0 to the electronic medical record system, which then generates the entry content).
The adoption module 534 generates the entry content in one of the following ways: taking the text string data S0 itself as the entry content; or comparing the generated text string data S0 with pre-stored text strings and selecting the pre-stored text string most similar to S0 as the entry content.
For example, the adoption module 534 can use the text string data S0 directly as the entry content. For instance, after the electronic medical record terminal receives the text string data S0 corresponding to the voice signal, it can check whether the current entry item is configured as a free-text entry item; if so, the received text string data S0 can be used as the entry content of the current entry item, simplifying the entry process for the electronic medical record, and can be displayed at the corresponding position of the record.
To improve the accuracy of record entry, the current entry item can also be configured as a selective entry item rather than a free-text one. If the current entry item is configured as selective, a comparison submodule of the adoption module 534 can, for example, compare the received text string data S0 corresponding to the voice signal in turn with the text strings S1, S2, ..., SN of the N candidate options of the selective entry item, select the candidate text string Sk most similar to S0 as the entry content of the current entry item, and display it at the corresponding position of the electronic medical record, where k = 1, 2, ..., N. The similarity can be calculated as described above for the medical record entry method.
The preprocessing module 531 may include: a filtering module for filtering the voice signal, for example, but not limited to, with a band-pass anti-aliasing filter with a pass band of 250 Hz to 3.5 kHz that performs a band-pass filtering operation on the received voice signal; a sampling and quantization module for sampling and quantizing the filtered voice signal to obtain a digitized voice signal, for example, but not limited to, with an A/D sampler with a sampling frequency of 8 kHz and a quantization precision of 12 bits applied to the band-pass-filtered voice signal; a pre-emphasis module for pre-emphasizing the digitized voice signal, for example, but not limited to, with a first-order finite impulse response high-pass digital filter so that the spectrum of the voice signal becomes flatter, the details being similar to the description of the medical record entry method above and not repeated here; and an endpoint detection module for performing endpoint detection on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments, for example using speech endpoint detection techniques to detect the start point and end point of the valid signal in the pre-emphasized digital voice signal.
The details and benefits of the medical record entry system are the same as those described above for the medical record entry method and are not repeated here.
The preferred embodiments of the present invention have been described in detail above with reference to the drawings; however, the present invention is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the present invention, many simple variants of the technical scheme are possible, and these simple variants all belong to the protection scope of the present invention.
It should further be noted that the specific technical features described in the above embodiments can, where not contradictory, be combined in any suitable way; to avoid unnecessary repetition, the various possible combinations are not described separately.
In addition, the various embodiments of the present invention can also be combined in any way; as long as they do not contravene the idea of the present invention, such combinations should likewise be regarded as disclosed by the present invention.
Claims (10)
1. a case input method, it is characterised in that described case input method includes:
Determine the entry item corresponding to typing information;
Receive the voice signal of described typing information;And
The typing content of described entry item is generated based on described voice signal.
2. The case input method according to claim 1, characterized in that generating the entry content of the entry item based on the voice signal comprises:
preprocessing the voice signal;
extracting characteristic parameters from the preprocessed voice signal to generate characteristic parameter data;
performing speech recognition on the characteristic parameter data to generate text character string data corresponding to the voice signal; and
generating the entry content based on the text character string data.
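The four steps of claim 2 compose into a simple pipeline. In the sketch below each stage is passed in as a callable, since the claim names the stages but does not fix concrete algorithms for them:

```python
def generate_entry_content(voice_signal, preprocess, extract_features,
                           recognize, choose_content):
    """Chain the stages of claim 2: preprocessing, characteristic-parameter
    extraction, speech recognition, then entry-content generation."""
    characteristic_data = extract_features(preprocess(voice_signal))
    text_string = recognize(characteristic_data)
    return choose_content(text_string)
```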
3. The case input method according to claim 2, characterized in that generating the entry content based on the text character string data comprises one of the following:
using the text character string data as the entry content; and
comparing the generated text character string data with pre-stored text character strings, and selecting the pre-stored text character string with the highest similarity to the generated text character string data as the entry content.
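The second alternative of claim 3 — picking the pre-stored string most similar to the recognized one — can be illustrated with `difflib.SequenceMatcher` as the similarity measure; the claim itself does not prescribe any particular similarity metric:

```python
from difflib import SequenceMatcher

def match_entry(recognized, stored_entries):
    """Select the pre-stored text character string whose similarity to
    the recognized string is highest (claim 3, second alternative)."""
    return max(stored_entries,
               key=lambda s: SequenceMatcher(None, recognized, s).ratio())
```

A slightly misrecognized symptom name would thus still map onto the closest stored entry.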
4. The case input method according to claim 2, characterized in that the preprocessing comprises:
filtering the voice signal;
sampling and quantizing the filtered voice signal to obtain a digitized voice signal;
performing pre-emphasis processing on the digitized voice signal; and
performing end-point detection on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments.
5. The case input method according to any one of claims 2-4, characterized in that the characteristic parameters extracted from the preprocessed voice signal include: linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC).
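MFCCs are conventionally obtained by mapping the short-time spectrum onto the mel scale, taking log filterbank energies, and applying a discrete cosine transform. The mel mapping at the heart of that process is commonly computed with O'Shaughnessy's formula — an illustrative convention, not one fixed by the claim:

```python
import math

def hz_to_mel(f_hz):
    """Map a frequency in Hz onto the perceptual mel scale; the MFCC
    filterbank places its triangular filters uniformly on this scale."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping, used to position filterbank edges back in Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)
```

By construction, 1000 Hz sits at roughly 1000 mel, and the scale grows logarithmically above that, mirroring human pitch perception.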
6. A case input system, characterized in that the case input system comprises:
an entry item determination module, for determining the entry item corresponding to entry information;
a speech reception module, for receiving a voice signal of the entry information; and
a content generation module, for generating the entry content of the entry item based on the voice signal.
7. The case input system according to claim 6, characterized in that the content generation module comprises:
a preprocessing module, for preprocessing the voice signal;
a feature extraction module, for extracting characteristic parameters from the preprocessed voice signal to generate characteristic parameter data;
a speech recognition module, for performing speech recognition on the characteristic parameter data to generate text character string data corresponding to the voice signal; and
an adoption module, for generating the entry content based on the text character string data.
8. The case input system according to claim 7, characterized in that the adoption module generates the entry content by one of the following:
using the text character string data as the entry content; and
comparing the generated text character string data with pre-stored text character strings, and selecting the pre-stored text character string with the highest similarity to the generated text character string data as the entry content.
9. The case input system according to claim 7, characterized in that the preprocessing module comprises:
a filtering module, for filtering the voice signal;
a sampling and quantization module, for sampling and quantizing the filtered voice signal to obtain a digitized voice signal;
a pre-emphasis module, for performing pre-emphasis processing on the digitized voice signal; and
an endpoint detection module, for performing end-point detection on the pre-emphasized digitized voice signal to discard the noise data of non-speech segments.
10. The case input system according to any one of claims 7-9, characterized in that the characteristic parameters extracted by the feature extraction module from the preprocessed voice signal include: linear prediction cepstral coefficients (LPCC) and mel-frequency cepstral coefficients (MFCC).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610645221.3A CN106251872A (en) | 2016-08-09 | 2016-08-09 | A kind of case input method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106251872A true CN106251872A (en) | 2016-12-21 |
Family
ID=58078923
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102543078A (en) * | 2010-12-09 | 2012-07-04 | 盛乐信息技术(上海)有限公司 | Electronic card system, speech recording method and speech retrieval method of electronic card |
CN103839211A (en) * | 2014-03-23 | 2014-06-04 | 合肥新涛信息科技有限公司 | Medical history transferring system based on voice recognition |
CN104485105A (en) * | 2014-12-31 | 2015-04-01 | 中国科学院深圳先进技术研究院 | Electronic medical record generating method and electronic medical record system |
CN104794203A (en) * | 2015-04-24 | 2015-07-22 | 中国科学院南京地理与湖泊研究所 | System and method for alga enumeration data voice rapid inputting and report generation |
CN105260974A (en) * | 2015-09-10 | 2016-01-20 | 济南市儿童医院 | Method and system for generating electronic case history with informing and signing functions |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107093426A (en) * | 2017-04-26 | 2017-08-25 | 医惠科技有限公司 | The input method of voice, apparatus and system |
CN107331391A (en) * | 2017-06-06 | 2017-11-07 | 北京云知声信息技术有限公司 | A kind of determination method and device of digital variety |
CN107564571A (en) * | 2017-08-30 | 2018-01-09 | 浙江宁格医疗科技有限公司 | A kind of structured electronic patient record generation method and its corresponding storage facilities and mobile terminal based on phonetic entry |
CN107633876A (en) * | 2017-10-31 | 2018-01-26 | 郑宇 | A kind of internet medical information processing system and method based on mobile platform |
CN109920408A (en) * | 2019-01-17 | 2019-06-21 | 平安科技(深圳)有限公司 | Dictionary item setting method, device, equipment and storage medium based on speech recognition |
CN109920408B (en) * | 2019-01-17 | 2024-05-28 | 平安科技(深圳)有限公司 | Dictionary item setting method, device, equipment and storage medium based on voice recognition |
CN109817218A (en) * | 2019-03-13 | 2019-05-28 | 北京青燕祥云科技有限公司 | The method and system of medical speech recognition |
CN110196670A (en) * | 2019-05-31 | 2019-09-03 | 数坤(北京)网络科技有限公司 | A kind of document creation method, equipment and computer readable storage medium |
CN110767282A (en) * | 2019-10-30 | 2020-02-07 | 苏州思必驰信息科技有限公司 | Health record generation method and device and computer readable storage medium |
CN110767282B (en) * | 2019-10-30 | 2022-07-29 | 思必驰科技股份有限公司 | Health record generation method and device and computer readable storage medium |
CN112712805A (en) * | 2020-12-29 | 2021-04-27 | 安徽科大讯飞医疗信息技术有限公司 | Method and device for generating electronic medical record report and computer readable storage medium |
CN112712805B (en) * | 2020-12-29 | 2021-12-14 | 安徽科大讯飞医疗信息技术有限公司 | Method and device for generating electronic medical record report and computer readable storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20161221 |