CN117133456A - Cognitive assessment method and system based on intelligent guidance and algorithm analysis - Google Patents

Info

Publication number
CN117133456A
CN117133456A (application CN202310995715.4A)
Authority
CN
China
Prior art keywords
evaluation
cognitive
module
patient
data
Prior art date
Legal status
Pending
Application number
CN202310995715.4A
Other languages
Chinese (zh)
Inventor
孙高峰 (Sun Gaofeng)
刘川 (Liu Chuan)
Current Assignee
Beijing Smart Spirit Technology Co., Ltd.
Original Assignee
Beijing Smart Spirit Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Smart Spirit Technology Co., Ltd.
Priority to CN202310995715.4A
Publication of CN117133456A
Legal status: Pending

Classifications

    • G16H 50/30 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices; for individual health risk assessment
    • G06F 16/90344 — Query processing by using string matching techniques
    • G06F 18/22 — Pattern recognition: matching criteria, e.g. proximity measures
    • G06F 40/126 — Handling natural language data: character encoding
    • G06F 40/253 — Grammatical analysis; style critique
    • G06F 40/30 — Semantic analysis
    • G06V 10/74 — Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06V 30/22 — Character recognition characterised by the type of writing
    • G06V 40/70 — Multimodal biometrics, e.g. combining information from different biometric modalities
    • G10L 15/26 — Speech to text systems
    • G16H 10/20 — ICT for the handling or processing of patient-related medical or healthcare data, for electronic clinical trials or questionnaires
    • G16H 10/60 — ICT for patient-specific data, e.g. for electronic patient records
    • G16H 15/00 — ICT specially adapted for medical reports, e.g. generation or transmission thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Primary Health Care (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention discloses a cognitive assessment method and system based on intelligent guidance and algorithm analysis. The method comprises the following steps: patient information is entered and a cognitive assessment task is created in a cognitive assessment client subsystem; the cognitive assessment is performed to obtain the patient's assessment data; the cognitive assessment algorithm model subsystem computes assessment scores from the assessment data; and the assessment scores are calibrated and reviewed to generate a cognitive assessment report. By adopting multiple cognitive assessment guidance modes, the invention enables one doctor to serve many patients, or allows assessment without the participation of professional staff. Scoring differences caused by variations in the knowledge and expertise of professionals are avoided, so that assessment results are more objective, accurate and uniform.

Description

Cognitive assessment method and system based on intelligent guidance and algorithm analysis
Technical Field
The invention relates to a cognitive assessment method based on intelligent guidance and algorithm analysis, and to a corresponding cognitive assessment system, belonging to the technical field of cognitive assessment.
Background
Traditional cognitive assessment is concentrated in hospitals, which results in low awareness, low visit rates and inconvenient assessment. Moreover, cognitive assessment must be carried out under the direction of a professional: the professional administers a paper scale, reads the questions to the patient, and scores the answers as the patient responds.
With the continued spread of the Internet, various online assessment modes have emerged one after another, and cognitive assessment scales are now managed electronically. A dedicated cognitive test management system manages the scales online; the back end creates assessment tasks that are administered through an assessment terminal, so the patient is assessed under the guidance of a professional, who gives scores in real time. The final data are stored in the system and an electronic report is generated, allowing historical assessment data and reports to be viewed online.
Chinese patent application No. 202011073958.5 discloses an electronic cognitive assessment system suitable for rapid detection of mild cognitive impairment. The system operates as follows: the subject's personal information and medical history are entered; a rapid cognitive assessment for mild cognitive impairment is completed; the data generated during the assessment and the subject's personal information are stored in a database, analyzed and compared, and an assessment report is generated; the report can then be reviewed and the data managed. The system is operated by an examiner (doctor, researcher, etc.) and can rapidly and accurately screen subjects for mild cognitive impairment in a short time, making it convenient for large-scale screening. However, it requires a professional to accompany the subject throughout, and the resulting score is affected by the knowledge and expertise of that professional.
Disclosure of Invention
A first object of the invention is to provide a cognitive assessment method based on intelligent guidance and algorithm analysis.
A second object of the invention is to provide a corresponding cognitive assessment system based on intelligent guidance and algorithm analysis.
To achieve these objects, the invention adopts the following technical scheme:
according to a first aspect of an embodiment of the present invention, there is provided a cognitive assessment method based on intelligent guidance and algorithm analysis, including the steps of:
s10, inputting patient information, and establishing a primary cognitive assessment task in a cognitive assessment client subsystem;
s20, performing cognitive evaluation to obtain evaluation data of a patient;
s30, carrying out data calculation by the cognitive evaluation algorithm model subsystem according to the evaluation data to obtain evaluation scores;
s40, calibrating and rechecking the evaluation score to generate a cognitive evaluation report,
in the step S30, the data calculation includes analyzing the voice content and analyzing the written content, identifying the voice content or the written content to obtain a text, encoding the text, calculating the similarity between the encoding and the encoding of the preset correct text of the system, and judging the question score if the similarity reaches a preset similarity threshold; and if the similarity is lower than the similarity threshold, judging that the question does not score, thereby obtaining the evaluation score.
Preferably, the encoding comprises shape-code encoding, sound-code encoding and hybrid encoding, where the hybrid encoding combines shape-code and sound-code encoding.
Preferably, the shape-code encoding comprises character-shape encoding, character-structure encoding and four-corner encoding.
Preferably, the character-shape encoding classifies characters into several groups according to their shapes, with the characters in each group sharing the same shape code;
the character-structure encoding classifies characters into several groups according to their compositional structure, with the characters in each group sharing the same structure code.
Preferably, the sound-code encoding rule is as follows: a Chinese character is converted into pinyin, the pinyin is split into an initial and a final, and a code combination of initial, final, rhyme supplement and tone is generated according to the corresponding coding tables.
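The sound-code rule can be sketched as follows. The initial/final split reflects standard pinyin structure, but the code tables below are hypothetical placeholders, and the rhyme-supplement code from the rule above is omitted for brevity:

```python
# Hypothetical sketch of the sound-code rule: pinyin -> initial + final + tone.
# The code tables here are illustrative placeholders, not the patent's tables.
INITIALS = ["zh", "ch", "sh", "b", "p", "m", "f", "d", "t", "n", "l",
            "g", "k", "h", "j", "q", "x", "r", "z", "c", "s", "y", "w"]

# Placeholder code tables (assumption: one code character per unit).
INITIAL_CODE = {ini: chr(ord("A") + i) for i, ini in enumerate(INITIALS)}
FINAL_CODE = {"ong": "1", "ang": "2", "an": "3", "ao": "4", "a": "5", "i": "6", "u": "7"}

def split_pinyin(syllable: str) -> tuple[str, str]:
    """Split a toneless pinyin syllable into initial and final."""
    for ini in INITIALS:  # two-letter initials (zh/ch/sh) are listed first
        if syllable.startswith(ini):
            return ini, syllable[len(ini):]
    return "", syllable  # zero-initial syllable, e.g. "an"

def sound_code(syllable: str, tone: int) -> str:
    """Combine initial code + final code + tone digit, following the rule's shape."""
    ini, fin = split_pinyin(syllable)
    return INITIAL_CODE.get(ini, "0") + FINAL_CODE.get(fin, "0") + str(tone)
```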
Preferably, the data calculation further comprises identifying human-body posture feature points and facial feature points through a deep learning model, quantifying feature data for each frame of image data from these feature points, and finally performing behavior-feature matching on the feature data of consecutive frames.
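A minimal sketch of the per-frame quantification and consecutive-frame matching, assuming keypoints have already been produced by a pose-estimation model; the feature definition and the cosine threshold are illustrative, not the patent's:

```python
# Sketch: quantify per-frame features from (hypothetical) detected keypoints and
# match a behaviour template across consecutive frames via cosine similarity.
import numpy as np

def frame_feature(keypoints: np.ndarray) -> np.ndarray:
    """Centre (N, 2) keypoint coordinates and flatten into a feature vector."""
    centred = keypoints - keypoints.mean(axis=0)
    return centred.ravel()

def matches_template(frames, template, threshold=0.9):
    """True if every frame's feature is cosine-similar to the template."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return all(cos(frame_feature(f), template) >= threshold for f in frames)

kp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # toy keypoints
template = frame_feature(kp)
# Translated copies of the same pose still match (centring removes translation).
ok = matches_template([kp + 5.0, kp + np.array([2.0, -3.0])], template)
```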
Preferably, the data calculation further comprises using a deep learning algorithm to recognize four patterns appearing in hand-drawn images: connecting lines, cubes, clock drawings and crossed pentagons. The recognition comprises labeling the sample data in the database with medical evidence-based annotations, building and training a convolutional neural network recognition model, and performing recognition with the trained model.
According to a second aspect of the embodiments of the invention, a cognitive assessment system based on intelligent guidance and algorithm analysis is provided, comprising a cognitive assessment client subsystem, a cognitive assessment algorithm model subsystem and a cognitive assessment background management subsystem, and used to implement the above cognitive assessment method.
Preferably, the cognitive assessment algorithm model subsystem comprises a voice content analysis module, a video action extraction module and an image recognition module:
the voice content analysis module analyzes voice to obtain text, the image recognition module converts handwritten content into text, and the video action extraction module analyzes video content to obtain the image data of each frame.
Preferably, the cognitive assessment client subsystem comprises a patient login module, a dialect switching module, an intelligent guidance module, a voice broadcast module, an audio/video recording module, a drawing module, a special clicking module and a global pause module.
Compared with the prior art, the invention has the following technical effects. By adopting multiple cognitive assessment guidance modes, one doctor can serve many patients, or the assessment can proceed without the participation of professional staff; scoring differences caused by variations in the knowledge and expertise of professionals are avoided, so that assessment results are objective, accurate and uniform. Patient compliance in completing the test is improved, avoiding situations in which the patient refuses the assessment, lacks the ability to complete it, or cannot persist to the end. Labor costs are reduced, which facilitates the popularization of cognitive health services.
Drawings
Fig. 1 is a schematic diagram of the overall structure of the cognitive assessment client subsystem in an embodiment of the present invention;
FIG. 2 is a schematic workflow diagram of the audio/video recording module in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the overall structure of the cognitive assessment algorithm model subsystem in an embodiment of the present invention;
FIG. 4 is a schematic workflow diagram of the voice content analysis module in an embodiment of the present invention;
FIG. 5 is a schematic workflow diagram of the video action extraction module in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the neural network model used by the image recognition module in an embodiment of the present invention;
FIG. 7 is a flow chart of the handwritten Chinese character recognition algorithm in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the overall structure of the cognitive assessment background management subsystem in an embodiment of the present invention.
Detailed Description
The technical contents of the present invention will be described in detail with reference to the accompanying drawings and specific examples.
Most patients who participate in cognitive assessment have some degree of cognitive impairment. Some suffer from Alzheimer's disease, and their pronunciation and writing differ from those of healthy people; others experience emotional fluctuations during the assessment, which reduces how well their answers match the questions, so their pronunciation or writing can also change considerably. The cognitive assessment method and system based on intelligent guidance and algorithm analysis can therefore be used to improve the accuracy of cognitive assessment.
The invention can guide the patient to complete the cognitive assessment task autonomously, without on-site guidance from a professional (such as a psychologist). The existing one-to-one guidance mode can thus be changed to testing at home, with the professional providing online guidance through the platform; the one-to-one mode can also become a one (doctor) to many (patients) mode. This saves cost and is convenient for patients, especially those with limited mobility.
Moreover, during the cognitive assessment, the standard answer corresponding to each assessment task is known in advance. When the cognitive assessment method based on intelligent guidance and algorithm analysis is used, similarity is computed only between the data input by the patient and the pre-stored standard answer, which improves recognition accuracy markedly beyond what is possible without a standard answer.
In addition, the answer collection mode for each cognitive assessment task is preset to one of four modes: drawing (image), audio recording, video recording and clicking. For tasks whose answer collection mode is drawing (image), the answer value is obtained by image recognition; for audio-recording tasks, by speech recognition; for video-recording tasks, by video action extraction; and for clicking tasks, the clicked answer content is obtained directly.
Specifically, the embodiment of the invention discloses a cognitive evaluation system based on intelligent guidance and algorithm analysis, which comprises three subsystems: the system comprises a cognitive evaluation client subsystem, a cognitive evaluation algorithm model subsystem and a cognitive evaluation background management subsystem.
As shown in fig. 1, the cognitive assessment client subsystem displays the cognitive assessment scale to the patient, intelligently guides the patient through the assessment, and submits the assessment data. The subsystem comprises a patient login module, a dialect switching module, an intelligent guidance module, a voice broadcast module, an audio/video recording module, a drawing module, a special clicking module and a global pause module. The functions of each module are as follows:
Patient login module: the patient logs into the cognitive assessment client subsystem, and the relevant scale information for the current assessment task is obtained from data such as the patient number.
Dialect switching module: before entering the scale, the patient selects a familiar dialect and then begins answering. The system loads the corresponding voice data according to the selected dialect; likewise, the cognitive assessment algorithm model subsystem analyzes the corresponding audio/video files according to the dialect type.
Intelligent guidance module: the core module that enables patients to complete the answering independently. It mainly comprises the answering-flow introductions for drawing-type and recording-type questions, and interactive guidance during answering. The answering-flow introductions let the patient learn the answering procedure on their own, so that they can smoothly complete questions of those types in the scale. The interactive guidance provides more detailed interactive feedback and operating instructions and continuously guides the patient's next operation, so that the patient can answer independently. This process uses H5 animation, H5 audio/video recording, audio volume monitoring, speech transcription, algorithm model analysis and other techniques to collect and analyze the patient's behavior and give operating guidance.
Voice broadcast module: provides voice broadcasting of questions, stems and various prompts, so that the patient can easily understand the questions and operations. The module supports start, pause, resume and stop, enabling richer interaction during answering.
Audio/video recording module: collects the patient's audio/video answering data. As shown in fig. 2, the module supports activating the microphone/camera in advance and then turning recording on or off when needed, and this on/off operation can be repeated. The module also supports pausing and resuming a recording. The answering process can therefore start and stop repeatedly, and audio/video is uploaded in segments, improving the patient experience. In addition, one microphone/camera is shared throughout the answering process, which avoids incomplete recordings caused by a delayed start of the microphone/camera. The module can collect voice, from which text is obtained by speech recognition and then encoded; it can also collect video, from which sequence data are obtained by posture recognition for behavior analysis.
Drawing module: collects the answering data of drawing/writing questions. It uses a Canvas drawing board, so the patient can complete the drawing or writing required by the scale question. For drawing questions, the images collected by the module are analyzed with a neural network model; for writing questions, the images are converted into text for encoding using OCR (optical character recognition).
Special clicking module: used for answering some of the "attention"-dimension questions. The module plays the stem voice, and the patient performs the corresponding click-recording operations according to what they hear. These click records correspond to the stem voice and serve as the basis for scoring the questions.
Global pause module: during the answering process, a global pause/continue function can be invoked to pause and resume the recording, the voice broadcast and, where applicable, the timer.
As shown in fig. 3, the cognitive assessment algorithm model subsystem calculates and analyzes the data submitted by the cognitive assessment client according to the string fuzzy-matching method described later, judges the result by matching the content of the patient's answer against the standard answer, and gives a score (as shown in fig. 4). The subsystem comprises a voice content analysis module, a video action extraction module and an image recognition module. The functions of each module are as follows:
Voice content analysis module: analyzes the voice through speech recognition to obtain text information. Verification against clinical data shows that the accuracy of the speech recognition reaches 90%.
Video action extraction module: analyzes, from the video data submitted by the patient, whether the patient performed the corresponding actions, using human posture recognition and face recognition (as shown in fig. 5).
Image recognition module: comprises hand-drawn image recognition as well as handwriting OCR recognition with syntactic analysis, and converts handwritten text or hand-drawn images into text for processing.
Specifically, the hand-drawn image recognition identifies the four patterns appearing in the hand-drawn images of the scale test: connecting lines, cubes, clock drawings and crossed pentagons. Recognition uses a deep learning algorithm: the sample data in the database are first labeled with medical evidence-based annotations, then a convolutional neural network recognition model is built and trained (as shown in fig. 6). The final accuracy of the model exceeds the scoring standard of professionals.
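The recognition pipeline can be illustrated with a toy forward pass. This is a NumPy sketch with random placeholder weights, not the trained model described above; a real implementation would train a CNN in a deep-learning framework on the labeled sample data:

```python
# Minimal NumPy sketch of a CNN forward pass for classifying hand-drawn images
# into the four pattern classes named in the text. Weights are random
# placeholders; only the conv -> ReLU -> pool -> linear structure is shown.
import numpy as np

CLASSES = ["lines", "cube", "clock", "pentagons"]
rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def forward(img, kernels, w_fc):
    # conv -> ReLU -> global average pool -> linear logits
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return feats @ w_fc  # one logit per class

kernels = rng.standard_normal((8, 3, 3))   # 8 random 3x3 filters (placeholder)
w_fc = rng.standard_normal((8, 4))         # linear head to the 4 classes
img = rng.standard_normal((28, 28))        # stand-in for a rasterized drawing
logits = forward(img, kernels, w_fc)
pred = CLASSES[int(np.argmax(logits))]
```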
As shown in fig. 7, the handwriting OCR recognition and syntactic analysis uses a handwritten Chinese character recognition algorithm to recognize the text handwritten by the patient, then performs syntactic and semantic analysis on the text to determine whether a sentence meets the question's requirements. In one embodiment of the invention, the handwritten Chinese character recognition model achieves an accuracy of 83.7% on the test data set and production data.
As shown in fig. 8, the cognitive assessment background management subsystem manages patient information and assessment tasks, and comprises a patient information management module, a scale application management module, an assessment task management module, an assessment review module and an assessment report management module. The functions of each module are as follows:
Patient information management module: establishes a personal profile for the patient, including name, sex, date of birth, contact information, educational level, cognitive symptoms, etc.
Scale application management module: manages multiple scales in a unified way, including configuring scale information and generating the JSON data of each scale.
Assessment task management module: a patient and a scale are selected, and a cognitive assessment task is pushed to the patient. The assessment task includes data such as a task number, a scale number and a patient number.
Assessment review module: patient information is obtained from the patient number, scale information from the scale number, and the answer information and algorithm score for each question from the task number. The answer information includes the patient's audio, video, drawing data and click data, which the module displays for professionals to review.
Assessment report management module: displays the score of each dimension, feeds the assessment result back to the patient, and advises the patient on whether to undertake cognitive training. Preview and print functions are provided.
Before introducing the cognitive assessment method based on intelligent guidance and algorithm analysis provided by the embodiments of the invention, the string fuzzy-matching method adopted by the invention is introduced first. The method comprises the following steps:
S1: encode the collected text.
The text is collected by the audio/video recording module, the drawing module and the special clicking module, recognized (by speech recognition or OCR), and then encoded.
The encoding modes include shape-code encoding, sound-code encoding and hybrid encoding, selected in advance according to accuracy. The specific choice of encoding mode depends on the actual scenario, which the invention does not limit. For example, hybrid encoding is used when the recognition scene requires both speech input and handwriting input, or when the accuracy of shape-code or sound-code encoding alone does not reach a preset level.
In one embodiment of the invention, the system's correct text is the text and/or speech text of the standard answer in the system database, where the speech text is obtained by converting the standard-answer speech into text using speech recognition software.
This is because, in this embodiment, a standard answer already exists for each question before the cognitive assessment, so this standard answer (the correct answer) can be taken as the system's correct text. Hybrid-encoding the system's correct text with the coding rules provided by the invention yields the character string corresponding to the correct text. With this string as a reference, the similarity of the string corresponding to the collected text can be calculated, so that it can be judged whether the collected text matches the standard answer (the correct answer). Based on the normal-range threshold of the cognitive test, the results over the whole assessment (whether each question was answered correctly) are integrated, and whether the patient has a cognitive disorder, and its severity, can be judged.
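The integration step described above can be sketched as follows; the normal-range threshold value here is illustrative only, not a clinical cutoff from the patent:

```python
# Sketch of the integration step: per-question results (1 = scored,
# 0 = not scored) are summed and compared against a normal-range threshold
# to flag possible cognitive impairment. The default value is illustrative.
def assess(question_scores: list[int], normal_threshold: int = 26) -> bool:
    """Return True when the total score falls below the normal range."""
    return sum(question_scores) < normal_threshold

flagged = assess([1, 1, 0, 1, 0] * 4)   # 12 points across 20 questions
```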
In one embodiment of the present invention, the encoding rule of the shape code encoding is:
The shape code of a Chinese character is obtained by concatenating, in order, the character shape code (1 symbol), the character structure code (1 symbol) and the four-corner code (5 symbols).
Digit shape code: the digit character repeated 7 times (1 symbol × 7).
English character shape code: the letter repeated 7 times (1 symbol × 7).
The character shape code classifies characters into groups according to their shapes; the characters in a group share the same shape code. Examples are shown in the following table:
Character shape    Code    Examples
Square             0       Week, solid, four, field
Trapezoid          1       Bang, zhi, pin, li
Fan                2       Claustrophobic, normal, speaking, six
Diamond            3       10, Jinzhong rice
Triangle           4       Upper, lower, big, man, soil
Jar                5       Shake, jin and an
House              6       In the middle of the abdomen, south and season
Spiral             7       Summer heat relieving tool
Horn               8       And, throat and shallow
Step               9       Towards, add
The character structure code classifies characters into groups according to their compositional structure, and the character structure codes of the characters in each group are the same.
The four-corner coding rule is as follows: Chinese characters are numbered according to the single or compound strokes they contain; the strokes are classified into a number of categories — for example ten categories (head, horizontal, vertical, dot, fork, insert, square, angle, eight and small) — represented by the digits 0 to 9. The strokes at the four corners of each character are numbered in the order upper left, upper right, lower left, lower right, with an additional digit for a supplementary corner where present, so that a Chinese character is converted into at most five Arabic numerals, i.e. 5 digits represent one Chinese character.
Following the above coding rules, "skin" is encoded as "2040247", "ball" as "8113199", "3" as "3333333" and "d" as "ddddddd".
Wherein each character code is 7 bits in fixed length.
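The fixed-length 7-symbol shape code described above can be sketched as follows. This is a minimal illustration: the lookup-table entries are assumed placeholders, not the patent's actual tables, which cover the full character set.

```python
# Illustrative stand-in tables (assumed values, not the patent's tables).
CHAR_SHAPE = {"田": "0", "上": "4"}           # character shape code (1 symbol)
CHAR_STRUCT = {"田": "3", "上": "1"}          # character structure code (1 symbol)
FOUR_CORNER = {"田": "60400", "上": "21100"}  # four-corner code (5 symbols)

def shape_code(ch: str) -> str:
    """Return the fixed-length 7-symbol shape code for one character."""
    if ch.isascii() and (ch.isdigit() or ch.isalpha()):
        # Digits and English letters are repeated to fill all 7 symbols,
        # matching the examples "3" -> "3333333" and "d" -> "ddddddd".
        return ch * 7
    # Chinese character: shape (1) + structure (1) + four-corner (5).
    return CHAR_SHAPE[ch] + CHAR_STRUCT[ch] + FOUR_CORNER[ch]
```

The fixed 7-symbol length makes every character comparable position by position, which the weighted similarity of step S3 below relies on.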
In one embodiment of the present invention, the coding rule of the sound code is: the Chinese character is converted into pinyin; after the conversion, the pinyin is split into an initial and a final, and the coding combination — initial, final, rhyme supplement and tone — is generated according to the corresponding coding tables.
The pinyin initial coding table is as follows:
Initial  Code    Initial  Code    Initial  Code    Initial  Code
b        1       p        2       m        3       f        4
d        5       t        6       n        7       l        7
g        8       k        9       h        A       j        B
q        C       x        D       zh       E       ch       F
sh       G       r        H       z        E       c        F
s        G       y        I       w        J       others   0
The pinyin final coding table is as follows:
Final   Code    Final   Code    Final   Code    Final   Code
a       1       o       2       e       3       i       4
u       5       v       6       ai      7       ei      7
ui      8       ao      9       ou      A       iou     B
ie      C       ve      D       er      E       an      F
en      G       in      H       un      I       vn      J
ang     F       eng     G       ing     H       ong     K
Following the above coding rules, "national flag" is encoded as ['2852', '4C02'] and "national enterprise" is encoded as ['2852', '4C03'].
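Sound code generation for one syllable can be sketched as follows. The table subsets and the tone-slot handling are illustrative assumptions (the patent's full tables appear above, and the rhyme-supplement slot is omitted here for brevity):

```python
# Subsets of the initial/final tables above; values assumed for illustration.
INITIALS = {"b": "1", "g": "8", "zh": "E", "q": "C"}
FINALS = {"a": "1", "i": "4", "an": "F", "ou": "A"}

def sound_code(pinyin: str, tone: str = "0") -> str:
    """Encode one pinyin syllable as initial code + final code + tone."""
    initial = ""
    for cand in sorted(INITIALS, key=len, reverse=True):  # try "zh" before "z"
        if pinyin.startswith(cand):
            initial = cand
            break
    final = pinyin[len(initial):]  # the rest of the syllable is the final
    return INITIALS.get(initial, "0") + FINALS.get(final, "0") + tone
```

The longest-match split matters because two-letter initials such as "zh", "ch" and "sh" must be tried before their one-letter prefixes.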
In one embodiment of the present invention, the coding rule of hybrid coding is: shape code + sound code = hybrid code, i.e. the shape code and the sound code are concatenated. For example, following this rule, the hybrid code of "mikano" is "52442487J51": its shape code is "5244248" and its sound code is "7J51".
Whether the text is obtained by speech recognition or by image recognition, it can be used to generate a sound code, a shape code or a hybrid code. In the following step S2, the appropriate code is selected as needed for the similarity calculation between the "code of the collected text" and the corresponding code of the standard answer.
For example, the text "national flag" is obtained by speech recognition, the sound code corresponding to "national flag" is generated, and this sound code serves as the "code of the collected text". Alternatively, after the text "national flag" is obtained by speech recognition, the corresponding shape code is generated and used as the code of the collected text. As a further alternative, after the text "national flag" is obtained by speech recognition, both its sound code and its shape code are generated and concatenated into a hybrid code, which serves as the code of the "collected text".
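The concatenation rule and the per-mode code selection described above can be sketched as follows; `shape` and `sound` here stand for codes produced by the respective coders:

```python
def hybrid_code(shape: str, sound: str) -> str:
    # Hybrid code = shape code followed by sound code,
    # e.g. "5244248" + "7J51" -> "52442487J51".
    return shape + sound

def choose_code(mode: str, shape: str, sound: str) -> str:
    """Pick the code matching the question's answer mode."""
    if mode == "text":      # typed/handwritten answer -> shape code
        return shape
    if mode == "speech":    # spoken answer -> sound code
        return sound
    return hybrid_code(shape, sound)  # combined mode -> hybrid code
```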
S2: and inputting the codes of the pre-stored correct text of the system and the codes of the acquired text into a KMP (Knuth-Morris-Pratt) algorithm to obtain two character strings needing to calculate the similarity.
The formula of the KMP algorithm is as follows:
wherein next [ i ] is an array; next val [ i ] is an array calculated on the basis of next; is_similarity is a similarity calculation function (e.g., hamming distance); s [ i ] and S [ next [ i ] are character strings, namely coding sequences, of which the similarity needs to be calculated; thresh is a preset threshold.
Those skilled in the art will appreciate that the character code of the system correct text can be determined in advance from the standard answers, and the corresponding character strings can be pre-computed and stored. If the character string of the system correct text is pre-stored, it can be called directly in this step. This description is for convenience only and does not limit the invention.
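A hedged sketch of step S2: since the patent's formula is not reproduced in the text, the following uses the standard KMP prefix-function construction with symbol equality replaced by the thresholded `is_similarity` test described above.

```python
def prefix_function(pattern, sim, thresh):
    """Failure array: pi[i] = length of the longest proper prefix of
    pattern[:i+1] that (approximately) matches its own suffix."""
    pi = [0] * len(pattern)
    for i in range(1, len(pattern)):
        k = pi[i - 1]
        while k > 0 and sim(pattern[i], pattern[k]) < thresh:
            k = pi[k - 1]
        if sim(pattern[i], pattern[k]) >= thresh:
            k += 1
        pi[i] = k
    return pi

def kmp_find(text, pattern, sim, thresh):
    """First index where pattern fuzzily occurs in text, else -1."""
    pi = prefix_function(pattern, sim, thresh)
    k = 0
    for i, c in enumerate(text):
        while k > 0 and sim(c, pattern[k]) < thresh:
            k = pi[k - 1]
        if sim(c, pattern[k]) >= thresh:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

# With an exact 0/1 similarity this reduces to classic KMP.
exact = lambda a, b: 1.0 if a == b else 0.0
```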
S3: and (2) inputting the two character strings with the similarity to be calculated obtained in the step (S2) into a similarity calculation function to obtain a specific similarity value.
The similarity calculation function is as follows:
wherein sim is the similarity of two coding sequences, and the value is [0,1 ]]The larger the value of sim is, the higher the similarity of two coding sequences is, and the smaller the value of sim is, the smaller the similarity of two coding sequences is; wi is the weight of each symbol in the code of the character string to be matched,n is the length of the code of the character string to be matched; p [ i ]]The code elements on the corresponding bits of the codes representing the two character strings to be matched are the same as 1 and are different as 0. Character encoding is a generic term for character shape encoding and character structure encoding as follows.
In the cognitive evaluation, a weight wi is set for each symbol in the code of the system correct text of each question and in the code of the collected text, according to the question's answer type (a voice answer mode, a typed-text answer mode, or a mode combining the two). The rules for setting the weights wi are as follows:
If the question uses the typed-text answer mode, shape code coding (mode one) is used accordingly. The shape code weights comprise the character shape code weight w1, the character structure code weight w2 and the four-corner code weight w3. The character shape code weight and the character structure code weight are equal, and both are smaller than the four-corner code weight. The sum of w1, w2 and w3 is 1.
If the question uses the voice answer mode, sound code coding (mode two) is used accordingly. In sound code coding, the initial weight w4, the final weight w5 and the rhyme-supplement weight w6 are equal, and each of the three is larger than the tone weight. The sum of the three is 1.
If the question uses the combined voice-and-text answer mode, hybrid coding is used accordingly. In hybrid coding, the shape code part and the sound code part carry equal weight, and their weights sum to 1; hence in the hybrid mode each shape code weight is 1/2 of its value in mode one, and each sound code weight is 1/2 of its value in mode two. That is, within the hybrid code the character shape code weight is 0.5·w1, the character structure code weight is 0.5·w2 and the four-corner code weight is 0.5·w3, while the initial weight is 0.5·w4, the final weight is 0.5·w5 and the rhyme-supplement weight is 0.5·w6. The internal weight relationships within the shape code part and within the sound code part are thus unchanged, and the total weight remains 1.
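The weighted similarity of step S3 with mode-one style weights can be sketched as follows. The concrete weight values (w1 = w2 = 0.1, w3 = 0.8) are illustrative assumptions that merely respect the stated ordering and unit sum:

```python
def weighted_similarity(code_a: str, code_b: str, weights) -> float:
    """sim = sum of w_i * p[i], where p[i] is 1 when the i-th symbols
    of the two codes agree and 0 otherwise; sim lies in [0, 1]."""
    assert len(code_a) == len(code_b) == len(weights)
    return sum(w for a, b, w in zip(code_a, code_b, weights) if a == b)

# Mode one (shape code): w1 = w2 = 0.1, four-corner weight w3 = 0.8
# spread evenly over its 5 symbols. Values are illustrative only.
SHAPE_WEIGHTS = [0.1, 0.1] + [0.8 / 5] * 5
```

With these weights a mismatch in the four-corner part costs more than a mismatch in the shape or structure symbol, matching the stated rule that w1 = w2 < w3.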
S4: and (3) comparing the similarity value obtained in the step (S3) with a preset similarity threshold value, and judging whether the acquired text is successfully matched with the correct text of the system.
If the similarity value obtained in the step S3 is greater than or equal to a preset similarity threshold, the matching is successful, and the system displays the matched text to the user; if the similarity value obtained in step S3 is smaller than the preset similarity threshold, the matching fails, and the system prompts the user to reenter and return to step S1.
The preset similarity threshold is determined according to the specific situation, and is usually obtained through a great deal of practical experience, which is not limited by the invention. Typically, a similarity threshold is obtained using cognitive evaluation constant modulus.
Because the similarity between the codes of the collected text and the correct system file (the pre-stored standard answer) is calculated in the step S3, if the similarity between the codes of the collected text and the correct system file is high, the collected text can be judged to be consistent with the standard answer (answer to questions); otherwise, the questions are not consistent (wrong questions are answered).
It should be noted that, in step S4, if the similarity value obtained in step S3 is greater than or equal to the preset similarity threshold, the matching is successful, and the process returns to step S1 to enter the next question until all questions are completed; if the similarity value obtained in step S3 is smaller than the preset similarity threshold, the matching fails, and the system prompts the user to re-input or return to step S1 to enter the next question. In this embodiment, returning to step S1 may be to instruct the user to answer the wrong question again until the answer is normal; or instruct the user to answer the next question until all questions are completed.
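Steps S1-S4 for a single question can be combined into a minimal end-to-end sketch. Here `encode()` is a placeholder for the shape/sound/hybrid coders, and uniform weighting stands in for the per-question weight rules:

```python
def encode(text: str) -> str:
    # Placeholder: a real coder would emit shape/sound/hybrid codes here.
    return text

def match(collected: str, correct: str, thresh: float = 0.8) -> bool:
    """Return True when the collected answer fuzzily matches the standard one."""
    a, b = encode(collected), encode(correct)
    if not a or not b:
        return False
    hits = sum(1 for x, y in zip(a, b) if x == y)
    sim = hits / max(len(a), len(b))  # uniform weights; penalizes length gaps
    return sim >= thresh
```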
Building on the hybrid coding method above, the cognitive assessment method based on intelligent guidance and algorithm analysis provided by the invention is introduced below. It comprises at least the following steps.
S10: Input patient information and establish a primary cognitive assessment task in the cognitive assessment client subsystem.
Specifically, this comprises the following substeps:
S101: Acquire the patient's personal information.
A professional logs in to the cognitive evaluation background management subsystem and enters the patient's personal information through the patient information management module. Specifically, the personal information includes the patient's name, sex, date of birth, contact information, educational level, cognitive symptoms and the like.
S102: Establish a primary cognitive assessment task based on the patient's personal information.
The evaluation task management module generates the corresponding scale evaluation tasks according to the patient's personal information and algorithmic recommendations or the patient's needs, adds a primary cognitive evaluation task for the patient, and pushes the task to the cognitive evaluation client subsystem.
S20: The patient logs in to the cognitive evaluation client subsystem to perform the cognitive evaluation, producing evaluation data.
The patient logs in to the cognitive evaluation client subsystem and, after receiving the cognitive evaluation task, selects an answer mode (for example, a familiar dialect) to answer the evaluation.
Before the questions are answered, the cognitive evaluation client subsystem assembles them from the scale's JSON information and presents them to the patient in forms such as audio recording, video recording, drawing, clicking and writing. During answering, the voice broadcast module announces each step by voice, the audio/video recording module records audio and/or video throughout, and the intelligent guidance module guides the patient to complete the answers independently. If the patient wants to pause midway, the global pause module can pause or resume the answering. The patient can draw or write with the drawing module and complete click operations with the special click module. When the patient answers a question, the client subsystem automatically collects the corresponding audio, video, drawing, click and writing data, stores them in the database as evaluation data, and pushes them to the cognitive evaluation algorithm model subsystem.
S30: The cognitive evaluation algorithm model subsystem performs data calculation on the evaluation data to obtain an evaluation score.
The cognitive evaluation algorithm model subsystem comprises a voice content analysis module (for analyzing voice content), a video action extraction module (for analyzing video content to obtain the image data of each frame) and an image recognition module (for analyzing drawn or written content). After receiving the drawing, audio, video, click and writing evaluation data submitted by the patient, the subsystem recognizes, analyzes and extracts the data according to the question information and the scoring rules, calculates the scores and updates them in the database; professionals can then review the scale answer scores during recheck in the cognitive evaluation background management subsystem.
Specifically, the data calculation comprises analyzing the voice content with the voice content analysis module, analyzing the video content with the video action extraction module, and analyzing the drawn or written content with the image recognition module. The analysis uses the fuzzy string-matching method described above. For example, the voice content or written content is recognized to obtain a text, the text is encoded, and the similarity between this code and the code of the preset system correct text is calculated. When the similarity reaches the preset threshold, the question is judged as scored; when it is below the threshold, the question is judged as not scored, thereby producing the evaluation score. The similarity thresholds of different questions in an evaluation task may differ or may be the same.
S301: the speech content is analyzed.
Since the nonstandard pronunciation can cause errors of homophones and near phones, and the errors can greatly reduce the matching accuracy of subsequent keywords, the problem is solved by adopting a hybrid coding mode, and as mentioned above, the hybrid coding is to splice the shape code codes and the sound code codes together. Therefore, unified coding of Chinese, digital and English characters can be realized by utilizing a hybrid coding mode.
Firstly, a voice recognition technology is adopted to recognize text information in voice data for voice contents obtained by recording in the answering process of a patient. Since this is prior art, it is not described in detail here.
The hybrid coding algorithm in this embodiment matches a more accurate answer in the model library according to the word form and pronunciation in the text, and corrects the result of voice recognition, so that the judgment of the answer of the patient is more accurate (more in line with the actual real answer of the patient).
S302: video action extraction
Human posture feature points and facial feature points are recognized with a deep learning model, the feature data of each frame of image are quantified from these feature points, and behavior feature matching is finally performed on the feature data of consecutive frames. The DDTW (Derivative Dynamic Time Warping) algorithm is used for the behavior feature matching. The feature sequence extracted above is in fact time-series data, and the main task is to compare one sequence with another to measure the similarity of the two time series. Assume two time series Q and C, of lengths n and m respectively, where:
Q = q1, q2, ..., qi, ..., qn (1)
C = c1, c2, ..., cj, ..., cm (2)
To align the two sequences, an n×m matrix grid is constructed in which the matrix element (i, j) represents the distance d(qi, cj) between the points qi and cj — the smaller the distance, the higher the similarity between the corresponding points of Q and C. The Euclidean distance is generally used. Each matrix element (i, j) thus represents an alignment of qi (any point of Q) and cj (any point of C). With this matrix, the distance between the two time series can be computed along a warping path W. Because the time axis can be "warped" in many different ways, many warping paths exist (W is expressed below), but the shortest warping path must be found to determine the "distance" — that is, the similarity — between the two sequences. The shortest warping path can be found efficiently with dynamic programming: the problem reduces to finding a path through the n×m grid whose grid points are the aligned point pairs of the two sequences.
W = w1, w2, ..., wk, ..., wK, with max(m, n) ≤ K < m + n − 1 (3)
The k-th element of W, wk = (i, j), defines a mapping between the two time series. The warping path is generally subject to the following constraints:
(1) Boundary condition: the warping path must start and end at the diagonal corner elements of the matrix, i.e. start at (1, 1) and end at (n, m);
(2) Continuity: the steps allowed in the warping path are restricted to adjacent cells (including diagonally adjacent cells);
(3) Monotonicity: the points in W must progress monotonically in time.
Combining the continuity and monotonicity constraints, the path can leave each grid point in only three directions: if the path has passed through grid point (i, j), the next grid point can only be (i+1, j), (i, j+1) or (i+1, j+1).
The DDTW algorithm characterizes the shape of the time-series data by the first derivative, the shape representing the directional trend of the data. To simplify the computation, a derivative estimate replaces the cumbersome exact first derivative:
Dx[q] = ((qi − qi−1) + (qi+1 − qi−1) / 2) / 2
wherein qi is any interior point of the sequence Q, representing one feature point, and Dx[q] is the estimated derivative at that point. As the formula shows, the estimate is simply the average of the slope of the line through the point and its left neighbor and the slope of the line through the left and right neighbors. Notably, this estimate is more robust to outliers than one based on only two data points. Note also that the formula is not applied to the first and last points; their values are taken from the second and penultimate points instead:
Dq[0] = Dq[1]
Dq[m] = Dq[m−1]
When recognizing the actions in the patient's video, the DDTW algorithm analyzes the video file frame by frame, so judgments of action speed and of how standard the action is are not affected. In addition, other recognition models are combined with the DDTW algorithm — for example, recognizing that the patient is not wearing a mask — to eliminate some effects of facial occlusion and make the judgment result more accurate.
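The DDTW matching described above can be sketched as follows. The derivative estimate follows the formula given (with copied endpoints), and a 1-D absolute difference stands in for the Euclidean point distance:

```python
def derivative(q):
    """Derivative estimate: average of the left-neighbor slope and half
    the left-to-right-neighbor slope; endpoints copy their neighbors."""
    d = [0.0] * len(q)
    for i in range(1, len(q) - 1):
        d[i] = ((q[i] - q[i - 1]) + (q[i + 1] - q[i - 1]) / 2) / 2
    d[0], d[-1] = d[1], d[-2]
    return d

def ddtw_distance(Q, C):
    """Accumulated cost of the cheapest warping path over the derivatives."""
    dq, dc = derivative(Q), derivative(C)
    n, m = len(dq), len(dc)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(dq[i - 1] - dc[j - 1])
            # moves (i-1,j), (i,j-1), (i-1,j-1): continuity + monotonicity
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A smaller distance means the two feature sequences have more similar shapes; in practice the distance would be thresholded to decide whether the observed action matches the reference action.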
S303: the image is identified.
The image recognition includes pattern recognition and character recognition.
(1) Pattern recognition: a deep learning algorithm recognizes the four patterns that appear in the hand-drawn images — connecting lines, the cube, the clock drawing and the intersecting pentagons. The sample data in the database are first given medical evidence-based labels, a convolutional neural network is then built and trained as the recognition model, and the trained model performs the recognition. The final model accuracy exceeds the scoring standard of professional raters.
(2) Character recognition: a handwritten Chinese character recognition algorithm recognizes the text handwritten by the patient, and grammatical and semantic analysis is performed on the text to judge whether the sentence meets the question's requirements.
S40: Calibrate and recheck the evaluation scores and generate the cognitive evaluation report.
The answer data of the scales in the evaluation task, together with the calculations and scores given by the algorithm model, can be viewed in the evaluation scale recheck module, and professionals can calibrate and recheck them in the cognitive evaluation background management subsystem. After the recheck is completed, an electronic cognitive evaluation report is generated and the evaluation result is fed back to the patient. Both the cognitive evaluation background management subsystem and the cognitive evaluation client subsystem can preview and print the evaluation report.
In summary, the cognitive assessment method and system based on intelligent guidance and algorithm analysis provided by the invention can intelligently guide a patient to complete the assessment task independently through synchronized voice broadcasting; the automatic multi-dialect switching function lets the patient select a familiar dialect for the automated assessment; and by separating the answering and rechecking functions of the cognitive evaluation scales, professionals can review the evaluation data in the cognitive evaluation background management subsystem, achieving one-to-many doctor-patient evaluation or evaluation without professional participation. This avoids scoring differences caused by differences in the raters' knowledge and expertise, making the evaluation results more objective, accurate and uniform.
The method and system can also compensate for the low cognitive ability or limited sustained attention of patients with cognitive disorders and improve patients' compliance in completing the test, avoiding situations where a patient refuses the evaluation, cannot finish it, or cannot persist to its completion. Moreover, with the cognitive assessment method and system provided by the invention, remote assessment allows one doctor to assess several patients simultaneously, reducing the labor cost of the assessment process and facilitating the popularization of cognitive health services.
The cognitive evaluation method and system based on intelligent guidance and algorithm analysis provided by the invention have been described in detail above. Any obvious modification of the invention that does not depart from its spirit will constitute an infringement of the patent right of the invention and will bear the corresponding legal liability.

Claims (10)

1. A cognitive assessment method based on intelligent guidance and algorithm analysis, characterized by comprising the following steps:
S10: inputting patient information and establishing a primary cognitive assessment task in a cognitive assessment client subsystem;
S20: performing cognitive evaluation to obtain evaluation data of a patient;
S30: carrying out data calculation by a cognitive evaluation algorithm model subsystem according to the evaluation data to obtain evaluation scores;
S40: calibrating and rechecking the evaluation scores to generate a cognitive evaluation report;
wherein in step S30 the data calculation comprises analyzing voice content and analyzing written content: the voice content or the written content is recognized to obtain a text, the text is encoded, the similarity between the code and the code of a preset system correct text is calculated, and the question is judged as scored if the similarity reaches a preset similarity threshold; if the similarity is below the similarity threshold, the question is judged as not scored, thereby obtaining the evaluation scores.
2. The cognitive assessment method of claim 1, wherein:
the coding comprises shape code coding, sound code coding and mixed coding; wherein the hybrid code is a combination of a shape code and a sound code.
3. The cognitive assessment method of claim 2, wherein:
the shape code codes comprise character shape codes, character structure codes and four-corner codes.
4. The cognitive assessment method of claim 3, wherein:
the character shape codes are used for classifying the characters into a plurality of groups according to the shapes of the characters, and the character shape codes of the characters in each group are identical;
the character structure codes are classified into a plurality of groups according to the constitution structure of the characters, and the character structure codes of the characters in each group are the same.
5. The cognitive assessment method of claim 2, wherein the coding rule of the sound code is:
converting the Chinese character into pinyin, splitting the pinyin into an initial and a final after the conversion, and generating, according to the corresponding coding tables, the coding combination of initial, final, rhyme supplement and tone.
6. The cognitive assessment method of claim 1, wherein said data calculation further comprises:
and recognizing human body posture feature points and human face feature points through a deep learning model, quantifying feature data of each frame of image data according to the feature points, and finally performing behavior feature matching according to the feature data of continuous frames.
7. The cognitive assessment method of claim 6, wherein said data calculation further comprises:
adopting a deep learning algorithm to identify four patterns of connecting lines, cubes, drawing clocks and cross pentagons which appear in a hand-drawn image, wherein the identification comprises the steps of carrying out medical evidence-based labels on sample data in a database, then building a convolutional neural network training identification model, and utilizing the identification model to identify.
8. The cognitive evaluation system based on intelligent guidance and algorithm analysis is characterized by comprising a cognitive evaluation client subsystem, a cognitive evaluation algorithm model subsystem and a cognitive evaluation background management subsystem, and is used for realizing the cognitive evaluation method based on intelligent guidance and algorithm analysis according to any one of claims 1-7.
9. The cognitive assessment system of claim 8, wherein:
the cognitive evaluation algorithm model subsystem comprises a voice content analysis module, a video action extraction module and an image recognition module;
the voice content analysis module analyzes voice to obtain text, the image recognition module converts handwritten text into text, and the video action extraction module is used for analyzing video content to obtain image data of each frame.
10. The cognitive assessment system of claim 9, wherein:
the cognitive evaluation client subsystem comprises a patient login module, a dialect switching module, an intelligent guidance module, a voice broadcast module, an audio/video recording module, a drawing module, a special click module and a global pause module.
CN202310995715.4A 2023-08-09 2023-08-09 Cognitive assessment method and system based on intelligent guidance and algorithm analysis Pending CN117133456A (en)


Publications (1)

Publication Number Publication Date
CN117133456A true CN117133456A (en) 2023-11-28

Family

ID=88861969



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination