CN114387678A - Method and apparatus for evaluating language reading ability using non-verbal body symbols - Google Patents

Method and apparatus for evaluating language reading ability using non-verbal body symbols

Info

Publication number
CN114387678A
CN114387678A (application CN202210030279.2A)
Authority
CN
China
Prior art keywords
target object
symbols
worded
specific characteristics
body symbols
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210030279.2A
Other languages
Chinese (zh)
Inventor
王彤
张硕
周纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingyun Meijia Xi'an Intelligent Technology Co ltd
Original Assignee
Lingyun Meijia Xi'an Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingyun Meijia Xi'an Intelligent Technology Co ltd
Priority to CN202210030279.2A
Publication of CN114387678A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/02055 Simultaneously evaluating both cardiovascular condition and temperature
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1072 Measuring physical dimensions, measuring distances on the body, e.g. measuring length, height or thickness
    • A61B5/1077 Measuring of profiles
    • A61B5/1079 Measuring physical dimensions using optical or photographic means
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A61B5/1118 Determining activity level
    • A61B5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14542 Measuring characteristics of blood in vivo for measuring blood gases
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/75 Clustering; Classification
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Cardiology (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Pulmonology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Optics & Photonics (AREA)
  • Vascular Medicine (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)

Abstract

The invention provides a method and a device for evaluating language reading ability using non-verbal body symbols. The method comprises: acquiring video data of the non-verbal body symbols of a target object with a multi-camera machine vision system, converting the video data into three-dimensional data files, and storing them in classified form; identifying the specific features of the target object's non-verbal body symbols, and classifying and labeling those features; inputting the features into a data-mapping rule model for classification, deconstruction and analysis; and mapping the result against the non-verbal body-symbol evaluation standard to obtain the target object's language reading ability and expression ability. The beneficial effect of the invention is that children's language reading ability and expression ability can be analyzed and evaluated from a behavioral perspective, thereby establishing an early behavioral screening standard for children's language reading disorders.

Description

Method and apparatus for evaluating language reading ability using non-verbal body symbols
Technical Field
The invention relates to the technical field of visual recognition, and in particular to a method and a device for evaluating language reading ability using non-verbal body symbols.
Background
In the narrow sense, language is the natural language that humans express after conscious processing by the brain; in the broad sense, it is a set of communication instructions shared by members of a species under a common processing rule and transmitted visually, acoustically or through touch. Numerous studies have found that over 80% of the information in everyday person-to-person communication is conveyed through non-verbal body symbols, including eye contact and facial expressions, body movements and touch, posture and appearance, interpersonal spatial distance, and the like.
Non-verbal body symbols arise from a person's body movements and can themselves convey a great deal of information. For example, posture is body language in which an individual uses the body or limbs to express a certain emotion or attitude; the eyes reflect a person's mood, attitude and emotional changes; the face accurately conveys different mental states and emotions through the movement of dozens of muscles; and physical touch is often used to express strong emotions. Non-verbal body symbols are an essential means of communication for deaf-mute people, and this mode of information transfer is easier to understand and recognize. As research has deepened, non-verbal body symbols have shown important application value in lie detection, interrogation, performance and teaching. Studies show that children with language disorders deviate from the norm in understanding, expressing and applying language; because they cannot state things clearly, they prefer to express their meaning with gestures or with speech that violates ordinary syntax, and may show behavioral characteristics such as anxiety, withdrawal, short attention span, hyperactivity or stuttering. Neurological research based on the cerebellar-deficit theory has found that children with reading disorders display various motor deficits, such as overall clumsiness and poor balance and coordination. Autistic children develop a number of specific non-verbal body-symbol characteristics that can serve as indicators for early warning and diagnosis of autism.
In education and teaching, the effective use of non-verbal body symbols makes classroom knowledge more vivid to convey and easier for students to understand and express; at the same time, non-verbal body symbols can serve as an effective auxiliary tool for teachers to capture and analyze students' learning states and learning abilities.
A search of the prior art literature using "non-verbal body symbol" as a keyword returned no reports identical or similar to the subject of this invention. When "body language" is substituted as the search term, the Nanjing Artificial Intelligence Chip Innovation Institute of the Institute of Automation, Chinese Academy of Sciences, proposed a deep-learning-based body-language detection and behavior analysis method and system: behavior video, pictures and speech of the target are extracted; pictures and video of the target's facial expressions and limb behavior under different moods, together with the corresponding speech data, enrich the data set, which is labeled to produce a training set. "A human body language identification method and system" of Guangzhou Vision Wind Science and Technology Co., Ltd. discloses a system that identifies and marks characteristic body parts with a keypoint detector, helps a body-tracking algorithm learn how each posture appears from different angles, and raises a sound or SMS alarm for persons whose body language is identified as suspicious, so as to identify suspicious people in time and prevent crime. "A human body language identification method and system" of Guangzhou Fangtou Digital Creative Technology Co., Ltd. constructs a virtual-world environment, obtains the body feature points of a human body with a Kinect camera, and builds a body-semantic collection from those feature points.
The above documents only construct human body-language identification methods; many alternatives exist for the acquisition and analysis of body-language data, and the documents do not address applied research in specific scenarios.
Ford Global Technologies' "predicting vehicle motion based on driver body language" studies predicting the future motion of a vehicle from the driver's body language detected by a body-language component. "A method for identifying depression and suicidal tendency fusing body language, micro-expression and language" of a southern China university, together with "a suicide risk assessment method based on body language", "a hostage-taking risk assessment method based on deep learning of body language" and "a prison-break intention assessment method based on body language", collect body-behavior data, micro-expression data and the like with an infrared-equipped Kinect camera, convert the information into textual feature descriptions, classify, label and describe the data, and finally perform suicide risk assessment, hostage-taking risk assessment, prison-break intention assessment and the like according to the classification labels and descriptions. These documents construct human body-language identification methods and study practical scenarios such as body-language collection and analysis, driver and vehicle prediction, suicide risk assessment, hostage-taking risk assessment and prison-break intention assessment.
Using "language ability" as the search keyword, the prior art includes the following. Zhejiang High-Quality Science and Technology Co., Ltd. compiled a PYTHON crawler to obtain the required audio, question-and-answer corpora and encyclopedic knowledge, merged them with the special-scenario corpora accumulated in project practice into a language database for analysis, and evaluated the leap degree, language logic, expressive initiative, politeness and so on of a child's speech. "A language ability evaluation method, device, system, computer equipment and storage medium" of a Beijing technology company collects voice data of a user practicing a target language, analyzes the pronunciation accuracy of each word, and finally calculates the pronunciation accuracy of the voice data from the per-word accuracies. Zhongshan University's "a language ability disorder grading system and an implementation method thereof" discloses a grading system comprising a testing module, a timing module and a grading evaluation module; the speech or writing of the test subject is recognized as a textual answer so that objective testing can be carried out. "A language ability testing method and system" of Shanghai Hanzi Information Technology Co., Ltd. analyzes the answering behavior and answering results of a tester and judges the tester's final language ability level from them.
The above documents mostly perform language-ability evaluation on text (characters, test questions and the like) or audio (voice data) and do not involve non-verbal body-symbol data.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method and a device for evaluating language reading ability using non-verbal body symbols, so as to analyze children's language reading ability and expression ability from a behavioral perspective.
To solve the above technical problem, the invention adopts the following technical scheme: a method for evaluating language reading ability using non-verbal body symbols, comprising:
acquiring video data of the non-verbal body symbols of a target object with a multi-camera machine vision system, converting the video data into three-dimensional data files, and storing them in classified form;
identifying the specific features of the target object's non-verbal body symbols, and classifying and labeling those features;
inputting the specific features of the target object's non-verbal body symbols into a data-mapping rule model for classification, deconstruction and analysis; and
after the specific features of the target object's non-verbal body symbols have been classified, deconstructed and analyzed, mapping the result against the non-verbal body-symbol evaluation standard to obtain the target object's language reading ability and expression ability.
Further, acquiring the video data of the non-verbal body symbols of the target object with a multi-camera machine vision system, converting it into a three-dimensional data file and storing it in classified form specifically comprises:
acquiring video data of the target object's non-verbal body symbols from different angles through a multi-position image acquisition system; and
fusing and reconstructing the planar video data of the different angles by machine vision to obtain three-dimensional data of the target object.
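The fusion of planar views into three-dimensional data can be illustrated with a minimal sketch of linear (DLT) triangulation: given the projection matrices of two or more calibrated cameras and the pixel coordinates of the same body keypoint in each view, the keypoint's 3D position is recovered. This is an illustrative stand-in under the assumption of calibrated cameras; the patent does not specify a particular reconstruction algorithm, and the function name is hypothetical.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one body keypoint observed
    from several camera positions.

    proj_mats  : list of 3x4 camera projection matrices
    points_2d  : list of (x, y) pixel observations, one per camera
    returns    : 3D point in world coordinates
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: x*(P row 3) - (P row 1) = 0, etc.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least squares: the solution is the right singular
    # vector associated with the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With noise-free observations from two views, the true point is recovered exactly (up to floating-point precision); with real, noisy multi-camera data the same formulation gives the least-squares estimate.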
Further, identifying the specific features of the target object's non-verbal body symbols and classifying and labeling them specifically comprises:
sending the three-dimensional data of the target object to an edge acquisition device and roughly extracting the specific features of the non-verbal body symbols;
sending the roughly extracted features to a cloud server for fine processing; and
classifying and labeling the refined features of the target object's non-verbal body symbols.
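The edge/cloud split described above can be sketched as follows. The motion criterion, the class labels and the reduction of each frame to a single motion value are hypothetical simplifications chosen for illustration; the patent does not disclose the actual extraction or labeling rules.

```python
from dataclasses import dataclass

@dataclass
class SymbolFeature:
    name: str      # e.g. "limb_swing_amplitude" (hypothetical feature name)
    values: list   # per-frame measurements derived from the 3D data
    label: str = ""  # category attached after cloud-side refinement

def edge_coarse_extract(frames, motion_threshold=0.05):
    """Edge-side rough pass: each frame is reduced to one scalar
    motion value, and only frames whose change from the previous
    frame exceeds a threshold are kept (illustrative criterion)."""
    kept = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if abs(cur - prev) > motion_threshold:
            kept.append(cur)
    return kept

def cloud_refine_and_label(features):
    """Cloud-side pass: aggregate each feature and attach a class
    label (placeholder rule, for illustration only)."""
    for f in features:
        mean = sum(f.values) / len(f.values)
        f.label = "high_activity" if mean > 0.5 else "low_activity"
    return features
```

The point of the split is bandwidth: the edge device discards near-static frames before upload, and the cloud server runs the heavier aggregation and labeling.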
Further, the specific features of the target object's non-verbal body symbols comprise physiological function features, body shape features and human posture features.
Further, the physiological function features comprise respiration, heartbeat, blood pressure and blood-oxygen saturation;
the body shape features comprise height, sitting height, body girth and body symmetry; and
the human posture features comprise upper-limb coordination, lower-limb coordination, degree of action extension, limb swing amplitude, movement duration and movement count.
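The three feature groups enumerated above map naturally onto a structured record. A possible Python representation is sketched below; the field names, units and types are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalFeatures:
    respiration_rate: float   # breaths per minute
    heart_rate: float         # beats per minute
    blood_pressure: tuple     # (systolic, diastolic), mmHg
    spo2: float               # blood-oxygen saturation, percent

@dataclass
class BodyShapeFeatures:
    height_cm: float
    sitting_height_cm: float
    girth_cm: float
    symmetry_index: float     # 1.0 = perfectly symmetric (assumed scale)

@dataclass
class PostureFeatures:
    upper_limb_coordination: float
    lower_limb_coordination: float
    action_extension: float
    swing_amplitude: float
    movement_duration_s: float
    movement_count: int

@dataclass
class NonVerbalSymbolFeatures:
    """One labeled observation of a target object's non-verbal
    body symbols, grouped as in the patent's three categories."""
    physiological: PhysiologicalFeatures
    body_shape: BodyShapeFeatures
    posture: PostureFeatures
```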
The invention also provides a device for evaluating language reading ability using non-verbal body symbols, comprising:
a data acquisition module for acquiring video data of the non-verbal body symbols of a target object with a multi-camera machine vision system, converting the video data into a three-dimensional data file and storing it in classified form;
a data identification module for identifying the specific features of the target object's non-verbal body symbols and classifying and labeling those features;
a data mapping module for inputting the specific features of the target object's non-verbal body symbols into the data-mapping rule model for classification, deconstruction and analysis; and
a function evaluation module for mapping the classified, deconstructed and analyzed features against the non-verbal body-symbol evaluation standard to obtain the target object's language reading ability and expression ability.
Further, the data acquisition module specifically comprises:
a multi-angle video acquisition unit for acquiring video data of the target object's non-verbal body symbols from different angles through a multi-position image acquisition system; and
a three-dimensional data modeling unit for fusing and reconstructing the planar video data of the different angles by machine vision to obtain three-dimensional data of the target object.
Further, the data identification module specifically comprises:
a rough feature extraction unit for sending the three-dimensional data of the target object to the edge acquisition device and roughly extracting the specific features of the non-verbal body symbols;
a fine feature processing unit for sending the roughly extracted features to the cloud server for fine processing; and
a feature classification unit for classifying and labeling the refined features of the target object's non-verbal body symbols.
The invention also provides a computer device comprising a memory and a processor, the memory storing a computer program; when the processor executes the program, the method for evaluating language reading ability using non-verbal body symbols described in any of the above is implemented.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the method for evaluating language reading ability using non-verbal body symbols described in any of the above.
The beneficial effects of the invention are as follows: the non-verbal body symbols of a target object are collected from multiple angles with a multi-camera machine vision system; the specific features of those symbols are identified, classified and labeled; the features are classified, deconstructed and analyzed with a data-mapping rule model; and the result is finally mapped against the non-verbal body-symbol evaluation standard to obtain the target object's language reading ability and expression ability. Children's language reading ability and expression ability can thus be analyzed and evaluated from a behavioral perspective, establishing an early behavioral screening standard for children's language reading disorders.
Drawings
The following detailed description of the invention refers to the accompanying drawings.
FIG. 1 is a flow chart of a method of evaluating language reading ability using non-verbal body symbols according to an embodiment of the present invention;
FIG. 2 is a flow chart of data collection according to an embodiment of the present invention;
FIG. 3 is a flow chart of data identification according to an embodiment of the present invention;
FIG. 4 is a block diagram of a device for evaluating language reading ability using non-verbal body symbols according to an embodiment of the present invention;
FIG. 5 is a block diagram of a data acquisition module according to an embodiment of the present invention;
FIG. 6 is a block diagram of a data identification module according to an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a computer apparatus of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As shown in fig. 1, a first embodiment of the present invention is a method for evaluating language reading ability using non-verbal body symbols, comprising,
S10, acquiring video data of the non-verbal body symbols of a target object by multi-view machine vision, converting the video data into a three-dimensional data file and storing it in a classified manner;
S20, identifying the specific characteristics of the non-verbal body symbols of the target object, and classifying and labeling those characteristics;
S30, inputting the specific characteristics of the non-verbal body symbols of the target object into a data mapping rule model for classification, deconstruction and analysis;
In this step, the data mapping rule model is a preliminary model established by a correlation analysis algorithm from the correlation between non-verbal body symbols and language reading and expression ability. By enlarging the sample size, the characteristics of the non-verbal body symbols are classified, deconstructed and analyzed at a finer granularity, and the accuracy of the correlation model between non-verbal body symbols and language reading and expression ability is optimized.
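A preliminary correlation-analysis model of this kind can be sketched as computing, over a sample of subjects, the Pearson correlation between each body-symbol feature and a measured reading/expression score. The feature matrix and scores below are synthetic illustrations, not data or an algorithm specified by the patent:

```python
import numpy as np

def build_mapping_model(features: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Return the Pearson correlation of each feature column with the score.

    features: (n_samples, n_features) matrix of body-symbol features.
    scores:   (n_samples,) reading/expression ability scores.
    """
    f = (features - features.mean(axis=0)) / features.std(axis=0)
    s = (scores - scores.mean()) / scores.std()
    return f.T @ s / len(s)  # Pearson r per feature column

# Toy data: feature 0 tracks the score closely, feature 1 is pure noise.
rng = np.random.default_rng(0)
scores = rng.normal(size=200)
features = np.column_stack([scores + 0.1 * rng.normal(size=200),
                            rng.normal(size=200)])
r = build_mapping_model(features, scores)
```

Enlarging the sample, as the paragraph above describes, shrinks the sampling error of each correlation estimate, which is what "optimizing the accuracy" of the preliminary model amounts to in this sketch.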
S40, after the specific characteristics of the non-verbal body symbols of the target object have been classified, deconstructed and analyzed, mapping them to the non-verbal body symbol evaluation standard to obtain the language reading and expression ability of the target object.
As shown in fig. 2, step S10, in which video data of the non-verbal body symbols of the target object is acquired by multi-view machine vision, converted into a three-dimensional data file and stored in a classified manner, specifically comprises,
S11, acquiring video data of the non-verbal body symbols of the target object from different angles through an image acquisition system with multiple camera positions;
S12, fusing and reconstructing the planar video data of the non-verbal body symbols captured at the different angles by machine vision to obtain three-dimensional data of the target object.
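Fusing planar views into three-dimensional data rests on standard multi-view geometry. A minimal sketch of direct-linear-transform (DLT) triangulation of one body keypoint from two calibrated cameras follows; the projection matrices and the 3D point are illustrative assumptions, not calibration data from the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two pixel observations x1, x2
    under 3x4 projection matrices P1, P2 (DLT method)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]               # dehomogenize

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two illustrative cameras: a reference view and a view shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 3.0])    # a synthetic body keypoint
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the reconstruction is exact; with real multi-camera footage, more views and calibrated lens models are added in the same framework.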
As shown in fig. 3, step S20, identifying the specific characteristics of the non-verbal body symbols of the target object and classifying and labeling them, specifically comprises,
S21, sending the three-dimensional data of the target object to an edge acquisition device for coarse extraction of the specific characteristics of the non-verbal body symbols;
S22, sending the coarsely extracted characteristics of the non-verbal body symbols of the target object to a cloud server for refinement;
S23, classifying and labeling the refined characteristics of the non-verbal body symbols of the target object.
Considering that existing artificial intelligence algorithms are based on deep neural networks, demand substantial computing power and cannot run in real time in open scenes, the method first builds an edge-cloud collaborative framework that divides the processing of non-verbal body symbol data between low-compute edge acquisition devices and a high-compute cloud server; coarse extraction at the edge and refinement in the cloud achieve a reasonable distribution and full utilization of computing power.
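The coarse-at-the-edge, fine-in-the-cloud division of labor can be sketched as a two-stage pipeline. Frame subsampling as the edge stage and statistical aggregation as the cloud stage are illustrative stand-ins for the extraction and refinement algorithms, which the patent does not specify:

```python
import numpy as np

def edge_coarse_extract(frames: np.ndarray, stride: int = 4) -> np.ndarray:
    """Low-compute edge stage: keep every `stride`-th frame and
    reduce each kept frame to a small feature (here, its mean)."""
    kept = frames[::stride]
    return kept.reshape(len(kept), -1).mean(axis=1, keepdims=True)

def cloud_refine(coarse: np.ndarray) -> dict:
    """High-compute cloud stage: aggregate the coarse features
    into statistics ready for classification and labeling."""
    return {"mean": float(coarse.mean()), "std": float(coarse.std())}

# Toy video: 16 frames of 8x8 "pixels", frame k filled with the value k.
frames = np.ones((16, 8, 8)) * np.arange(16)[:, None, None]
coarse = edge_coarse_extract(frames)   # only 4 small vectors leave the edge
summary = cloud_refine(coarse)
```

The design point is that only `coarse` (a few numbers per window) crosses the network to the cloud, rather than raw video, which is what makes real-time operation on a low-compute edge device plausible.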
The specific characteristics of the non-verbal body symbols of the target object comprise physiological function characteristics, body form characteristics and human posture characteristics.
The physiological function characteristics comprise respiration, heartbeat, blood pressure and blood oxygen saturation; these data are collected dynamically and continuously to track the physiological condition of children and adolescents in motion scenes.
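For heart rate, dynamic tracking in a motion scene reduces to counting pulse-waveform peaks per unit time. A minimal sketch on a synthetic 1.2 Hz pulse follows; the sampling rate, the sinusoidal signal model and the mean-threshold peak rule are illustrative assumptions:

```python
import numpy as np

def heart_rate_bpm(signal: np.ndarray, fs: float) -> float:
    """Estimate beats per minute by counting local maxima that
    rise above the signal mean (simple peak detection)."""
    above = signal > signal.mean()
    peaks = 0
    for i in range(1, len(signal) - 1):
        if above[i] and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks += 1
    duration_s = len(signal) / fs
    return 60.0 * peaks / duration_s

fs = 100.0                            # 100 Hz sampling, illustrative
t = np.arange(0, 10, 1 / fs)          # 10-second analysis window
pulse = np.sin(2 * np.pi * 1.2 * t)   # 1.2 Hz synthetic pulse ≈ 72 bpm
bpm = heart_rate_bpm(pulse, fs)
```

Real photoplethysmography or chest-motion signals would first need band-pass filtering; the counting step itself is unchanged.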
The body form characteristics comprise height, sitting height, body girth and body symmetry.
The human posture characteristics comprise upper limb coordination, lower limb coordination, motion extension, limb swing amplitude, motion duration, motion count, eye movement control and blink count. The relevant human kinematic and dynamic parameters are extracted and analyzed by a real-time human posture evaluation algorithm.
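Limb swing amplitude and motion counting can be sketched on a single joint-coordinate time series from a pose estimator. The 1 Hz wrist trace and the zero-crossing counting rule below are illustrative assumptions, not the patent's posture-evaluation algorithm:

```python
import numpy as np

def swing_metrics(x: np.ndarray) -> tuple:
    """Return (amplitude, cycles) for one joint-coordinate trace.

    Amplitude is half the peak-to-peak range; one cycle (one full
    swing) corresponds to two sign changes of the centered trace."""
    centered = x - x.mean()
    amplitude = float(x.max() - x.min()) / 2.0
    sign_changes = int(np.sum(np.diff(np.sign(centered)) != 0))
    return amplitude, sign_changes // 2

# Synthetic wrist x-coordinate: 0.3 m swings at 1 Hz for 4 s at 100 fps.
t = np.linspace(0, 4, 400, endpoint=False)
wrist_x = 0.3 * np.sin(2 * np.pi * t + 0.1)  # small phase offset
amp, cycles = swing_metrics(wrist_x)
```

The same per-trace metrics, applied to left and right limbs and compared, give simple proxies for the coordination and symmetry characteristics listed above.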
The embodiment of the invention has the following beneficial effects: non-verbal body symbols of a target object are collected from multiple angles by multi-view machine vision; their specific characteristics are identified, classified and labeled; the characteristics are then classified, deconstructed and analyzed by a data mapping rule model; and finally the non-verbal body symbol evaluation standard is mapped to obtain the language reading and expression ability of the target object. The language reading and expression ability of children can thus be analyzed and evaluated from a behavioral perspective, establishing a screening standard for the early behaviors of reading disorders in children.
As shown in fig. 4, a second embodiment of the present invention is an apparatus for evaluating language reading ability using non-verbal body symbols, comprising,
the data acquisition module 10, used for acquiring video data of the non-verbal body symbols of the target object by multi-view machine vision, converting the video data into a three-dimensional data file and storing it in a classified manner;
the data identification module 20, used for identifying the specific characteristics of the non-verbal body symbols of the target object and classifying and labeling those characteristics;
the data mapping module 30, used for inputting the specific characteristics of the non-verbal body symbols of the target object into the data mapping rule model for classification, deconstruction and analysis;
and the function evaluation module 40, used for mapping the non-verbal body symbol evaluation standard, after the specific characteristics of the non-verbal body symbols of the target object have been classified, deconstructed and analyzed, to obtain the language reading and expression ability of the target object.
As shown in fig. 5, the data acquisition module 10 specifically includes,
the multi-angle video acquisition unit 11, used for acquiring video data of the non-verbal body symbols of the target object from different angles through an image acquisition system with multiple camera positions;
and the three-dimensional data modeling unit 12, configured to fuse and reconstruct the planar video data of the non-verbal body symbols captured at the different angles by machine vision to obtain three-dimensional data of the target object.
As shown in fig. 6, the data identification module 20 specifically includes,
the feature coarse extraction unit 21, used for sending the three-dimensional data of the target object to the edge acquisition device and coarsely extracting the specific characteristics of the non-verbal body symbols of the target object;
the feature refinement unit 22, used for sending the coarsely extracted characteristics of the non-verbal body symbols of the target object to the cloud server for refinement;
and the feature classification unit 23, used for classifying and labeling the refined characteristics of the non-verbal body symbols of the target object.
It should be noted that, as will be clear to those skilled in the art, the specific implementation of the above apparatus and units for evaluating language reading ability using non-verbal body symbols may refer to the corresponding description in the foregoing method embodiment; for convenience and brevity, it is not repeated here.
The above apparatus for assessing language reading ability using non-verbal body symbols may be implemented in the form of a computer program executable on a computer device as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device. The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 7, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform the method of assessing language reading ability using non-verbal body symbols.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when executed by the processor 502, the computer program 5032 causes the processor 502 to perform the method of assessing language reading ability using non-verbal body symbols.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 7 is a block diagram of only the portion of the configuration relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 502 is adapted to run the computer program 5032 stored in the memory to implement the method of assessing language reading ability using non-verbal body symbols described above.
It should be understood that, in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor or any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the above method embodiments may be implemented by a computer program instructing the associated hardware. The computer program comprises program instructions and may be stored in a computer-readable storage medium; when the program instructions are executed by at least one processor in the computer system, the steps of the method embodiments are carried out.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of assessing language reading ability using non-verbal body symbols described above.
The storage medium may be a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or any other computer-readable medium capable of storing program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to illustrate the interchangeability of hardware and software clearly, the components and steps have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be regarded as departing from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in practice; units or components may be combined or integrated into another system, and some features may be omitted or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes over the prior art, may be embodied in whole or in part in the form of a software product; the software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for assessing language reading ability using non-verbal body symbols, characterized by comprising:
acquiring video data of the non-verbal body symbols of a target object by multi-view machine vision, converting the video data into a three-dimensional data file and storing it in a classified manner;
identifying the specific characteristics of the non-verbal body symbols of the target object, and classifying and labeling those characteristics;
inputting the specific characteristics of the non-verbal body symbols of the target object into a data mapping rule model for classification, deconstruction and analysis;
and after the specific characteristics of the non-verbal body symbols of the target object have been classified, deconstructed and analyzed, mapping them to the non-verbal body symbol evaluation standard to obtain the language reading and expression ability of the target object.
2. The method for assessing language reading ability using non-verbal body symbols according to claim 1, characterized in that acquiring video data of the non-verbal body symbols of the target object by multi-view machine vision, converting the video data into a three-dimensional data file and storing it in a classified manner specifically comprises:
acquiring video data of the non-verbal body symbols of the target object from different angles through an image acquisition system with multiple camera positions;
and fusing and reconstructing the planar video data of the non-verbal body symbols captured at the different angles by machine vision to obtain three-dimensional data of the target object.
3. The method for assessing language reading ability using non-verbal body symbols according to claim 2, characterized in that identifying the specific characteristics of the non-verbal body symbols of the target object and classifying and labeling those characteristics specifically comprises:
sending the three-dimensional data of the target object to an edge acquisition device for coarse extraction of the specific characteristics of the non-verbal body symbols;
sending the coarsely extracted characteristics of the non-verbal body symbols of the target object to a cloud server for refinement;
and classifying and labeling the refined characteristics of the non-verbal body symbols of the target object.
4. The method for assessing language reading ability using non-verbal body symbols according to claim 3, characterized in that the specific characteristics of the non-verbal body symbols of the target object comprise physiological function characteristics, body form characteristics and human posture characteristics.
5. The method for assessing language reading ability using non-verbal body symbols according to claim 4, characterized in that the physiological function characteristics comprise respiration, heartbeat, blood pressure and blood oxygen saturation;
the body form characteristics comprise height, sitting height, body girth and body symmetry;
the human posture characteristics comprise upper limb coordination, lower limb coordination, motion extension, limb swing amplitude, motion duration and motion count.
6. An apparatus for assessing language reading ability using non-verbal body symbols, characterized by comprising:
the data acquisition module, used for acquiring video data of the non-verbal body symbols of a target object by multi-view machine vision, converting the video data into a three-dimensional data file and storing it in a classified manner;
the data identification module, used for identifying the specific characteristics of the non-verbal body symbols of the target object and classifying and labeling those characteristics;
the data mapping module, used for inputting the specific characteristics of the non-verbal body symbols of the target object into the data mapping rule model for classification, deconstruction and analysis;
and the function evaluation module, used for mapping the non-verbal body symbol evaluation standard, after the specific characteristics of the non-verbal body symbols of the target object have been classified, deconstructed and analyzed, to obtain the language reading and expression ability of the target object.
7. The apparatus for assessing language reading ability using non-verbal body symbols according to claim 6, characterized in that the data acquisition module specifically comprises:
the multi-angle video acquisition unit, used for acquiring video data of the non-verbal body symbols of the target object from different angles through an image acquisition system with multiple camera positions;
and the three-dimensional data modeling unit, used for fusing and reconstructing the planar video data of the non-verbal body symbols captured at the different angles by machine vision to obtain three-dimensional data of the target object.
8. The apparatus for assessing language reading ability using non-verbal body symbols according to claim 7, characterized in that the data identification module specifically comprises:
the feature coarse extraction unit, used for sending the three-dimensional data of the target object to the edge acquisition device and coarsely extracting the specific characteristics of the non-verbal body symbols of the target object;
the feature refinement unit, used for sending the coarsely extracted characteristics of the non-verbal body symbols of the target object to the cloud server for refinement;
and the feature classification unit, used for classifying and labeling the refined characteristics of the non-verbal body symbols of the target object.
9. A computer device, characterized in that the computer device comprises a memory having stored thereon a computer program which, when executed by a processor, implements the method for assessing language reading ability using non-verbal body symbols according to any of claims 1 to 5.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method for assessing language reading ability using non-verbal body symbols according to any of claims 1 to 5.
Publication of CN114387678A: 2022-04-22.
