CN115601575B - Method and system for assisting persons with aphasia and agraphia in expressing common phrases - Google Patents

Method and system for assisting persons with aphasia and agraphia in expressing common phrases

Info

Publication number
CN115601575B
CN115601575B (application CN202211307489.8A)
Authority
CN
China
Prior art keywords
template
movable part
matching
determining
selecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211307489.8A
Other languages
Chinese (zh)
Other versions
CN115601575A (en)
Inventor
姜静 (Jiang Jing)
王磊 (Wang Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangzhou Polytechnic College
Original Assignee
Yangzhou Polytechnic College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangzhou Polytechnic College
Priority to CN202211307489.8A
Publication of CN115601575A
Application granted
Publication of CN115601575B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a system for assisting persons with aphasia and agraphia in expressing common phrases, and relates to the technical field of input methods. The method comprises the following steps: determining a movable body part of the user; preliminarily establishing a correspondence between the movable part and on-screen keys; selecting a modeling mode according to the movable part and determining the part's direction of movement; and finally determining the correspondence between the movable part and the keys according to the direction of movement, completing instruction input. By applying information technology and optimizing the input method used to express common phrases, the invention enables a person who has lost the ability to speak and write to express simple ideas or instructions smoothly, improving both the communication efficiency and the treatment efficiency of this group.

Description

Method and system for assisting persons with aphasia and agraphia in expressing common phrases
Technical Field
The invention relates to the technical field of input methods, and in particular to a method and a system for assisting persons with aphasia and agraphia in expressing common phrases.
Background
In the prior art, a user typically inputs text into a computer through a physical keyboard. In real life, however, there are people who remain conscious but are temporarily unable to speak or write, for example after a sudden car accident or a sudden cerebral infarction; although their consciousness is clear, they cannot express even simple ideas or instructions. Eye-tracking input techniques exist, but they are quite inefficient, and if the user's eyes cannot move after the injury, instruction input is impossible altogether. How to design an input method that lets such users express instructions smoothly is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a method and a system for assisting persons with aphasia and agraphia in expressing common phrases, so as to solve the problems set forth in the background art.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a method for assisting persons with aphasia and agraphia in expressing common phrases, comprising the following steps:
determining a movable body part of the user;
preliminarily establishing a correspondence between the movable part and on-screen keys;
selecting a modeling mode according to the movable part and determining the part's direction of movement;
and finally determining the correspondence between the movable part and the keys according to the direction of movement, completing instruction input.
Optionally, templates of the movable parts are built, with different modeling according to the characteristics of each part: for a first class of parts, a template is built and template matching is performed; for a second class of parts, a background region is established and background differencing is performed.
Optionally, the template is created as follows:
an elliptical ROI is determined, the ellipse is generated with the gen_circle() function, and its center is found;
the elliptical region is extracted from the image, and the ROI is obtained through reduce_domain();
erosion and dilation operations are applied to the ROI, and the difference between the dilated and eroded regions is taken;
fitted contour lines of the region are obtained with the edges_sub_pix function, and contours are selected with the select_contours_xld function;
the template is created with the create_shape_model() function, and its contour is retrieved with get_shape_model_contours(), completing the template.
Optionally, an image of the movable part is acquired; a target position of the movable part is determined from the image using a template-matching algorithm; the target position is compared with a preset position, the degree of matching is scored by a matching function, and if the score reaches a first threshold, template matching is selected for modeling.
Optionally, the method further comprises establishing a uniform matching template for coarse matching of the movable part.
Optionally, the uniform matching template is used as follows:
the movable part is coarsely matched with the uniform matching template to establish a monitoring area;
the movable part within the monitoring area is modeled to obtain a first template;
when the user moves the part, the direction of movement is determined with the first template, and instruction input is completed according to the correspondence between the movable part and the keys.
Optionally, the matching function is the zero-mean normalized cross-correlation:

$$R(i,j)=\frac{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]\bigl[S(i+m,j+n)-\bar{S}\bigr]}{\sqrt{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]^{2}\,\sum_{m}\sum_{n}\bigl[S(i+m,j+n)-\bar{S}\bigr]^{2}}}$$

wherein (i, j) represents the movement offset while traversing the target image; T(m, n) represents the gray value at coordinates (m, n) in the template; S(i+m, j+n) represents the gray value of the target image at (i+m, j+n); $\bar{T}$ represents the gray average of the template region; $\bar{S}$ represents the gray average of the target image subregion.
In another aspect, a system for assisting persons with aphasia and agraphia in expressing common phrases is provided, comprising a movable part determining module, a correspondence determining module, a model construction module and an instruction input module, wherein:
the movable part determining module is used to determine the movable part of the user;
the correspondence determining module is used to preliminarily establish the correspondence between the movable part and the on-screen keys;
the model construction module is used to select a modeling mode according to the movable part and determine the part's direction of movement;
and the instruction input module is used to finally determine the correspondence between the movable part and the keys according to the direction of movement.
Compared with the prior art, the disclosed method and system, by applying information technology and optimizing the input method used to express common phrases, enable persons with aphasia and agraphia to express simple ideas or instructions smoothly, improving both the communication efficiency and the treatment efficiency of this group.
Drawings
In order to describe the embodiments of the present invention or the prior-art solutions more clearly, the drawings required for the description are briefly introduced below. The drawings described below illustrate only embodiments of the present invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is an on-line usage flow chart of the present invention;
FIG. 3 is a schematic diagram of an input method according to the present invention;
fig. 4 is a system configuration diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The embodiment of the invention discloses a method for assisting persons with aphasia and agraphia in expressing common phrases. As shown in fig. 1, the method comprises the following steps:
s1, determining a movable part of a user;
s2, initially establishing a corresponding relation between the movable part and a screen key according to the movable part;
s3, selecting different modeling modes according to different movable parts, and determining the moving direction of the movable parts;
and S4, finally determining the corresponding relation between the movable part and the key according to the moving direction, and finishing instruction input.
Further, the specific process of S2 is as follows. Control rules are set according to the user's movable parts, such as the eyes, mouth or neck: if one part offers four satisfactory movement modes, for example the eyeball can move up, down, left and right, that part is selected and its four movements are set as the up, down, left and right control instructions; if two or more parts together offer four satisfactory movement modes, for example eyeball up, mouth opening, neck turning left and tongue stretching, the user selects four such movements and assigns them to the up, down, left and right instructions; if only a single movement is available, for example opening the mouth wide, then performing it once, twice, three or four times is set as the up, down, left and right instruction respectively. A sketch of this rule is given below.
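The three rules can be written down compactly. The following Python sketch is purely illustrative; the names MovementMode and build_control_rules are hypothetical, not from the patent:

```python
# Hypothetical sketch of the control-rule setup described above.
from dataclasses import dataclass

DIRECTIONS = ["up", "down", "left", "right"]

@dataclass
class MovementMode:
    part: str        # e.g. "eyeball", "mouth", "neck", "tongue"
    action: str      # e.g. "look up", "open", "turn left", "stick out"

def build_control_rules(modes: list[MovementMode]) -> dict[str, MovementMode]:
    """Map usable movement modes to the four directional instructions.

    Rule 1: one part with four distinct movements covers all directions.
    Rule 2: movements from several parts are combined (user-chosen order).
    Rule 3: one repeatable movement is counted: 1x=up, 2x=down, 3x=left, 4x=right.
    """
    if len(modes) >= 4:
        # Rules 1 and 2: take the first four user-chosen movements.
        return dict(zip(DIRECTIONS, modes[:4]))
    if len(modes) == 1:
        # Rule 3: the repetition count of the single movement encodes direction.
        single = modes[0]
        return {d: MovementMode(single.part, f"{single.action} x{i + 1}")
                for i, d in enumerate(DIRECTIONS)}
    raise ValueError("need either one repeatable movement or four distinct ones")

rules = build_control_rules([
    MovementMode("eyeball", "look up"),
    MovementMode("mouth", "open"),
    MovementMode("neck", "turn left"),
    MovementMode("tongue", "stick out"),
])
print(rules["left"])  # -> MovementMode(part='neck', action='turn left')
```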
Further, the specific process of S3 is as follows: templates of the movable parts are built, with different modeling according to the characteristics of each part. For a first class of parts, a template is built and template matching is performed; for a second class of parts, a background region is established and background differencing is performed. (A template can be built at positions with distinct image features, such as the eyes or chin, and matched against; a background region can be established at positions with less distinct features but a large range of motion, such as the arms, hands and feet, for background differencing.)
Template matching or background differencing is applied to the images captured while the user performs the test actions, and the template-matching score, or the gray difference and area parameters of the background difference, is recorded as the patient performs the up, down, left and right actions. If the template-matching score reaches 0.7 for every action, the part may use the template-matching method; if the gray difference of the background difference exceeds 20 and the differing area reaches 200 pixels, the part may use the background-difference method; if both requirements are met, either method may be chosen freely, as in the sketch below.
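A minimal sketch of this selection rule, with the thresholds taken from the text (score 0.7, gray difference 20, area 200 pixels); the function name and input layout are assumptions:

```python
# Sketch of the modeling-mode selection rule described above.
def choose_modeling_method(match_scores, gray_diffs, areas):
    """Pick template matching or background difference for one body part.

    match_scores: template-matching score for each of the four test actions.
    gray_diffs / areas: background-difference gray value and region area
    measured for each of the four test actions.
    """
    template_ok = all(s >= 0.7 for s in match_scores)
    background_ok = all(d > 20 and a >= 200 for d, a in zip(gray_diffs, areas))
    if template_ok and background_ok:
        return "either"                 # both meet requirements: pick freely
    if template_ok:
        return "template_matching"
    if background_ok:
        return "background_difference"
    return "unusable"                   # this part cannot drive the input

print(choose_modeling_method([0.82, 0.75, 0.9, 0.71],
                             [25, 30, 22, 28],
                             [240, 300, 210, 260]))
# -> 'either'
```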
Further, the template is created as follows:
an elliptical ROI is determined, the ellipse is generated with the gen_circle() function, and its center is found with area_center();
the elliptical region is extracted from the image, and the ROI is obtained through reduce_domain();
erosion and dilation operations are applied to the ROI respectively, and the difference between the two regions is taken;
fitted contour lines of the region are obtained with the edges_sub_pix function, and suitable contours are selected with the select_contours_xld function;
the template is created with the create_shape_model() function, and its contour is retrieved with get_shape_model_contours(), completing the template. An approximate OpenCV analogue is sketched below.
Further, an image of the movable part is acquired; a target position of the movable part is determined from the image using a template-matching algorithm; the target position is compared with a preset position, the degree of matching is scored by a matching function, and if the score reaches a first threshold, template matching is selected for modeling.
The matching function is the zero-mean normalized cross-correlation:

$$R(i,j)=\frac{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]\bigl[S(i+m,j+n)-\bar{S}\bigr]}{\sqrt{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]^{2}\,\sum_{m}\sum_{n}\bigl[S(i+m,j+n)-\bar{S}\bigr]^{2}}}$$

wherein (i, j) represents the movement offset while traversing the target image; T(m, n) represents the gray value at coordinates (m, n) in the template; S(i+m, j+n) represents the gray value of the target image at (i+m, j+n); $\bar{T}$ represents the gray average of the template region; $\bar{S}$ represents the gray average of the target image subregion.
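The score can be transcribed directly into NumPy. This is a minimal reference implementation, not the patent's code; in practice cv2.matchTemplate with the TM_CCOEFF_NORMED mode computes the same normalized score far faster:

```python
import numpy as np

def ncc(template, window):
    """Zero-mean normalized cross-correlation of T against one subregion S.

    The score lies in [-1, 1]; the 0.7 threshold above is applied to it.
    """
    t = template.astype(np.float64) - template.mean()
    s = window.astype(np.float64) - window.mean()
    denom = np.sqrt((t * t).sum() * (s * s).sum())
    return (t * s).sum() / denom if denom > 0 else 0.0

def match(template, target):
    """Slide the template over the target image; return the best (i, j, score)."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for i in range(target.shape[0] - th + 1):
        for j in range(target.shape[1] - tw + 1):
            r = ncc(template, target[i:i + th, j:j + tw])
            if r > best[2]:
                best = (i, j, r)
    return best
```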
Still further, the method comprises establishing a uniform matching template for coarse matching of the movable part.
The uniform matching template is used as follows:
the movable part is coarsely matched with the uniform matching template to establish a monitoring area;
the movable part within the monitoring area is modeled to obtain a first template;
when the user moves the part, the direction of movement is determined with the first template, and instruction input is completed according to the correspondence between the movable part and the keys.
The coarse matching template can locate the part in most situations, but because the patient's posture differs from session to session, the original coarse template is not very accurate. Therefore, after the target part is located, a new matching template is automatically rebuilt and used as the fine positioning template. This template is automatically cleared after the current session and is not reused, as the sketch below illustrates.
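A minimal sketch of this session-scoped refinement, reusing the hypothetical match() helper from the earlier sketch:

```python
# Coarse-to-fine strategy: the shared coarse template only fixes the
# monitoring area; a session-specific fine template is rebuilt every run
# and discarded afterwards.

def locate_with_refinement(coarse_template, first_frame, later_frames):
    # Coarse match establishes the monitoring area for this session.
    i, j, _ = match(coarse_template, first_frame)
    h, w = coarse_template.shape

    # Fine positioning template, cut from this session's own image.
    fine_template = first_frame[i:i + h, j:j + w].copy()

    positions = [match(fine_template, frame) for frame in later_frames]
    del fine_template  # cleared after this run, not reused next session
    return positions
```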
The online use flow of the invention is shown in fig. 2. A default selection time (1 to 10 seconds) is set according to the user's condition; after an up, down, left or right control instruction is issued and the preset default selection time elapses, the content indicated on the screen is selected automatically, or the next instruction is taken. A sketch of this dwell-based confirmation follows.
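A minimal sketch of the dwell confirmation, assuming a get_direction() callback that reports the currently recognized movement (all names are illustrative):

```python
import time

def dwell_select(get_direction, dwell_seconds=3.0):
    """get_direction() returns 'up'/'down'/'left'/'right' or None.

    Returns the direction that stayed selected for one full dwell period
    (the configurable 1-10 s default selection time).
    """
    current, since = None, time.monotonic()
    while True:
        d = get_direction()
        if d is not None and d != current:
            current, since = d, time.monotonic()   # new choice restarts the timer
        if current is not None and time.monotonic() - since >= dwell_seconds:
            return current                          # auto-confirm on timeout
        time.sleep(0.05)                            # poll at ~20 Hz
```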
An input method is displayed on the screen. For the word stock, the first-grade Chinese textbooks, volumes I and II, published by the People's Education Press in July 2016 (ISBN 9787107312403 and 9787107315206) and the second-grade Chinese textbooks, volumes I and II, published by the People's Education Press in December 2017 (ISBN 9787107319327 and 9787107323836) are selected, and 1861 common words are added to form the word stock of the input method. The on-screen layout of the input method is shown in fig. 3.
The 26 English letters are displayed on the screen grouped as shown in fig. 3. The upper, lower, left and right regions can be selected; the target region is then selected again by the same method. If the target region contains exactly one letter, that letter is selected; if it contains two or more, the process is repeated until the required letter is selected, as the sketch below shows.
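A minimal sketch of this repeated region selection; the actual distribution of letters over the four regions follows fig. 3, so the modulo split here is only illustrative:

```python
def select_letter(letters, pick_region):
    """pick_region(groups) asks the user for one of the four regions
    (up/down/left/right) and returns its index in `groups`."""
    pool = list(letters)
    while len(pool) > 1:
        # Distribute the remaining letters over up to four regions.
        groups = [pool[k::4] for k in range(4)]
        groups = [g for g in groups if g]
        pool = groups[pick_region(groups)]
    return pool[0]

# Example with a scripted user who always picks the region holding 'k':
chosen = select_letter("abcdefghijklmnopqrstuvwxyz",
                       lambda gs: next(i for i, g in enumerate(gs) if "k" in g))
print(chosen)  # -> k
```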
The input method has a common-word prompt function: when the first pinyin letter of the intended word is entered, 12 near-sounding words are shown in the prompt box, ordered by their appearance in the first- and second-grade primary-school Chinese textbooks. If the intended word is among the 12, it is selected by the up-down-left-right region selection method; if not, the next letter of the word's pinyin is entered and the process continues.
An example of the instruction input method: the up, down, left and right control instructions are set according to the user's movable part by the method above, and the system's default selection time is set according to the user's condition. Suppose the user wants to express "thirsty" (渴, pinyin kě). On the initial screen the user first selects the left region; the six pinyin letters on the left are redistributed among the upper, lower, left and right selection boxes, and the user selects the letter K. After the preset default selection time elapses, the screen shows 12 near-sounding words from the word stock in their order of appearance; if the intended word is among them, it is selected directly, otherwise nothing is selected. After the default selection time passes again, the system returns to the pinyin selection interface and selection proceeds as before. If after several rounds the intended word still cannot be displayed, the user is reminded to switch to an expression with a similar meaning: for example, if "thirsty" cannot be expressed smoothly, near-synonymous expressions such as "dry mouth" or "water" can be used instead.
In another aspect, a system for assisting persons with aphasia and agraphia in expressing common phrases is provided. As shown in fig. 4, it comprises a movable part determining module, a correspondence determining module, a model construction module and an instruction input module, wherein the movable part determining module is used to determine the movable part of the user;
the correspondence determining module is used to preliminarily establish the correspondence between the movable part and the on-screen keys;
the model construction module is used to select a modeling mode according to the movable part and determine the part's direction of movement;
and the instruction input module is used to finally determine the correspondence between the movable part and the keys according to the direction of movement.
The embodiments in this specification are described in a progressive manner, each focusing on its differences from the others; for the parts that are identical or similar, the embodiments may be referred to one another. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief; for the relevant details, refer to the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A method for assisting persons with aphasia and agraphia in expressing common phrases, characterized by comprising the following steps:
determining a movable body part of the user;
preliminarily establishing a correspondence between the movable part and on-screen keys;
selecting a modeling mode according to the movable part and determining the part's direction of movement;
finally determining the correspondence between the movable part and the keys according to the direction of movement, and completing instruction input;
building templates of the movable parts, with different modeling according to the characteristics of each part: for a first class of parts, building a template and performing template matching; for a second class of parts, establishing a background region and performing background differencing; applying template matching or background differencing to the images captured while the user performs the test actions, and recording the template-matching score, or the gray difference and area parameters of the background difference, as the patient performs the up, down, left and right actions; if the template-matching score reaches 0.7 for every action, the part may use the template-matching method; if the gray difference of the background difference exceeds 20 and the differing area reaches 200 pixels, the part may use the background-difference method; if both requirements are met, either method may be chosen freely;
acquiring an image of the movable part; determining a target position of the movable part from the image using a template-matching algorithm; comparing the target position with a preset position, scoring the degree of matching by a matching function, and selecting template matching for modeling if the score reaches a first threshold;
the matching function being the zero-mean normalized cross-correlation:

$$R(i,j)=\frac{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]\bigl[S(i+m,j+n)-\bar{S}\bigr]}{\sqrt{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]^{2}\,\sum_{m}\sum_{n}\bigl[S(i+m,j+n)-\bar{S}\bigr]^{2}}}$$

wherein (i, j) represents the movement offset while traversing the target image; T(m, n) represents the gray value at coordinates (m, n) in the template; S(i+m, j+n) represents the gray value of the target image at (i+m, j+n); $\bar{T}$ represents the gray average of the template region; $\bar{S}$ represents the gray average of the target image subregion.
2. The method for assisting persons with aphasia and agraphia in expressing common phrases according to claim 1, wherein the template is created as follows:
determining an elliptical ROI, generating the ellipse with the gen_circle() function, and finding its center;
extracting the elliptical region from the image and obtaining the ROI through reduce_domain();
applying erosion and dilation operations to the ROI and taking the difference between the dilated and eroded regions;
obtaining fitted contour lines of the region with the edges_sub_pix function and selecting contours with the select_contours_xld function;
creating the template with the create_shape_model() function and retrieving its contour with get_shape_model_contours(), completing the template.
3. The method of claim 1, further comprising creating a uniform matching template for coarse matching of the movable part.
4. The method for assisting persons with aphasia and agraphia in expressing common phrases according to claim 3, wherein the uniform matching template is used as follows:
coarsely matching the movable part with the uniform matching template to establish a monitoring area;
modeling the movable part within the monitoring area to obtain a first template;
and, when the user moves the part, determining the direction of movement with the first template and completing instruction input according to the correspondence between the movable part and the keys.
5. A system for assisting persons with aphasia and agraphia in expressing common phrases, characterized by comprising a movable part determining module, a correspondence determining module, a model construction module and an instruction input module, wherein
the movable part determining module is used to determine the movable part of the user;
the correspondence determining module is used to preliminarily establish the correspondence between the movable part and the on-screen keys;
the model construction module is used to select a modeling mode according to the movable part and determine the part's direction of movement;
the instruction input module is used to finally determine the correspondence between the movable part and the keys according to the direction of movement;
building templates of the movable parts, with different modeling according to the characteristics of each part: for a first class of parts, building a template and performing template matching; for a second class of parts, establishing a background region and performing background differencing; applying template matching or background differencing to the images captured while the user performs the test actions, and recording the template-matching score, or the gray difference and area parameters of the background difference, as the patient performs the up, down, left and right actions; if the template-matching score reaches 0.7 for every action, the part may use the template-matching method; if the gray difference of the background difference exceeds 20 and the differing area reaches 200 pixels, the part may use the background-difference method; if both requirements are met, either method may be chosen freely;
acquiring an image of the movable part; determining a target position of the movable part from the image using a template-matching algorithm; comparing the target position with a preset position, scoring the degree of matching by a matching function, and selecting template matching for modeling if the score reaches a first threshold;
the matching function being the zero-mean normalized cross-correlation:

$$R(i,j)=\frac{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]\bigl[S(i+m,j+n)-\bar{S}\bigr]}{\sqrt{\sum_{m}\sum_{n}\bigl[T(m,n)-\bar{T}\bigr]^{2}\,\sum_{m}\sum_{n}\bigl[S(i+m,j+n)-\bar{S}\bigr]^{2}}}$$

wherein (i, j) represents the movement offset while traversing the target image; T(m, n) represents the gray value at coordinates (m, n) in the template; S(i+m, j+n) represents the gray value of the target image at (i+m, j+n); $\bar{T}$ represents the gray average of the template region; $\bar{S}$ represents the gray average of the target image subregion.
CN202211307489.8A 2022-10-25 2022-10-25 Method and system for assisting persons with aphasia and agraphia in expressing common phrases Active CN115601575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211307489.8A CN115601575B (en) Method and system for assisting persons with aphasia and agraphia in expressing common phrases

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211307489.8A CN115601575B (en) Method and system for assisting persons with aphasia and agraphia in expressing common phrases

Publications (2)

Publication Number Publication Date
CN115601575A (en) 2023-01-13
CN115601575B (en) 2023-10-31

Family

ID=84848639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211307489.8A Active CN115601575B (en) Method and system for assisting persons with aphasia and agraphia in expressing common phrases

Country Status (1)

Country Link
CN (1) CN115601575B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140335487A1 (en) * 2013-05-13 2014-11-13 Lumos Labs, Inc. Systems and methods for response inhibition switching task incorporating motion for enhancing cognitions
US9829984B2 (en) * 2013-05-23 2017-11-28 Fastvdo Llc Motion-assisted visual language for human computer interfaces
US11275945B2 (en) * 2020-03-26 2022-03-15 Varjo Technologies Oy Imaging system and method for producing images with virtually-superimposed functional elements
US11656723B2 (en) * 2021-02-12 2023-05-23 Vizio, Inc. Systems and methods for providing on-screen virtual keyboards
US11567569B2 (en) * 2021-04-08 2023-01-31 Google Llc Object selection based on eye tracking in wearable device

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002323956A (en) * 2001-04-25 2002-11-08 Nippon Telegr & Teleph Corp <Ntt> Mouse alternating method, mouse alternating program and recording medium recording the same program
JP2005100366A (en) * 2003-08-18 2005-04-14 Yamaguchi Univ Visual line input communication method using eyeball motion
RU2007110863A (en) * 2007-03-26 2008-10-10 Георгий Владимирович Голубенко (RU) METHOD FOR INPUT AND OUTPUT OF INFORMATION AND COMPUTER SYSTEM FOR ITS IMPLEMENTATION
JP2011060167A (en) * 2009-09-14 2011-03-24 Secom Co Ltd Moving object tracking device
US8700392B1 (en) * 2010-09-10 2014-04-15 Amazon Technologies, Inc. Speech-inclusive device interfaces
CN111259802A (en) * 2020-01-16 2020-06-09 东北大学 Head posture estimation-based auxiliary aphasia paralytic patient demand expression method
CN111583311A (en) * 2020-05-14 2020-08-25 重庆理工大学 PCBA rapid image matching method
CN111931579A (en) * 2020-07-09 2020-11-13 上海交通大学 Automatic driving assistance system and method using eye tracking and gesture recognition technology
WO2022062884A1 (en) * 2020-09-27 2022-03-31 华为技术有限公司 Text input method, electronic device, and computer-readable storage medium
CN113822125A (en) * 2021-06-24 2021-12-21 华南理工大学 Processing method and device of lip language recognition model, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Speech Intelligence Using Machine Learning for Aphasia Individuals; J. K.R., M. V.L., S. B. B., P. Yawalkar; 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE); pp. 664-667. *
Zhang Xiubin, Mansule (Pakistan), Yeerjiang Halimu. Introduction to Intelligent Technology for Rail Transit (轨道交通智能技术导论). Shanghai Jiao Tong University Press, 2021, pp. 73-74. *
On the Application of Image-Based Smoke Detection Products in Aircraft Cargo Hold Fire Detection; Gao Kai; Brand & Standardization (品牌与标准化); p. 76, last paragraph, to p. 77, first paragraph. *
Zhao Xiaochuan (ed.). MATLAB Image Processing (MATLAB图像处理). Beihang University Press, 2019, p. 211. *

Also Published As

Publication number Publication date
CN115601575A (en) 2023-01-13

Similar Documents

Publication Publication Date Title
US20240168625A1 (en) Simulated handwriting image generator
Trejo et al. Recognition of yoga poses through an interactive system with kinect device
Henry Drawing for product designers
DE60225170T2 (en) METHOD AND DEVICE FOR DECODING HANDWRITCH SIGNS
Kishore et al. Video audio interface for recognizing gestures of indian sign
CN111626297A (en) Character writing quality evaluation method and device, electronic equipment and recording medium
CN109871851A (en) A kind of Chinese-character writing normalization determination method based on convolutional neural networks algorithm
Ferrer et al. Static and dynamic synthesis of Bengali and Devanagari signatures
CN103632672A (en) Voice-changing system, voice-changing method, man-machine interaction system and man-machine interaction method
Alvina et al. Expressive keyboards: Enriching gesture-typing on mobile devices
US20190303422A1 (en) Method and system for generating handwritten text with different degrees of maturity of the writer
Verma et al. A comprehensive review on automation of Indian sign language
CN112800936A (en) Calligraphy copy intelligent evaluation and guidance method based on computer vision
CN105426882A (en) Method for rapidly positioning human eyes in human face image
Krishnaraj et al. A Glove based approach to recognize Indian Sign Languages
CN111985184A (en) Auxiliary writing font copying method, system and device based on AI vision
CN115936944A (en) Virtual teaching management method and device based on artificial intelligence
CN115601575B (en) Method and system for assisting expression of common expressions of aphasia and aphasia writers
WO2020105349A1 (en) Information processing device and information processing method
Jahangir et al. Towards developing a voice-over-guided system for visually impaired people to learn writing the alphabets
Lyons Facial gesture interfaces for expression and communication
Kaur et al. Conversion of Hindi Braille to speech using image and speech processing
JP7171145B2 (en) Character data generator, method and program
KR20060115700A (en) Child language teaching system of flash type to be ease to change face of character
Hayakawa et al. Air Writing in Japanese: A CNN-based character recognition system using hand tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant