CN117831132A - Sign language identification method, system, terminal and storage medium capable of realizing interactive communication - Google Patents


Info

Publication number
CN117831132A
CN117831132A (application CN202410042235.0A)
Authority
CN
China
Prior art keywords: information, sign language, sign, communication, person
Prior art date
Legal status
Pending
Application number
CN202410042235.0A
Other languages
Chinese (zh)
Inventor
陈师翰
吴月明
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202410042235.0A
Publication of CN117831132A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00: Speech synthesis; Text to speech systems
    • G10L13/08: Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the technical field of intelligent assistive devices for hearing-impaired and deaf-mute people, and specifically relates to a sign language recognition method, system, terminal and storage medium capable of interactive communication. The sign language recognition method capable of interactive communication comprises the following steps: S1, acquiring sign information of a sign language presenter; S2, generating text display information and communication voice information corresponding to the sign information; S3, the interlocutor, according to the communication voice information, feeding back response voice information corresponding to it; and S4, generating text feedback information according to the response voice information. The invention effectively ensures that the sign language presenter clearly understands the interlocutor's message, thereby improving the efficiency and speed of effective communication.

Description

Sign language identification method, system, terminal and storage medium capable of realizing interactive communication
Technical Field
The invention belongs to the technical field of intelligent assistive devices for hearing-impaired and deaf-mute people, and specifically relates to a sign language recognition method, system, terminal and storage medium capable of interactive communication.
Background
Sign language is an effective body language for exchanging information, and hand movements can convey rich semantic information. Deaf-mute people account for roughly one third of China's disabled population, and sign language, as their primary means of communicating with the outside world, is an indispensable element of daily life and study, whether in acquiring knowledge, communicating with hearing people, or improving quality of life. Sign language thus plays a vital role in helping deaf-mute people integrate into all aspects of society. As understanding of this special population has deepened, attention to sign language has also become more widespread.
However, existing sign language recognition strategies have certain limitations. First, algorithm optimization in current sign language recognition systems can only push recognition accuracy ever closer to its limit; it cannot ensure that the signer's intended meaning is correctly conveyed. Second, most hearing people do not understand sign language expression, so a full multi-round interaction cannot be established; in particular, hearing people who do not know sign language are essentially unable to interact effectively with deaf-mute signers. One-directional sign language recognition therefore cannot guarantee communication efficiency between a signer and a hearing person who does not understand sign language.
Disclosure of Invention
The aim of the invention: in view of the defects of the prior art, a sign language recognition method capable of interactive communication is provided, with the goal of improving the efficiency and speed of effective communication.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A sign language recognition method capable of interactive communication comprises the following steps:
S1, acquiring sign information of a sign language presenter;
S2, generating text display information and communication voice information corresponding to the sign information;
S3, the interlocutor, according to the communication voice information, feeding back response voice information corresponding to the communication voice information;
and S4, generating text feedback information according to the response voice information.
Preferably, the step of acquiring sign information of the sign language presenter comprises:
having the interlocutor wear the interactive glasses so that the acquisition component of the interactive glasses faces the sign language presenter;
and acquiring the sign information of the sign language presenter with the acquisition component.
Preferably, the step of acquiring the sign information of the sign language presenter with the acquisition component comprises:
collecting the hand motion information of the sign language presenter with a hand acquisition camera of the acquisition component;
collecting the facial information of the sign language presenter with a face acquisition camera of the acquisition component, the facial information comprising the presenter's facial emotion and expression information;
and generating sign information carrying emotional tone from the hand motion information and the facial information.
Preferably, the step of generating text display information and communication voice information corresponding to the sign information comprises:
generating, according to the sign information, the text display information corresponding to it;
displaying the text display information on the sign language information conversion display screen for the sign language presenter to review;
generating, according to the sign information, the communication voice information;
and transmitting the communication voice information to the earphone of the interactive glasses so that the interlocutor can hear it.
Preferably, the step of displaying the text display information on the sign language information conversion display screen for the sign language presenter to review comprises:
displaying all of the text display information within a first preset time for the sign language presenter to review.
Preferably, the step of generating text feedback information according to the response voice information comprises:
generating, according to the response voice information, text feedback information annotated with the speaker's tone;
and displaying the tone-annotated text feedback information on the communication information conversion display screen for the sign language presenter to review.
Preferably, the step of displaying the tone-annotated text feedback information on the communication information conversion display screen comprises:
displaying all of the tone-annotated text feedback information within a second preset time for the sign language presenter to review.
The invention also discloses a sign language recognition system capable of interactive communication, comprising:
an acquisition module for acquiring the sign information of the sign language presenter and the response voice information fed back by the interlocutor in reply to the communication voice information;
and a conversion display module for generating the text display information and communication voice information corresponding to the sign information, and for generating the text feedback information according to the response voice information.
The invention also discloses a sign language recognition terminal capable of interactive communication, comprising a memory, a processor, and an interactive-communication sign language recognition program stored in the memory and runnable on the processor; when executed by the processor, the program implements the steps of the sign language recognition method capable of interactive communication described above.
The invention also discloses a storage medium storing an interactive-communication sign language recognition program; when executed by a processor, the program implements the steps of the sign language recognition method capable of interactive communication described above.
The beneficial effects of the invention: the sign information of the sign language presenter is first acquired, so that the presenter's intended meaning can be effectively determined, which benefits subsequent communication. Text display information and communication voice information corresponding to the sign information are then generated, so that the presenter can verify whether the converted information matches his or her intention while a person unfamiliar with sign language can quickly grasp that intention, improving communication efficiency. The interlocutor then feeds back response voice information in reply to the communication voice information. Finally, text feedback information is generated from the response voice information, so that the presenter clearly understands the interlocutor's message. This improves the efficiency and speed of effective communication and avoids the serious consequences that could follow from errors in information transfer or from the presenter failing to notice a signing error in time.
Drawings
Features, advantages, and technical effects of exemplary embodiments of the present invention will be described below with reference to fig. 1 to 5.
FIG. 1 is a flow chart of a method for sign language recognition capable of interactive communication according to an embodiment of the invention;
FIG. 2 is a schematic structural diagram of the interactive glasses according to an embodiment of the present invention;
FIG. 3 is another schematic structural diagram of the interactive glasses according to an embodiment of the present invention;
FIG. 4 is a block diagram illustrating an interactive sign language recognition system according to an embodiment of the present invention;
fig. 5 is a block diagram illustrating a structure of a sign language recognition terminal capable of interactive communication according to an embodiment of the present invention.
In the figure: 1-glasses body; 11-frame; 12-lens; 13-temple; 131-mounting hole; 2-earphone; 3-display part; 31-sign language information conversion display screen; 32-communication information conversion display screen; 4-control part; 41-control chip; 42-Bluetooth module; 43-audio module; 5-acquisition part; 51-hand acquisition camera; 52-face acquisition camera; 610-conversion display module; 620-acquisition module; 1001-processor; 1002-communication bus; 1003-user interface; 1004-network interface; 1005-memory.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description and claims of the present application and in the description of the figures above are intended to cover non-exclusive inclusions.
In the description of the embodiments of the present application, the technical terms "first," "second," etc. are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless explicitly defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the related objects.
In the description of the embodiments of the present application, the term "plurality" refers to two or more (including two), and similarly, "plural sets" refers to two or more (including two), and "plural sheets" refers to two or more (including two).
In the description of the embodiments of the present application, technical terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counter-clockwise", "axial", "radial" and "circumferential", are based on the orientations or positional relationships shown in the drawings, are merely for convenience and simplicity of description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the embodiments of the present application.
In the description of the embodiments of the present application, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured" and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally formed; or may be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the embodiments of the present application will be understood by those of ordinary skill in the art according to the specific circumstances.
The invention provides a sign language recognition method, system, terminal and storage medium capable of interactive communication.
As shown in fig. 5, fig. 5 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention may be a PC, or a mobile terminal device with a display function such as a smart phone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, or portable computer.
As shown in fig. 5, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 enables communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g. a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a disk memory; optionally, it may also be a storage device separate from the processor 1001.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors include, for example, light sensors and motion sensors. Specifically, the light sensors may include an ambient light sensor, which adjusts the brightness of the display screen according to the ambient light level, and a proximity sensor, which turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect acceleration in all directions (generally three axes) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the terminal's attitude (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and for vibration-related functions (such as a pedometer or tap detection). The mobile terminal may of course also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer and infrared sensor, which are not described here.
Those skilled in the art will appreciate that the terminal structure shown in the drawings does not limit the terminal; it may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Referring to fig. 1, an embodiment of the present invention provides a sign language recognition method capable of interactive communication. As shown in fig. 1, the method comprises the following steps:
s1, acquiring sign information of a sign language expressive person;
s2, generating text display information and communication voice information corresponding to the sign information according to the sign information;
s3, according to the communication voice information, the communication person feeds back response voice information corresponding to the communication voice information;
and S4, generating text feedback information according to the response voice information.
In the technical scheme of the invention, the sign information of the sign language presenter is first acquired, so that the presenter's intended meaning can be effectively determined, which benefits subsequent communication. Text display information and communication voice information corresponding to the sign information are then generated, so that the presenter can verify whether the converted information matches his or her intention and a person unfamiliar with sign language can quickly grasp that intention, improving communication efficiency. The interlocutor then feeds back response voice information in reply to the communication voice information. Finally, text feedback information is generated from the response voice information, so that the presenter clearly understands the interlocutor's message; this improves the efficiency and speed of effective communication and avoids the serious consequences that could follow from errors in information transfer or from the presenter failing to notice a signing error in time.
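The S1-S4 loop above can be sketched end to end. The following is a minimal illustrative stub, assuming string stand-ins for gesture recognition, speech synthesis and transcription; none of these function names come from the patent.

```python
# Hypothetical sketch of the four-step interactive loop (S1-S4).
# Recognition, synthesis and transcription are all stubbed with strings.

def acquire_sign_info(hand_label, face_label):
    """S1: combine hand-motion and facial-emotion observations (stubbed)."""
    return {"gesture": hand_label, "emotion": face_label}

def sign_to_text_and_voice(sign_info):
    """S2: produce display text for the signer and spoken output for the listener."""
    text = f"[{sign_info['emotion']}] {sign_info['gesture']}"
    return text, "voice:" + text

def listener_reply(voice_msg):
    """S3: the hearing interlocutor answers by speech (stubbed as a string)."""
    return "reply to " + voice_msg

def reply_to_text(reply_voice):
    """S4: transcribe the spoken reply into text feedback for the signer."""
    return "text:" + reply_voice

sign = acquire_sign_info("hello", "happy")
text, voice = sign_to_text_and_voice(sign)
feedback = reply_to_text(listener_reply(voice))
```

The value of the loop is that both directions pass through a checkable textual form: the signer sees `text` before the voice goes out, and sees `feedback` after the reply comes back.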
Specifically, in some embodiments, the sign information of the sign language presenter is acquired through interactive glasses. Further, as shown in fig. 2, the interactive glasses comprise a glasses body 1, an earphone 2, a display part 3, a control part 4 and an acquisition part 5. The control part 4 is connected to the top of the glasses body 1; the display part 3 is movably connected (hinged) to the side end of the control part 4 and arranged opposite the glasses body 1; the earphone 2 and the acquisition part 5 are each connected to the glasses body 1. The control part 4 is electrically connected to the display part 3, the earphone 2 and the acquisition part 5.
Still further, the glasses body 1 comprises a frame 11, lenses 12 and temples 13. There are two lenses 12, connected in the connecting holes of the frame 11, and two temples 13, connected to the back of the frame 11. As shown in fig. 2, each temple 13 is provided with a mounting hole 131 corresponding to the earphone 2. This structure effectively secures the mounting and placement of the earphone 2 and prevents the earphone 2 from being lost.
Still further, as shown in figs. 2 and 3, the acquisition part 5 comprises a hand acquisition camera 51 and a face acquisition camera 52. The hand acquisition camera 51 collects the hand motion information of the sign language presenter, and the face acquisition camera 52 collects the presenter's facial information, so that voice information carrying the corresponding emotional tone can be presented to the interlocutor, improving the fidelity of the rendered communication and the communication efficiency.
Still further, as shown in figs. 2 and 3, the earphone 2 is a Bluetooth earphone, and the control part 4 comprises a control chip 41, a Bluetooth module 42 and an audio module 43. The control chip 41 is electrically connected to the audio module 43; the audio module 43 is electrically connected to the Bluetooth module 42; and the Bluetooth module 42 is communicatively connected to the Bluetooth earphone.
Still further, as shown in fig. 2 and 3, the display section 3 includes a sign language information conversion display screen 31 and a communication information conversion display screen 32; the sign language information conversion display 31 and the communication information conversion display 32 are electrically connected to the control chip 41, respectively.
Specifically, in some embodiments, in step S1, the step of acquiring the sign information of the sign language presenter comprises:
having the interlocutor wear the interactive glasses so that the acquisition part 5 of the interactive glasses faces the sign language presenter;
and acquiring the sign information of the sign language presenter with the acquisition part 5.
Specifically, in some embodiments, in step S1, the step of acquiring the sign information of the sign language presenter with the acquisition part 5 comprises:
collecting the hand motion information of the sign language presenter with the hand acquisition camera 51 of the acquisition part 5;
collecting the facial information of the sign language presenter with the face acquisition camera 52 of the acquisition part 5, the facial information comprising the presenter's facial emotion and expression information;
and generating sign information carrying emotional tone from the hand motion information and the facial information.
That is, the hand acquisition camera 51 collects the presenter's hand motion information while the face acquisition camera 52 collects the presenter's facial information (e.g. happy or angry expressions), so that voice information carrying the corresponding emotional tone can be presented to the interlocutor, improving the fidelity of the rendered communication and the communication efficiency.
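One way the facial emotion label could condition the tone of the generated communication voice is a simple prosody lookup. The mapping and parameter names below are assumed for illustration; the patent does not specify how emotion modulates the synthesized speech.

```python
# Illustrative, assumed mapping from a recognized facial emotion to prosody
# parameters for text-to-speech (rate multiplier and pitch offset).
EMOTION_TO_PROSODY = {
    "happy":   {"rate": 1.1, "pitch": 2},
    "urgent":  {"rate": 1.3, "pitch": 4},
    "neutral": {"rate": 1.0, "pitch": 0},
}

def voice_request(sign_text, emotion):
    """Build a synthesis request carrying the emotional tone of the signer."""
    prosody = EMOTION_TO_PROSODY.get(emotion, EMOTION_TO_PROSODY["neutral"])
    return {"text": sign_text, **prosody}

req = voice_request("thank you", "happy")
```

Unknown emotion labels fall back to neutral prosody, so an unrecognized expression never blocks the voice output.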
Specifically, in some embodiments, in step S2, the step of generating the text display information and communication voice information corresponding to the sign information comprises:
generating, according to the sign information, the text display information corresponding to it;
displaying the text display information on the sign language information conversion display screen for the sign language presenter to review;
generating, according to the sign information, the communication voice information;
and transmitting the communication voice information to the earphone 2 of the interactive glasses so that the interlocutor can hear it.
That is, the sign language information conversion display screen shows the meaning corresponding to the presenter's signing, so that the presenter can judge whether it matches the intended meaning; if a deviation occurs, the presenter, upon seeing the on-screen feedback, can actively correct the signing, greatly improving both the presenter's expressive efficiency and the efficiency of communication with the outside world. At the same time, the communication voice information is transmitted to the earphone 2 of the interactive glasses for the interlocutor to hear, achieving effective two-way communication in a single pass.
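The dual routing described above, text to the signer-facing display and synthesized voice to the listener's earphone, can be sketched with stand-in device objects. All class and method names here are assumptions for illustration.

```python
# Stand-in devices: a signer-facing display and a listener-side earphone.
class Display:
    def __init__(self):
        self.lines = []

    def show(self, text):
        self.lines.append(text)

class Earphone:
    def __init__(self):
        self.played = []

    def play(self, audio):
        self.played.append(audio)

def route(sign_text, display, earphone):
    # The signer verifies the recognized meaning on screen while the
    # listener simultaneously hears the spoken rendering.
    display.show(sign_text)
    earphone.play("tts:" + sign_text)

d, e = Display(), Earphone()
route("thank you", d, e)
```

The key property is that both outputs derive from the same recognized text, so what the signer verifies is exactly what the listener hears.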
Specifically, in some embodiments, the step of displaying the text display information on the sign language information conversion display screen for the sign language presenter to review comprises:
displaying all of the text display information within a first preset time for the sign language presenter to review, where the first preset time is 3 to 5 minutes. That is, earlier and real-time text display information is retained on the sign language information conversion display screen for only 3 to 5 minutes, so that the presenter can effectively review the recent conversation while overly long retention, which would compromise privacy, is avoided; this improves both safety of use and communication efficiency.
Specifically, in some embodiments, in S3, the interlocutor feeds back response voice information corresponding to the communication voice information. That is, the interlocutor receives the sign language presenter's intent as natural, emotionally expressive voice information, and can therefore give an appropriate spoken reply, ensuring the coherence and efficiency of the communication.
Specifically, in some embodiments, in S4, the step of generating the text feedback information according to the response voice information comprises:
generating, according to the response voice information, text feedback information annotated with the speaker's tone. For example, if the interlocutor speaks in a light, brisk tone, the reply is converted to text as "He/she lightly says xxx (content)"; if the interlocutor speaks urgently, it is converted as "He/she hurriedly says xxx (content)". This lets the sign language presenter grasp the interlocutor's current tone and mood;
and the communication information conversion display screen 32 displays the tone-annotated text feedback information for the sign language presenter to review.
Specifically, in some embodiments, the step of displaying the text feedback information with tone modifiers on the communication information conversion display screen 32 for the sign language presenter to view includes:
displaying all text feedback information with tone modifiers for a second preset time, where the second preset time is 3 to 5 minutes. That is, both prior and real-time tone-annotated text feedback on the screen is retained for only 3 to 5 minutes, so the sign language presenter can effectively review the preceding conversation while overly long retention does not compromise privacy. In this way the voice information of the communication partner is recognized and rendered as visible on-screen text, greatly improving the efficiency with which deaf-mute people communicate with others.
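The tone-annotation step above can be sketched by mapping a crude prosody feature, such as speaking rate, to a textual prefix. The thresholds, the choice of words-per-minute as the feature, and the function name are all illustrative assumptions; a real system would derive tone from a richer prosodic analysis.

```python
def annotate_tone(content, words_per_minute):
    """Prefix transcribed speech with a tone cue derived from speaking rate,
    so the sign language presenter can read how something was said."""
    if words_per_minute >= 180:
        prefix = "he/she says urgently: "
    elif words_per_minute <= 100:
        prefix = "he/she says slowly and calmly: "
    else:
        prefix = "he/she says in a light, even tone: "
    return prefix + content
```

For instance, a reply transcribed at 200 words per minute would be shown as "he/she says urgently: wait".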
The invention also provides a sign language recognition system capable of realizing interactive communication.
Specifically, as shown in fig. 3, the sign language recognition system capable of interactive communication includes:
the collection module 620 is configured to collect sign information of the sign language presenter and the response voice information fed back by the communication person in reply to the communication voice information;
the conversion display module 610 is configured to generate text display information and communication voice information corresponding to the sign information, and to generate text feedback information according to the response voice information.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing a sign language recognition program capable of interactive communication; when executed by a processor, the program performs the following operations:
collecting sign information of the sign language presenter;
generating text display information and communication voice information corresponding to the sign information according to the sign information;
according to the communication voice information, the communication person feeds back response voice information corresponding to the communication voice information;
and generating text feedback information according to the response voice information.
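The four operations stored on the medium form one conversation round trip. A minimal sketch of that flow follows; every function here is a stub whose name and behavior are assumptions for illustration (a real system would run gesture-recognition and speech models in place of the string operations).

```python
def recognize_sign(frames):
    # S1 stub: a real system would run a gesture-recognition model
    # over camera frames; here frames are already word labels.
    return " ".join(frames)

def sign_to_outputs(sign_text):
    # S2: the same recognized sign information drives both the
    # on-screen text and the synthesized communication voice.
    return {"display_text": sign_text, "voice": "tts:" + sign_text}

def reply_to_feedback(response_voice_text):
    # S3/S4 stub: turn the communication person's spoken reply
    # into text feedback for the sign language presenter.
    return "reply: " + response_voice_text

def conversation_round(frames, response_voice_text):
    sign_text = recognize_sign(frames)                 # S1
    outputs = sign_to_outputs(sign_text)               # S2
    feedback = reply_to_feedback(response_voice_text)  # S3 + S4
    return outputs, feedback
```

Each round thus yields the two forward outputs (text and voice) plus the backward text feedback.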
Further, the step of acquiring sign information of the sign language presenter includes:
the communication person wears the interactive glasses so that the acquisition component 5 of the glasses faces the sign language presenter;
the sign information of the sign language presenter is acquired by the acquisition component 5.
Further, the step of acquiring sign information of the sign language presenter by the acquisition component 5 includes:
acquiring hand motion information of the sign language presenter with the hand acquisition camera 51 of the acquisition component 5;
acquiring facial information of the sign language presenter with the face acquisition camera 52 of the acquisition component 5, where the facial information includes, among other things, facial expression and emotion information of the sign language presenter;
and generating sign information with emotion mood according to the hand motion information and the facial information.
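Fusing the two camera streams into emotion-bearing sign information can be sketched as joining the recognized gesture sequence with a facial-emotion label. The data shapes and function names below are assumptions; the patent does not specify a representation.

```python
def fuse_sign_info(hand_gestures, facial_emotion):
    """Attach the detected facial emotion to the recognized gesture
    sequence, yielding sign information 'with emotion mood'."""
    return {"gestures": list(hand_gestures), "emotion": facial_emotion}

def render_with_emotion(sign_info):
    # The emotion label colors the textual rendering of the gestures,
    # so the text display can carry mood as well as content.
    text = " ".join(sign_info["gestures"])
    return f"({sign_info['emotion']}) {text}"
```

For example, the gestures ["thank", "you"] with a detected "happy" expression render as "(happy) thank you".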
Further, the step of generating text display information and communication voice information corresponding to the sign information according to the sign information includes:
generating text display information corresponding to the sign information according to the sign information;
displaying the text display information on the sign language information conversion display screen for the sign language presenter to view;
generating communication voice information according to the sign information;
and transmitting the communication voice information to earphone 2 of the interactive glasses so that the communication person can hear it.
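Routing the two outputs, text to the conversion display screen and audio to earphone 2, can be sketched as a small dispatcher. The device numbering follows the reference numerals in the text, but the interfaces are assumptions made for illustration.

```python
class DisplayScreen:
    """Stand-in for the sign language information conversion display screen."""
    def __init__(self):
        self.lines = []
    def show(self, text):
        self.lines.append(text)

class Earphone:
    """Stand-in for earphone 2 of the interactive glasses."""
    def __init__(self):
        self.played = []
    def play(self, audio):
        self.played.append(audio)

def route_outputs(outputs, screen, earphone):
    # Text goes to the sign language presenter's screen; the synthesized
    # voice goes to the communication person's earphone.
    screen.show(outputs["display_text"])
    earphone.play(outputs["voice"])
```

A single recognized utterance thus fans out to both devices without either side needing to know about the other.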
Further, the step of displaying the text display information on the sign language information conversion display screen for the sign language presenter to view includes:
displaying all the text display information within the first preset time for the sign language presenter to view.
Further, the step of generating text feedback information according to the response voice information includes:
generating text feedback information with tone modifiers according to the response voice information;
displaying the text feedback information with tone modifiers on the communication information conversion display screen 32 for the sign language presenter to view.
Further, the step of displaying the text feedback information with tone modifiers on the communication information conversion display screen 32 for the sign language presenter to view includes:
displaying all the text feedback information with tone modifiers within the second preset time for the sign language presenter to view.
Furthermore, it should be understood that although this specification describes the embodiments separately, each embodiment does not necessarily stand alone; this manner of description is adopted only for clarity, and those skilled in the art will understand that the embodiments may be suitably combined to form further embodiments.
Variations and modifications of the above embodiments will occur to those skilled in the art in light of the foregoing disclosure and teachings. Therefore, the present invention is not limited to the embodiments described above; modifications, substitutions, and variations made in light of the invention are intended to fall within its scope. In addition, although specific terms are used in this specification, they are chosen for convenience of description only and do not limit the present invention in any way.

Claims (10)

1. A sign language recognition method capable of interactive communication, characterized by comprising the following steps:
s1, acquiring sign information of a sign language presenter;
s2, generating text display information and communication voice information corresponding to the sign information according to the sign information;
s3, the communication person feeding back response voice information corresponding to the communication voice information;
and S4, generating text feedback information according to the response voice information.
2. The interactive sign language recognition method according to claim 1, wherein the step of acquiring sign information of the sign language presenter comprises:
the communication person wearing the interactive glasses so that the acquisition component of the glasses faces the sign language presenter;
and acquiring sign information of the sign language presenter with the acquisition component.
3. The interactive sign language recognition method according to claim 2, wherein the step of acquiring sign information of the sign language presenter with the acquisition component comprises:
acquiring hand motion information of the sign language presenter with the hand acquisition camera of the acquisition component;
acquiring facial information of the sign language presenter with the face acquisition camera of the acquisition component, the facial information comprising facial expression and emotion information of the sign language presenter;
and generating sign information with emotion mood according to the hand motion information and the facial information.
4. The interactive sign language recognition method according to claim 1, wherein the step of generating text display information and communication voice information corresponding to the sign information according to the sign information comprises:
generating text display information corresponding to the sign information according to the sign information;
displaying the text display information on the sign language information conversion display screen for the sign language presenter to view;
generating communication voice information according to the sign information;
and transmitting the communication voice information to the earphone of the interactive glasses so that the communication person can hear it.
5. The interactive sign language recognition method according to claim 4, wherein the step of displaying the text display information on the sign language information conversion display screen for the sign language presenter to view comprises:
displaying all the text display information within a first preset time for the sign language presenter to view.
6. The interactive sign language recognition method according to claim 1, wherein the step of generating text feedback information according to the response voice information comprises:
generating text feedback information with tone modifiers according to the response voice information;
and displaying the text feedback information with tone modifiers on the communication information conversion display screen for the sign language presenter to view.
7. The interactive sign language recognition method according to claim 6, wherein the step of displaying the text feedback information with tone modifiers on the communication information conversion display screen comprises:
displaying all the text feedback information with tone modifiers within a second preset time for the sign language presenter to view.
8. A sign language recognition system capable of interactive communication, characterized by comprising:
a collection module, configured to collect sign information of the sign language presenter and the response voice information fed back by the communication person in reply to the communication voice information;
and a conversion display module, configured to generate text display information and communication voice information corresponding to the sign information, and to generate text feedback information according to the response voice information.
9. A sign language recognition terminal capable of interactive communication, characterized by comprising: a memory, a processor, and a sign language recognition program capable of interactive communication stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the sign language recognition method capable of interactive communication according to any one of claims 1 to 7.
10. A storage medium, characterized in that the storage medium stores a sign language recognition program capable of interactive communication which, when executed by a processor, implements the steps of the sign language recognition method capable of interactive communication according to any one of claims 1 to 7.
CN202410042235.0A 2024-01-10 2024-01-10 Sign language identification method, system, terminal and storage medium capable of realizing interactive communication Pending CN117831132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410042235.0A CN117831132A (en) 2024-01-10 2024-01-10 Sign language identification method, system, terminal and storage medium capable of realizing interactive communication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410042235.0A CN117831132A (en) 2024-01-10 2024-01-10 Sign language identification method, system, terminal and storage medium capable of realizing interactive communication

Publications (1)

Publication Number Publication Date
CN117831132A true CN117831132A (en) 2024-04-05

Family

ID=90517383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410042235.0A Pending CN117831132A (en) 2024-01-10 2024-01-10 Sign language identification method, system, terminal and storage medium capable of realizing interactive communication

Country Status (1)

Country Link
CN (1) CN117831132A (en)

Similar Documents

Publication Publication Date Title
US9436887B2 (en) Apparatus and method for automatic action selection based on image context
WO2020020063A1 (en) Object identification method and mobile terminal
CN110825226A (en) Message viewing method and terminal
CN113963659A (en) Adjusting method of display equipment and display equipment
KR101830908B1 (en) Smart glass system for hearing-impaired communication
US20170010849A1 (en) Control method and apparatus thereof
US20220405375A1 (en) User identity verification method and electronic device
CN110013260B (en) Emotion theme regulation and control method, equipment and computer-readable storage medium
CN112799508A (en) Display method and device, electronic equipment and storage medium
KR20150118893A (en) Method, apparatus and system for controlling emission
CN111103975A (en) Display method, electronic equipment and system
CN110111795A (en) A kind of method of speech processing and terminal device
WO2016065889A1 (en) Multifunctional glasses
CN111665950A (en) Intelligent interaction system integrated head-mounted equipment
CN117831132A (en) Sign language identification method, system, terminal and storage medium capable of realizing interactive communication
CN210109744U (en) Head-mounted alternating current device and head-mounted alternating current system
CN115757906A (en) Index display method, electronic device and computer-readable storage medium
Hussein Wearable computing: Challenges of implementation and its future
WO2022267978A1 (en) Backlight value adjustment method, and processor, display terminal and storage medium
KR200475529Y1 (en) Medical Equipment System including HMD Glasses for Person with Visual and Hearing Impairment
JP2003234842A (en) Real-time handwritten communication system
CN213934463U (en) Intelligent audio-visual bone conduction glasses
CN114531582B (en) Augmented reality function control method and electronic equipment
CN109769069B (en) Reminding method, wearable device and computer readable storage medium
CN112489619A (en) Voice processing method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination