CN111223369A - Deaf language translator and communication method thereof - Google Patents


Info

Publication number
CN111223369A
CN111223369A
Authority
CN
China
Prior art keywords
deaf
information
mute
unit
sign language
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010043237.3A
Other languages
Chinese (zh)
Inventor
王苏兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xinyu Information Technology Co Ltd
Original Assignee
Shanghai Xinyu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xinyu Information Technology Co Ltd filed Critical Shanghai Xinyu Information Technology Co Ltd
Priority to CN202010043237.3A (published as CN111223369A)
Publication of CN111223369A
Priority to CN202010783851.3A (published as CN111785138A)
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 Teaching, or communicating with, the blind, deaf or mute
    • G09B21/04 Devices for conversing with the deaf-blind

Abstract

The invention discloses a deaf language translator comprising a deaf-mute client, a normal person client, and a server in communication connection with both clients, wherein a deaf language database is configured on the server. The deaf-mute client comprises a deaf-mute input module, which acquires input information from the deaf-mute, and a deaf-mute output module, which outputs the input information of the normal person. The normal person client comprises a normal person input module, which acquires input information from the normal person, and a normal person output module, which outputs the input information of the deaf-mute. Through the communicatively connected deaf-mute client, normal person client, and server, the system realizes communication between deaf-mute persons and ordinary hearing persons.

Description

Deaf language translator and communication method thereof
Technical Field
The invention relates to the technical field of translators, and in particular to a deaf language translator and a communication method thereof.
Background
At present, there is no device on the market that assists deaf-mute and hearing-impaired people in communicating in public places, for example with related service personnel such as tour guides, doctors, police officers, or ticket sellers. With the progress and development of science and technology, video recognition and voice recognition have matured and been applied in many fields. By integrating these two technologies with related electronic technologies, a deaf language translator can be designed and applied to communication between deaf-mute or hearing-impaired people and ordinary people.
Disclosure of Invention
The present invention addresses one or more of the above-identified problems in the art by providing a deaf language translator.
According to one aspect of the invention, a deaf language translator is provided, which comprises a deaf-mute client, a normal client and a server which is in communication connection with the deaf-mute client and the normal client, wherein a deaf language database is configured on the server, and sign language information of deaf language and character information corresponding to the sign language information are stored in the deaf language database;
the client side of the deaf-mute comprises a deaf-mute input module and a deaf-mute output module, wherein the deaf-mute input module is used for collecting input information of the deaf-mute, and the deaf-mute output module is used for outputting input information of normal people;
the normal person client comprises a normal person input module and a normal person output module, the normal person input module is used for collecting input information of normal persons, and the normal person output module is used for outputting input information of deaf-mutes.
In some embodiments, the deaf-mute input module comprises a sign language video acquisition unit and a deaf-mute character acquisition unit;
the server comprises a deaf-mute information conversion module;
the normal person output module comprises a normal person character display unit and a normal person voice output unit;
the deaf-mute information conversion module is used for converting the information collected by the sign language video collection unit or the deaf-mute character collection unit into the information and outputting the information to the normal character display unit or the normal voice output unit.
In some embodiments, the deaf-mute information conversion module comprises a sign language video to text unit and a text to voice unit, and the sign language video to text unit is in communication connection with the text to voice unit.
In some embodiments, the normal person input module comprises a normal person voice acquisition unit and a normal person character acquisition unit;
the server comprises a normal person information conversion module;
the deaf-mute output module comprises a deaf-mute character display unit and a deaf-mute sign language video output unit; the deaf-mute information conversion module is used for converting the information collected by the normal person voice collection unit or the normal person character collection unit and outputting the converted information to the deaf-mute character display unit or the deaf-mute sign language video output unit.
In some embodiments, the normal person information conversion module includes a text-to-sign language video unit and a voice-to-text unit.
The invention also relates to a communication method between deaf-mute people and normal people, which is based on the deaf language translator of any one of claims 1 to 5 and comprises the following steps:
establishing a deaf language database of the deaf language translator, the database being configured on a server;
sign language information of the deaf language and character information corresponding to the sign language information are stored in the deaf language database;
the deaf-mute client collects sign language information or character information and sends it to the server;
and the server compares the collected sign language information or character information against the deaf language database and outputs the corresponding information to the normal person client.
In some embodiments, the collecting sign language information by the deaf-mute client and sending the sign language information to the server comprises:
acquiring a video stream containing gestures, and extracting key image frames containing gesture actions from the video stream as target images;
extracting feature points of the gestures from the target images;
matching, by the server, the extracted gesture feature points against a pre-established deaf language database to acquire the corresponding sign language information;
and determining and outputting a text form and/or a voice form corresponding to the matched sign language information.
In some embodiments, the server comprises a deaf-mute information conversion module comprising a sign-language-video-to-character unit and a character-to-voice unit. According to the sign language information, the server outputs the text form to the normal person character display unit through the sign-language-video-to-character unit, and outputs the voice form to the normal person voice output unit through the sign-language-video-to-character unit followed by the character-to-voice unit.
In some embodiments, extracting a keyframe frame containing a gesture motion from the video stream comprises:
removing image frames which do not contain gesture actions from all image frames of the video stream to obtain a first image set;
and removing the images with the image definition lower than a preset threshold value from the first image set to obtain a key image frame containing the gesture action.
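The two-step key-frame filter described above can be sketched as follows. This is a hedged illustration: `contains_gesture` is a hypothetical gesture-detector callable (the patent does not specify one), and the variance-of-Laplacian score is one common stand-in for the unspecified "image definition" measure.

```python
import numpy as np

def laplacian_variance(gray):
    # Variance of a 4-neighbour Laplacian response: a common sharpness
    # ("image definition") score; higher means sharper.
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def extract_key_frames(frames, contains_gesture, sharpness_threshold=50.0):
    # Step 1: remove frames containing no gesture action ("first image set").
    first_image_set = [f for f in frames if contains_gesture(f)]
    # Step 2: remove frames whose definition is below the preset threshold.
    return [f for f in first_image_set
            if laplacian_variance(f) >= sharpness_threshold]
```

For example, a noisy (sharp) frame survives the filter while a uniform (blur-like) frame is dropped; the threshold value is a tunable assumption, not taken from the patent.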
In some embodiments, the pre-established deaf language database includes feature point models for a plurality of preset sign languages; the feature point models are generated by extracting feature points, using a deep learning algorithm, from sample images of the preset sign languages taken at different shooting angles.
In some embodiments, the voice-to-text unit of the server is configured to convert voice information collected by a microphone into text and send the text to a display screen for the deaf-mute and a display screen for service staff.
The invention has the following advantages. Sign language information or character information is collected and sent to the server. The server compares it against the preset deaf language database and, through the sign-language-video-to-character unit, analyzes out the corresponding characters, which can be output to the normal person character display unit (which can be a display screen). The analyzed characters can also be output, through the character-to-voice unit, to the normal person voice output unit (which can be a loudspeaker or the like);
the normal person can input information to the server through the normal person voice acquisition unit by adopting a microphone, the normal person character acquisition unit can adopt a keyboard or a computer writing pad, the server calls the deafness database, corresponding sign language information is analyzed through the character-to-sign language video unit in the normal person information conversion module and is output to the deaf-mute sign language video output unit, corresponding sign language information is analyzed through the voice information by the voice-to-character unit in the normal person information conversion module and the character-to-sign language video unit in the character-to-sign language video conversion module and is output to the deaf-mute character display unit, and communication between the deaf-mute and the normal person can be realized through the method.
Drawings
FIG. 1 is a block diagram of a deaf language translator;
FIG. 2 is a flow chart of a communication method between deaf dumb and normal persons based on the deaf language translator.
Detailed Description
The technical scheme of the application is further explained in detail with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific situations.
According to one aspect of the present invention, as shown in fig. 1-2, a deaf language translator is provided, which comprises a deaf-mute client, a normal client, and a server in communication connection with the deaf-mute client and the normal client, wherein a deaf language database is configured on the server, and sign language information of deaf language and character information corresponding to the sign language information are stored in the deaf language database; the client side of the deaf-mute comprises a deaf-mute input module and a deaf-mute output module, wherein the deaf-mute input module is used for acquiring input information of the deaf-mute, and the deaf-mute output module is used for outputting input information of normal people; the normal person client comprises a normal person input module and a normal person output module, wherein the normal person input module is used for acquiring input information of normal persons, and the normal person output module is used for outputting input information of deaf-mutes.
In this embodiment, the deaf language database comprises a deaf person language sequence feature library and an ordinary person language sequence feature library configured on the server; the server compares the information received from the deaf-mute input module and the normal person input module with these two feature libraries to obtain the translated language sequence.
As an optional technical scheme of the application: the deaf-mute input module comprises a sign language video acquisition unit and a deaf-mute character acquisition unit;
the server comprises a deaf-mute information conversion module;
the normal person output module comprises a normal person character display unit and a normal person voice output unit; the deaf-mute information conversion module is used for converting the information collected by the sign language video collection unit or the deaf-mute character collection unit into the information and outputting the information to the normal character display unit or the normal voice output unit.
As an optional technical scheme of the application: the deaf-mute information conversion module comprises a sign language video to character unit and a character to voice unit, and the sign language video to character unit is in communication connection with the character to voice unit.
As an optional technical scheme of the application: the sign language video acquisition unit can adopt a camera, the deaf-mute character acquisition unit can adopt a keyboard or a computer writing pad, the normal character display unit can adopt a display, and the normal voice output unit can adopt a loudspeaker.
As an optional technical scheme of the application: the normal person input module comprises a normal person voice acquisition unit and a normal person character acquisition unit;
the server comprises a normal person information conversion module;
the deaf-mute output module comprises a deaf-mute character display unit and a deaf-mute sign language video output unit;
the deaf-mute information conversion module is used for converting the information collected by the normal person voice collection unit or the normal person character collection unit and outputting the converted information to the deaf-mute character display unit or the deaf-mute sign language video output unit.
As an optional technical scheme of the application: the normal person information conversion module comprises a text-to-sign language video unit and a voice-to-text unit.
As an optional technical scheme of the application: the normal person voice acquisition unit can adopt a microphone, the normal person character acquisition unit can adopt a keyboard or a computer writing board, and the deaf-mute character display unit and the deaf-mute sign language video output unit can adopt a display for outputting.
The invention also provides a communication method between the deaf-mute and the normal person, which is based on the deaf language translator and comprises the following steps:
establishing a deaf language database of the deaf language translator, the database being configured on a server;
sign language information of the deaf language and character information corresponding to the sign language information are stored in a deaf language database;
the method comprises the steps that a deaf-mute client collects sign language information or character information and sends the sign language information or the character information to a server;
the server calls the information in the deaf language database and the collected sign language information or character information to analyze and compare, and outputs corresponding information to the normal client.
As an optional technical scheme of the application: the method for the deaf-mute to collect sign language information and send the sign language information to the server comprises the following steps:
acquiring a video stream containing a gesture, and extracting a key image frame containing a gesture action from the video stream to be used as a target image;
extracting feature points of the gesture according to the target image;
according to the feature points of the extracted gesture and a pre-established deafness database, the server acquires sign language information matched with the deafness database according to the feature points of the gesture;
and determining and outputting a text form and/or a voice form corresponding to the sign language information according to the matched sign language information.
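The "analyze and compare" matching step can be sketched as a nearest-neighbour lookup over the database. This is one simple reading, not the patent's specified algorithm; the function name, the dictionary layout, and the Euclidean distance metric are all assumptions made for illustration.

```python
import numpy as np

def match_sign_language(gesture_features, deaf_language_database):
    # deaf_language_database maps each sign's character information to a
    # reference feature vector. Nearest-neighbour matching by Euclidean
    # distance is one plausible realization of the comparison step.
    best_text, best_distance = None, float("inf")
    for text, reference in deaf_language_database.items():
        distance = float(np.linalg.norm(np.asarray(gesture_features) - reference))
        if distance < best_distance:
            best_text, best_distance = text, distance
    return best_text
</br>```

A query feature vector is assigned the character information of whichever stored sign it lies closest to; the matched text can then be passed on to the text and voice output steps.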
As an optional technical scheme of the application: the server comprises a deaf-mute information conversion module comprising a sign-language-video-to-character unit and a character-to-voice unit. According to the sign language information, the server outputs the text form to the normal person character display unit through the sign-language-video-to-character unit, and outputs the voice form to the normal person voice output unit through the sign-language-video-to-character unit followed by the character-to-voice unit. In this embodiment, the sign-language-video-to-character unit may also transmit its output to the deaf-mute character display unit at the deaf-mute output end, and the normal person information conversion module may likewise output information to the normal person character display unit or the normal person voice output unit of the normal person output module.
As an optional technical scheme of the application: extracting a key image frame containing gesture actions from a video stream, comprising:
removing image frames which do not contain gesture actions from all image frames of the video stream to obtain a first image set;
and removing the images with the image definition lower than a preset threshold value from the first image set to obtain a key image frame containing the gesture action.
As an optional technical scheme of the application: the pre-established deaf language database comprises feature point models for a plurality of preset sign languages; the feature point models are generated by extracting feature points, using a deep learning algorithm, from sample images of the preset sign languages taken at different shooting angles.
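One plausible way to build such a per-sign feature point model from multi-angle samples is to average the extracted feature vectors into a prototype per sign. This sketch is an assumption: the patent names no specific network or aggregation scheme, so `extract_features` here is a stand-in for the deep-learning feature extractor and prototype averaging is an illustrative choice.

```python
import numpy as np

def build_feature_point_model(samples, extract_features):
    # samples maps each preset sign's text to a list of sample images taken
    # at different shooting angles; extract_features stands in for the
    # deep-learning feature extractor (not specified by the patent).
    model = {}
    for text, images in samples.items():
        features = np.stack([extract_features(img) for img in images])
        model[text] = features.mean(axis=0)  # one prototype vector per sign
    return model
```

The resulting dictionary of prototype vectors is exactly the shape of database the matching step consumes, so averaging over shooting angles gives a view-tolerant reference per sign.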
The invention collects sign language information or character information and sends it to the server. Through the configured preset deaf language database, the server compares the collected information and, via the sign-language-video-to-character unit, analyzes out the corresponding characters. The server can output the characters to the normal person character display unit (which can be a display screen), and can also output the analyzed characters, through the character-to-voice unit, to the normal person voice output unit (which can be a loudspeaker or the like);
the normal person can input information to the server through the normal person voice acquisition unit by adopting a microphone, the normal person character acquisition unit can adopt a keyboard or a computer writing pad, the server calls the deafness database, corresponding sign language information is analyzed through the character-to-sign language video unit in the normal person information conversion module and is output to the deaf-mute sign language video output unit, corresponding sign language information is analyzed through the voice information by the voice-to-character unit in the normal person information conversion module and the character-to-sign language video unit in the character-to-sign language video conversion module and is output to the deaf-mute character display unit, and communication between the deaf-mute and the normal person can be realized through the method.
The foregoing are only some embodiments of the invention. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept, and all such changes and modifications fall within the protection scope of the invention.

Claims (10)

1. The deaf language translator is characterized by comprising
The system comprises a deaf-mute client, a normal client and a server which is in communication connection with the deaf-mute client and the normal client, wherein a deaf language database is configured on the server, and sign language information of deaf language and character information corresponding to the sign language information are stored in the deaf language database;
the client side of the deaf-mute comprises a deaf-mute input module and a deaf-mute output module, wherein the deaf-mute input module is used for collecting input information of the deaf-mute, and the deaf-mute output module is used for outputting input information of normal people;
the normal person client comprises a normal person input module and a normal person output module, the normal person input module is used for collecting input information of normal persons, and the normal person output module is used for outputting input information of deaf-mutes.
2. The deaf-language translator of claim 1,
the deaf-mute input module comprises a sign language video acquisition unit and a deaf-mute character acquisition unit;
the server comprises a deaf-mute information conversion module;
the normal person output module comprises a normal person character display unit and a normal person voice output unit;
the deaf-mute information conversion module is used for converting the information collected by the sign language video collection unit or the deaf-mute character collection unit and outputting the converted information to the normal person character display unit or the normal person voice output unit.
3. The deaf-language translator of claim 2,
the deaf-mute information conversion module comprises a sign language video to character unit and a character to voice unit, and the sign language video to character unit is in communication connection with the character to voice unit.
4. The deaf-language translator of claim 1,
the normal person input module comprises a normal person voice acquisition unit and a normal person character acquisition unit;
the server comprises a normal person information conversion module;
the deaf-mute output module comprises a deaf-mute character display unit and a deaf-mute sign language video output unit; the normal person information conversion module is used for converting the information collected by the normal person voice collection unit or the normal person character collection unit and outputting the converted information to the deaf-mute character display unit or the deaf-mute sign language video output unit.
5. The deaf-language translator of claim 4,
the normal person information conversion module comprises a text-to-sign language video unit and a voice-to-text unit.
6. A communication method between deaf-mute persons and normal persons, characterized in that, based on the deaf language translator of any one of claims 1 to 5, it comprises:
establishing a deafness database of a deafness translator, wherein the database is provided with a server;
sign language information of the deaf language and character information corresponding to the sign language information are stored in the deaf language database;
the deaf-mute client collects sign language information or character information and sends the sign language information or the character information to the server;
and the server calls the information in the deaf language database and the collected sign language information or character information to analyze and compare, and outputs corresponding information to the normal client.
7. The communication method of the deaf-dumb people and the normal people according to claim 6, wherein the collecting sign language information of the client of the deaf-dumb people and sending the sign language information to the server comprises:
acquiring a video stream containing gestures, and extracting key image frames containing gesture actions from the video stream as target images;
extracting feature points of the gestures from the target images;
matching, by the server, the extracted gesture feature points against the pre-established deaf language database to acquire the corresponding sign language information;
and determining and outputting a text form and/or a voice form corresponding to the matched sign language information.
8. The method as claimed in claim 7, wherein the server comprises a deaf-mute information conversion module, the deaf-mute information conversion module comprises a sign language video to text unit and a text to speech unit, the server outputs the text form to a normal person text display unit through the sign language video to text unit according to the sign language information, and the server outputs the speech form to a normal person speech output unit through the sign language video to text unit and the text to speech unit according to the sign language information.
9. The method as claimed in claim 7, wherein extracting key image frames containing gesture actions from said video stream comprises:
removing image frames which do not contain gesture actions from all image frames of the video stream to obtain a first image set;
and removing the images with the image definition lower than a preset threshold value from the first image set to obtain a key image frame containing the gesture action.
10. The method as claimed in claim 6, wherein the pre-established deaf language database comprises feature point models with a plurality of preset sign languages, and the feature point models are generated by extracting feature points from sample images with different shooting angles and containing a plurality of preset sign languages by using a deep learning algorithm.
CN202010043237.3A 2020-01-15 2020-01-15 Deaf language translator and communication method thereof Withdrawn CN111223369A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010043237.3A CN111223369A (en) 2020-01-15 2020-01-15 Deaf language translator and communication method thereof
CN202010783851.3A CN111785138A (en) 2020-01-15 2020-08-06 Deaf language translator and communication method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010043237.3A CN111223369A (en) 2020-01-15 2020-01-15 Deaf language translator and communication method thereof

Publications (1)

Publication Number Publication Date
CN111223369A true CN111223369A (en) 2020-06-02

Family

ID=70831847

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010043237.3A Withdrawn CN111223369A (en) 2020-01-15 2020-01-15 Deaf language translator and communication method thereof
CN202010783851.3A Pending CN111785138A (en) 2020-01-15 2020-08-06 Deaf language translator and communication method thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010783851.3A Pending CN111785138A (en) 2020-01-15 2020-08-06 Deaf language translator and communication method thereof

Country Status (1)

Country Link
CN (2) CN111223369A (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101115088A (en) * 2007-08-07 2008-01-30 周运南 Mobile phone dedicated for deaf-mutes
CN101605158A (en) * 2008-06-13 2009-12-16 鸿富锦精密工业(深圳)有限公司 Mobile phone dedicated for deaf-mutes
KR100953979B1 (en) * 2009-02-10 2010-04-21 김재현 Sign language learning system
CN105957514B (en) * 2016-07-11 2019-07-26 吉林宇恒光电仪器有限责任公司 A kind of portable deaf-mute's alternating current equipment
CN106686223A (en) * 2016-12-19 2017-05-17 中国科学院计算技术研究所 A system and method for assisting dialogues between a deaf person and a normal person, and a smart mobile phone
CN109754677A (en) * 2019-02-26 2019-05-14 华南理工大学 A kind of double mode deaf-mute alternating current equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506410A (en) * 2020-12-09 2021-03-16 福州大学 Deaf-mute barrier-free online video interaction device
CN112686132A (en) * 2020-12-28 2021-04-20 南京工程学院 Gesture recognition method and device
CN113476168A (en) * 2021-06-25 2021-10-08 深圳市妇幼保健院 Prompting method in oral treatment process and related equipment
CN113476168B (en) * 2021-06-25 2022-07-22 深圳市妇幼保健院 Prompting method and related equipment in oral treatment process
CN116032679A (en) * 2023-03-28 2023-04-28 合肥坤语智能科技有限公司 Intelligent host interaction control system for intelligent hotel
CN116032679B (en) * 2023-03-28 2023-05-30 合肥坤语智能科技有限公司 Intelligent host interaction control system for intelligent hotel

Also Published As

Publication number Publication date
CN111785138A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111223369A (en) Deaf language translator and communication method thereof
US11138903B2 (en) Method, apparatus, device and system for sign language translation
CN107918771B (en) Person identification method and wearable person identification system
CN109429522A (en) Voice interactive method, apparatus and system
KR101988037B1 (en) Method for providing sign language regognition service for communication between disability and ability
CN112016367A (en) Emotion recognition system and method and electronic equipment
CN110431524A (en) Information processing system, information processing unit, message handling program and information processing method
CN109063624A (en) Information processing method, system, electronic equipment and computer readable storage medium
CN110321544A (en) Method and apparatus for generating information
CN112768070A (en) Mental health evaluation method and system based on dialogue communication
CN108510988A (en) A kind of speech recognition system and method for deaf-mute
CN110399810B (en) Auxiliary roll-call method and device
CN110457712A (en) A kind of English translation system
KR102527589B1 (en) Public opinion acquisition and word viscosity model training methods and devices, server, and medium
CN114239610A (en) Multi-language speech recognition and translation method and related system
CN113743267A (en) Multi-mode video emotion visualization method and device based on spiral and text
CN210516214U (en) Service equipment based on video and voice interaction
Reda et al. SVBiComm: Sign-Voice Bidirectional Communication System for Normal,“Deaf/Dumb” and Blind People based on Machine Learning
CN112256827A (en) Sign language translation method and device, computer equipment and storage medium
CN111583932A (en) Sound separation method, device and equipment based on human voice model
CN111326142A (en) Text information extraction method and system based on voice-to-text and electronic equipment
Bharathi et al. Signtalk: Sign language to text and speech conversion
CN109345652A (en) Work attendance device and implementation method based on speech recognition typing text
CN113221990B (en) Information input method and device and related equipment
CN111985231B (en) Unsupervised role recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20200602)