CN114141108A - Blind-aiding voice-aided reading equipment and method - Google Patents

Blind-aiding voice-aided reading equipment and method

Info

Publication number
CN114141108A
CN114141108A
Authority
CN
China
Prior art keywords
blind
upper body
reading
binocular vision
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111470788.9A
Other languages
Chinese (zh)
Inventor
李智军
光启宏
李国欣
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202111470788.9A priority Critical patent/CN114141108A/en
Publication of CN114141108A publication Critical patent/CN114141108A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00Teaching, or communicating with, the blind, deaf or mute
    • G09B21/001Teaching or communicating with blind persons
    • G09B21/006Teaching or communicating with blind persons using audible presentation of the information

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides a blind-aiding voice-assisted reading device and method, mainly comprising a binocular vision camera, a voice broadcast device, an upper body strap processing core, lightweight-CTPN-based character detection, and image recognition based on a lightweight RetinaFace and an improved FaceNet. The binocular vision camera is fixed in a suitable position on the head by a head fixing device, making it convenient to wear and to detect the surrounding environment; the upper body strap processing core receives and processes the pictures and text provided by the binocular vision camera. All of the obtained information can be broadcast through the Bluetooth voice communication device. The invention is well suited to reading books, newspapers, and other text in the daily life of the blind, and also adds an acquaintance recognition function, greatly improving the convenience of life for the blind.

Description

Blind-aiding voice-aided reading equipment and method
Technical Field
The invention relates to the technical field of intelligent electronic devices, and in particular to a blind-aiding voice-assisted reading device and method.
Background
With the rapid progress of the times, devices of all kinds are becoming intelligent, bringing great convenience to human life. The blind are a special group: because of their visual impairment, they face many difficulties in daily life and travel. Applying intelligent devices to the daily life of the blind would bring great convenience and substantially improve their quality of life.
At present, research on blind-guiding equipment is still in its infancy. When going out, the blind mainly rely on aids such as intelligent guide canes and guide glasses, or on the guidance of a guide dog. The ultrasonic ranging used by guide canes is not stable enough in use and is difficult to popularize; guide glasses can only detect obstacles above the waist and are therefore spatially limited; guide dogs require complex training and are costly. Devices such as guide vehicles remain conceptual models because they are hard to carry or are heavily affected by the environment. These factors limit the blind person's ability to perceive the environment, act autonomously, and communicate and interact with others, making basic self-care and work difficult to achieve and causing negative psychological effects.
Blind-guiding assistance is a noble cause and one with high knowledge and technology content. Drawing on new discoveries and developments in science and technology, new methods and technologies should be explored to help the blind improve their living and working abilities, and high-performance, low-cost blind-guiding assistive products should be developed so that the blind can enjoy the benefits of technology. Restoring and improving the functions of the blind allows them to recover their living and working abilities to the greatest extent, improves their quality of life, and reduces the burden on families and society. The huge social demand for intelligent blind-assisting equipment drives investment in science and technology and their development. Using the latest theories and technologies of disciplines such as ultrasonic detection, neuroscience and engineering, neural information decoding, and intelligent control engineering to study technologies and methods for assisting and enhancing function after visual loss, and to develop advanced intelligent blind-assisting instruments and systems, has become a research hotspot in international neuroscience, biomedical engineering, computer science, and related fields.
Addressing the bottlenecks of the prior art, a wearable intelligent blind-assisting system is developed starting from optimized mechanical design, and key technologies such as visual information perception and voice interaction are studied, finally constructing a natural, flexible, safe, and reliable wearable intelligent blind-assisting system. Implementing this project helps promote the research, development, and application of related products, improves existing blind-assisting services, and enhances the blind person's environmental perception, autonomous action, and communication abilities, so that basic self-care can be achieved. It also meets the urgent needs of the blind-guiding assistance industry in key fields that are currently lacking and will be needed in future development, and is a highly promising strategic choice in national technological innovation research.
Patent document CN110347978A discloses a method for assisting the reading of electronic books. When a user reads an e-book, poorly shielded sensitive words are filtered and the interface is optimized; knowledge points edited by teachers and parents accompanying the reading user can be identified and annotated, and personalized annotation schemes can be pushed to a platform and shared with all users. The e-book is hyperlinked in full text: when the user clicks on a word, the system provides effective reading assistance through automatic recognition of Chinese and English, explanation of Chinese words and idioms, and explanation and pronunciation of English words. There is still room for improvement in structure and performance.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide blind-aiding voice-assisted reading equipment and a blind-aiding voice-assisted reading method.
The invention provides a blind-aiding voice-assisted reading device, comprising: a binocular vision camera 1, a camera fixing component 2, a camera information transmission connecting device 3, a Bluetooth voice communication device 4, an upper body strap processing core 5, and an upper body strap fixing component 6. The binocular vision camera 1 is fixed on the head of the human body by the camera fixing component 2; the camera information transmission connecting device 3 connects the binocular vision camera 1 with the upper body strap processing core 5 to complete information transmission; the Bluetooth voice communication device 4 is connected to the upper body strap processing core 5 by Bluetooth to complete voice broadcast for the blind; the upper body strap processing core 5 is fastened by the upper body strap fixing component 6 to prevent it from falling.
Preferably, the camera fixing component 2 is a flexible, retractable strap, ensuring that the binocular vision camera 1 is fixed on the blind person's head without causing severe pressure.
Preferably, the binocular vision camera 1 adopts a novel structural design: the spectacle frame is made by 3D printing to meet the weight requirement, so that the blind person's head does not tire excessively.
The binocular vision camera 1 includes: a spectacle frame;
the weight of the spectacle frame is less than a set threshold.
The upper body strap processing core 5 is reliably connected with the camera information transmission connecting device 3, ensuring smooth information transmission.
Preferably, the upper body harness processing core 5 includes: two parallel intelligent processing network modules.
The software core of the upper body strap processing core 5 consists of two parallel intelligent processing networks, which can recognize text in images from books and electronic devices and can also effectively handle recognition problems for real-world 3D stereoscopic images.
Preferably, the upper body strap processing core 5 detects any one or more of the following information:
-a text message;
-picture information;
The results detected by the upper body strap processing core 5, such as text and picture information, can be reported to the blind person through the Bluetooth transmission device, achieving a feedback effect.
Preferably, the upper body harness processing core 5 is flexibly and tightly connected to the back of the blind through an upper body harness fixing member 6.
The upper body strap fixing component 6 adopts a flexible design to avoid injuring the blind person's muscles.
Preferably, a blind-aiding voice-assisted reading method using the device comprises the following steps:
step S1: using the blind-aiding voice-assisted reading device to generate speech that assists reading of printed matter and of electronic files such as mobile phone files, computer files, and web content;
step S2: within 1 meter of the blind person's head, the recognition accuracy for Chinese and English characters, symbols, and the like smaller than the No. 1 font size is at least 95%, with a delay of less than 1 second. Character detection is performed with a lightweight CTPN, a character detection algorithm proposed in recent years. CTPN combines a CNN with an LSTM deep network, can effectively detect horizontally distributed text in complex scenes, and is currently among the better character detection algorithms.
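The text-line construction stage of a CTPN-style detector can be illustrated with a small sketch. This is not the patent's implementation: the box format, gap limit, and overlap threshold are illustrative assumptions. The idea is that the network emits fixed-width vertical proposals, which are then chained into horizontal text lines when they are close enough and overlap vertically.

```python
# Sketch of CTPN-style text-line construction (illustrative, not the
# patent's code): chain fixed-width vertical proposals into text lines.

def vertical_overlap(a, b):
    """Fraction of vertical overlap between two boxes (x, y, w, h)."""
    top = max(a[1], b[1])
    bottom = min(a[1] + a[3], b[1] + b[3])
    return max(0.0, bottom - top) / min(a[3], b[3])

def merge_proposals(boxes, max_gap=16, min_overlap=0.7):
    """Chain left-to-right-sorted proposals into text-line bounding boxes."""
    boxes = sorted(boxes)
    chains, current = [], [boxes[0]]
    for box in boxes[1:]:
        prev = current[-1]
        gap = box[0] - (prev[0] + prev[2])
        if gap <= max_gap and vertical_overlap(prev, box) >= min_overlap:
            current.append(box)  # close enough: same text line
        else:
            chains.append(current)
            current = [box]
    chains.append(current)
    # Collapse each chain into one enclosing bounding box.
    result = []
    for chain in chains:
        x0 = min(b[0] for b in chain)
        y0 = min(b[1] for b in chain)
        x1 = max(b[0] + b[2] for b in chain)
        y1 = max(b[1] + b[3] for b in chain)
        result.append((x0, y0, x1 - x0, y1 - y0))
    return result

# Two adjacent 16-px-wide proposals on one line, one far away on another.
props = [(0, 10, 16, 20), (18, 11, 16, 20), (200, 100, 16, 20)]
print(merge_proposals(props))  # [(0, 10, 34, 21), (200, 100, 16, 20)]
```

In the full detector these proposals come from a CNN-plus-BiLSTM network; the sketch shows only the geometric chaining step that makes horizontally distributed text detectable as lines.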
Preferably, the method further comprises the following steps:
step S3: the device can help the blind person identify no fewer than 50 acquaintances, with a face detection accuracy of at least 99% and a face recognition accuracy within 3 meters of at least 80%; the recognition result is converted into speech to help the blind person communicate. Face recognition uses a lightweight RetinaFace together with an improved FaceNet. The lightweight RetinaFace uses MobileNet 0.25 as its backbone feature extraction network, guaranteeing recognition speed while providing excellent detection performance; on face data sets captured under normal weather and lighting conditions, its detection rate exceeds 99.5%.
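The acquaintance-identification step of a FaceNet-style system can be sketched as a nearest-neighbor search over enrolled embedding vectors. The embeddings, names, and distance threshold below are toy values, not anything from the patent; a real system would obtain the embeddings from the trained network.

```python
# Sketch of FaceNet-style identification (toy values): match a detected
# face's embedding to the nearest enrolled acquaintance within a threshold.

import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, gallery, threshold=0.8):
    """Return the closest enrolled name, or None if nobody is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in gallery.items():
        d = euclidean(embedding, ref)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

gallery = {"alice": [0.1, 0.9, 0.0], "bob": [0.8, 0.1, 0.2]}
print(identify([0.12, 0.88, 0.05], gallery))  # alice
print(identify([0.5, 0.5, 0.9], gallery))     # None (no match close enough)
```

The identified name would then be passed to the speech synthesis stage so it can be announced over the Bluetooth voice device.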
Preferably, the method further comprises the following steps:
step S4: the improved FaceNet uses EfficientNet-B0 as its backbone feature extraction network. EfficientNet-B0, proposed by Google in 2019, is an excellent feature extraction network with outstanding feature extraction capability. During training of the improved FaceNet, the advantages of FaceNet and Softmax training are combined, greatly improving training performance.
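Combining FaceNet-style metric learning with Softmax classification is commonly done by summing a triplet margin loss and a cross-entropy loss. The sketch below illustrates that combination with toy numbers; the weighting factor and margin are illustrative assumptions, not values from the patent.

```python
# Sketch of a combined softmax + triplet loss (toy values), in the spirit
# of the "improved FaceNet" training described above.

import math

def softmax_ce(logits, label):
    """Numerically stable softmax cross-entropy for one sample."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    return -math.log(exps[label] / sum(exps))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet margin loss on squared distances."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)

def combined_loss(logits, label, anchor, positive, negative, alpha=0.5):
    return softmax_ce(logits, label) + alpha * triplet_loss(anchor, positive, negative)

loss = combined_loss([2.0, 0.5], 0, [0.0, 1.0], [0.1, 0.9], [1.0, 0.0])
print(round(loss, 4))  # triplet term is zero here; only the CE term remains
```

In practice the classification head supplies gradient signal early in training while the triplet term shapes the embedding space, which is the advantage the text attributes to the combination.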
Preferably, the method further comprises the following steps:
step S5: character recognition is realized with DenseNet, a modified version of ResNet.
the step S5 includes:
step S5.1: dense connections are established between every earlier layer and every later layer. These dense connections give DenseNet a strong feature-reuse capability, making it particularly suitable for character recognition, which requires both shallow and deep semantic information.
Step S5.2: the Synthetic Chinese String data set open source data is used as a training set, the data set is divided into a training set and a verification set according to the ratio of 99:1, 364 ten thousand pictures in total, the data are randomly generated by using Chinese corpus news and Chinese, through changes such as fonts, sizes, gray levels, fuzziness, perspective and stretching, and the accuracy of 98.3% on the verification set is achieved.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses several excellent intelligent deep neural network structures to detect various types of targets, actively modifies the original network structures, greatly reduces the number of parameters, and increases the running speed, so that the whole system fits within the processing capability of a high-performance image processing master computer.
2. The head-mounted equipment adopts a novel structural design with an actual weight of less than 80 g, and uses flexible fixing components, so it is easy to move while satisfying the fixing requirement, giving the blind person a better wearing experience.
3. The invention uses a Bluetooth voice communication device for interaction: the results detected by the upper body strap processing core are broadcast through the Bluetooth voice communication device.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic view of the overall structure of the present invention.
Fig. 2 is a schematic view of a head assembly in an embodiment of the invention.
Fig. 3 is a block diagram illustrating a software structure of an upper body harness processing core according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of an improved facenet network structure in the embodiment of the present invention.
Fig. 5 is a schematic diagram of the CTPN in the embodiment of the present invention.
In the figures: 1-binocular vision camera, 2-camera fixing component, 3-camera information transmission connecting device, 4-Bluetooth voice communication device, 5-upper body strap processing core, 6-upper body strap fixing component.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that it would be obvious to those skilled in the art that various changes and modifications can be made without departing from the spirit of the invention. All falling within the scope of the present invention.
The invention relates to the technical field of intelligent electronic devices, in particular to a blind-aiding voice-assisted reading device. The device mainly comprises a binocular vision camera, a voice broadcast device, an upper body strap processing core, lightweight-CTPN-based character detection, and image recognition based on a lightweight RetinaFace and an improved FaceNet. The binocular vision camera is fixed in a suitable position on the head by a head fixing device, making it convenient to wear and to detect the surrounding environment; the upper body strap processing core receives and processes the pictures and text provided by the binocular vision camera. All of the obtained information can be broadcast through the Bluetooth voice communication device. The invention is well suited to reading books, newspapers, and other text in the daily life of the blind, greatly improving the convenience of life for the blind.
As shown in figs. 1 to 5, a blind-aiding voice-assisted reading device comprises a binocular vision camera, a camera fixing component, a camera information transmission connecting device, a Bluetooth voice communication device, an upper body strap processing core, and an upper body strap fixing component. The binocular vision camera is fixed on the blind person's head by the camera fixing component; the camera information transmission connecting device connects the binocular vision camera and the upper body strap processing core to complete information transmission; the Bluetooth voice communication device is connected to the upper body strap processing core via Bluetooth to complete voice broadcast for the blind; the upper body strap processing core is fixed by the upper body strap fixing component to prevent it from falling off.
preferably, the camera mounting assembly should be designed with a flexible, retractable strap to ensure that the binocular vision camera is mounted on the blind's head without causing severe pressure.
Preferably, the binocular vision camera adopts a novel structural design: the spectacle frame is made by 3D printing to meet the weight requirement, so that the blind person's head does not tire excessively.
Preferably, the upper body strap processing core is reliably connected with the camera information transmission connecting device, ensuring smooth information transmission.
Preferably, all the results detected by the upper body strap processing core, such as real-world 3D images and two-dimensional images of books and text, can be processed and transmitted to the blind person via Bluetooth.
Preferably, the upper body strap processing core is fixed on the blind person's back by the upper body strap fixing component, which adopts a flexible design to avoid injuring the blind person's muscles.
Preferably, for text recognition the device is required to generate speech that assists reading of printed matter and of electronic files such as mobile phone files, computer files, and web content. Within 1 meter of the blind person's head, the recognition accuracy for Chinese and English characters, symbols, and the like smaller than the No. 1 font size is at least 95%, with a delay of less than 1 second. A lightweight CTPN is used for character detection; CTPN is a character detection algorithm proposed in recent years that combines a CNN with an LSTM deep network, can effectively detect horizontally distributed text in complex scenes, and is currently among the better character detection algorithms.
According to the blind-aiding voice-assisted reading device provided by the invention, the device can help the blind person identify no fewer than 50 acquaintances, with a face detection accuracy of at least 99% and a face recognition accuracy within 3 meters of at least 80%; the speech generated from the recognition result helps the blind person communicate. A lightweight RetinaFace and an improved FaceNet are used for face recognition. The lightweight RetinaFace uses MobileNet 0.25 as its backbone feature extraction network, providing excellent detection performance while guaranteeing recognition speed; on face data sets captured under normal weather and lighting conditions, its detection rate exceeds 99.5%.
According to the blind-aiding voice-assisted reading device provided by the invention, the improved FaceNet uses EfficientNet-B0 as its backbone feature extraction network. EfficientNet-B0, proposed by Google in 2019, is an excellent feature extraction network with outstanding feature extraction capability. During training of the improved FaceNet, the advantages of FaceNet and Softmax training are combined, greatly improving its training performance.
According to the blind-aiding voice-assisted reading device provided by the invention, character recognition is realized with DenseNet, an improved version of ResNet. The specific method is to establish dense connections between every earlier layer and every later layer, giving DenseNet a strong feature-reuse capability that is particularly suitable for character recognition, which requires both shallow and deep semantic information. The open-source Synthetic Chinese String data set is used for training: the data set is divided into a training set and a validation set at a ratio of 99:1, for a total of 3.64 million images, generated randomly from a Chinese corpus (news and classical Chinese texts) with variations in font, size, gray level, blur, perspective, and stretching; an accuracy of 98.3% is achieved on the validation set.
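The dense connectivity that gives DenseNet its feature-reuse property can be sketched in a few lines: every layer receives the concatenation of the input and all earlier layers' outputs. The toy "layers" below operate on flat lists rather than tensors, purely to make the wiring visible.

```python
# Sketch of DenseNet-style dense connectivity: each layer sees the
# concatenation of all earlier feature maps (toy flat-list "features").

def dense_block(x, layers):
    """Run layers where each input is the concatenation of all prior outputs."""
    features = [x]
    for layer in layers:
        concatenated = [v for f in features for v in f]
        features.append(layer(concatenated))
    # The block's output is itself the concatenation of everything.
    return [v for f in features for v in f]

# Toy "layer": produces a 2-element feature (sum and length of its input).
layer = lambda inp: [sum(inp), len(inp)]
out = dense_block([1.0, 2.0], [layer, layer])
print(out)  # input (2 values) + layer1 (2) + layer2 (2) = 6 values
```

Because later layers see the shallow features directly, the block carries both shallow and deep semantic information forward, which is the property the text says character recognition benefits from.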
As shown in fig. 1, a blind-helping speech-assisted reading device is characterized in that: the binocular vision camera comprises a binocular vision camera 1, a camera fixing component 2, a camera information transmission connecting device 3, a Bluetooth voice communication device 4, an upper body strap processing core 5 and an upper body strap fixing component 6. The binocular vision camera 1 is fixed on the head of the blind through the camera fixing component 2, and the camera information transmission connecting device 3 is used for connecting the binocular vision camera 1 with the upper body strap processing core 5 to complete information transmission; the Bluetooth voice communication device 4 is connected with the upper body strap processing core 5 through Bluetooth to complete blind voice broadcast; the upper body harness processing core 5 is fixed by the upper body harness fixing member 6 to prevent falling. As shown in fig. 1 and 2, the blind-aiding speech-aided reading device is characterized in that: binocular vision camera 1 passes through camera fixed subassembly 2 to be fixed at the blind person head, and camera fixed subassembly 2 adopts flexible assembly to prevent to wear man and to become the damage, and camera information transmission connecting device 3 is used for connecting binocular vision camera 1 and upper part of the body braces processing core 5 and accomplishes information transmission.
As shown in fig. 1 and 4, the blind-aiding speech-aided reading device is characterized in that: the software core of the upper body strap processing core 5 is composed of two parallel intelligent processing networks, the lightweight retinafece uses mobilene 0.25 as a main feature extraction network, the recognition speed is guaranteed, meanwhile, the extremely excellent detection performance is provided, and the lightweight CTPN carries out character detection, wherein the CTPN is a character detection algorithm proposed in recent years. The CTPN is combined with the CNN and the LSTM deep network, can effectively detect the transversely distributed characters of a complex scene, is a better character detection algorithm at present, and ensures the processing efficiency by processing all modules through a high-performance image processing main control computer.
The hardware of the binocular vision camera mainly comprises a camera and a projector. Structured light is active structural information, such as laser stripes, Gray codes, or sinusoidal fringes, projected onto the surface of the measured object by the projector. The camera then photographs the measured surface to obtain a structured-light image, and finally a three-dimensional analytic calculation is performed on the image based on the triangulation principle to realize three-dimensional reconstruction.
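The triangulation step reduces, for a calibrated rig, to the standard relation z = f * b / d between focal length f (in pixels), baseline b, and measured disparity d. The sketch below uses illustrative numbers, not the device's actual calibration:

```python
# Sketch of triangulation depth recovery (illustrative calibration values).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth z = f * b / d for a calibrated camera/projector or stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 700 px, baseline 6 cm, measured disparity 42 px -> 1.0 m depth
print(depth_from_disparity(700, 0.06, 42))  # 1.0
```

Projecting structured light makes the disparity measurable even on textureless surfaces, which is where a purely passive stereo pair tends to fail.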
Compared with a depth camera based on passive binocular stereo vision alone, this structured-light approach greatly improves the three-dimensional reconstruction capability and the robustness of distance detection.
In the perspective projection model of the camera, the calculation formula for any point P(x, y, z) in space is:
[The projection formulas, and the definitions of the quantities appearing in them, are present in the source only as embedded images (Figure RE-GDA0003460463420000071 through Figure RE-GDA0003460463420000074) and are not reproduced here.]
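Although the original equations are unavailable, the text most likely refers to the standard pinhole perspective projection model. The following form is an assumption for the reader's reference, not a reproduction of the patent's equations: a world point P = (x, y, z) maps to pixel coordinates (u, v) via the intrinsic matrix (focal lengths f_x, f_y and principal point c_x, c_y) and the extrinsic rotation R and translation t:

```latex
% Standard pinhole projection (assumed form; the original equations are
% only available as images in the source document).
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
    \begin{pmatrix} R & t \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
\qquad s = z_{\mathrm{cam}}
```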
those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A blind-aiding speech-assisted reading apparatus, comprising: the binocular vision camera comprises a binocular vision camera (1), a camera fixing component (2), a camera information transmission connecting device (3), a Bluetooth voice communication device (4), an upper body strap processing core (5) and an upper body strap fixing component (6);
the binocular vision camera (1) is fixed on the head of a human body through the camera fixing component (2);
the camera information transmission connecting device (3) is used for connecting the binocular vision camera (1) with the upper body strap processing core (5) to complete information transmission;
the Bluetooth voice communication device (4) is connected with the upper body strap processing core (5) by Bluetooth to complete the voice broadcast of the blind;
the upper body harness processing core (5) completes fastening connection by utilizing an upper body harness fixing component (6).
2. A blind-aiding speech-aided reading apparatus according to claim 1, wherein the camera-holding assembly (2) is a flexible, retractable strap.
3. A blind-aiding speech-aided reading apparatus according to claim 1, wherein the binocular vision camera (1) comprises: a spectacle frame;
the weight of the spectacle frame is less than a set threshold.
4. A blind-aiding speech-aided reading apparatus according to claim 1, wherein the upper body harness processing core (5) comprises: two parallel intelligent processing network modules.
5. A blind-aiding speech-aided reading apparatus according to claim 1, wherein all upper body harness processing cores (5) detect any one or more of the following:
-a text message;
-picture information.
6. The blind-aiding speech-aided reading apparatus of claim 1, wherein the upper body harness processing core (5) is flexibly and tightly connected to the back of the blind by an upper body harness fixing component (6).
7. A method for assisting blind-aided speech reading, which is characterized in that the blind-aided speech reading assisting device of any one of claims 1 to 6 is adopted, and comprises the following steps:
step S1: generating a voice auxiliary reading printed matter, a mobile phone electronic file, a computer electronic file and a network electronic file by adopting blind-assisting voice auxiliary reading equipment;
step S2: within 1 meter of the blind person's head, the recognition accuracy for Chinese characters, English characters, and symbols smaller than the No. 1 font size is at least 95%, with a delay of less than 1 second; text detection is performed using a lightweight CTPN.
8. The method for assisting blind speech reading according to claim 7, further comprising:
step S3: face recognition is performed using a lightweight RetinaFace and an improved FaceNet, with MobileNet 0.25 as the lightweight RetinaFace's backbone feature extraction network.
9. The method for assisting blind speech reading according to claim 8, further comprising:
step S4: the improved FaceNet uses EfficientNet-B0 as the backbone feature extraction network.
10. The method for assisting blind speech reading according to claim 9, further comprising:
step S5: character recognition is achieved using DenseNet.
CN202111470788.9A 2021-12-03 2021-12-03 Blind-aiding voice-aided reading equipment and method Pending CN114141108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111470788.9A CN114141108A (en) 2021-12-03 2021-12-03 Blind-aiding voice-aided reading equipment and method


Publications (1)

Publication Number Publication Date
CN114141108A true CN114141108A (en) 2022-03-04

Family

ID=80387874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111470788.9A Pending CN114141108A (en) 2021-12-03 2021-12-03 Blind-aiding voice-aided reading equipment and method

Country Status (1)

Country Link
CN (1) CN114141108A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046886A (en) * 2019-12-12 2020-04-21 吉林大学 Automatic identification method, device and equipment for number plate and computer readable storage medium
CN111643324A (en) * 2020-07-13 2020-09-11 江苏中科智能制造研究院有限公司 Intelligent glasses for blind people
CN111932866A (en) * 2020-08-11 2020-11-13 中国科学技术大学先进技术研究院 Wearable blind person outdoor traffic information sensing equipment
CN112731688A (en) * 2020-12-31 2021-04-30 星微科技(天津)有限公司 Intelligent glasses system suitable for people with visual impairment
WO2021096324A1 (en) * 2019-11-14 2021-05-20 Samsung Electronics Co., Ltd. Method for estimating depth of scene in image and computing device for implementation of the same
CN113065396A (en) * 2021-03-02 2021-07-02 国网湖北省电力有限公司 Automatic filing processing system and method for scanned archive image based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YU AXIANG: "Mask Detection Network with Multiple Attention Mechanisms", Journal of Nanjing Normal University (Engineering and Technology Edition) *
LIAO HAIBIN: "Robust Facial Expression Recognition Based on Gender and Age Factor Analysis", Journal of Computer Research and Development *
JIANG LIANGWEI et al.: "Research on Image Text Extraction Technology Based on Deep Learning", Information Systems Engineering *
SUI YUTENG et al.: "Research on a Face Multi-Attribute Detection Algorithm Based on RetinaFace", Railway Computer Application *

Similar Documents

Publication Publication Date Title
JP7130057B2 (en) Hand Keypoint Recognition Model Training Method and Device, Hand Keypoint Recognition Method and Device, and Computer Program
CN107308638B (en) A kind of entertaining rehabilitation training of upper limbs system and method for virtual reality interaction
CN110349081B (en) Image generation method and device, storage medium and electronic equipment
CN112182166A (en) Text matching method and device, electronic equipment and storage medium
Tolba et al. Recent developments in sign language recognition systems
Stenslie Virtual touch: A study of the use and experience of touch in artistic, multimodal and computer-based environments
Sosa-García et al. “Hands on” visual recognition for visually impaired users
CN111242273A (en) Neural network model training method and electronic equipment
CN109408655A (en) The freehand sketch retrieval method of incorporate voids convolution and multiple dimensioned sensing network
Hsieh et al. Outdoor walking guide for the visually-impaired people based on semantic segmentation and depth map
Chao et al. Sign language recognition based on cbam-resnet
Prakash et al. Educating and communicating with deaf learner’s using CNN based Sign Language Prediction System
CN105892627A (en) Virtual augmented reality method and apparatus, and eyeglass or helmet using same
CN114141108A (en) Blind-aiding voice-aided reading equipment and method
WO2021025279A1 (en) System, method, and computer-readable storage medium for optimizing expression of virtual character through ai-based expression classification and retargeting
CN108986191B (en) Character action generation method and device and terminal equipment
CN116996703A (en) Digital live broadcast interaction method, system, equipment and storage medium
US10891922B1 (en) Attention diversion control
Trujillo-Romero et al. Mexican Sign Language corpus: Towards an automatic translator
CN111611812A (en) Translating into braille
CN106781248A (en) A kind of safety reminding device and method based on wearable device
WO2022019692A1 (en) Method, system, and non-transitory computer-readable recording medium for authoring animation
CN112528760B (en) Image processing method, device, computer equipment and medium
Wang et al. Dense attention network for facial expression recognition in the wild
CN109635709B (en) Facial expression recognition method based on significant expression change area assisted learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220304