CN112515928A - Intelligent blind assisting system, method, computer equipment and storage medium - Google Patents

Intelligent blind assisting system, method, computer equipment and storage medium

Info

Publication number: CN112515928A
Application number: CN202011348096.2A
Authority: CN (China)
Prior art keywords: data, information, sensory, reconstruction, output control
Legal status: Withdrawn (the legal status is an assumption by Google Patents and is not a legal conclusion)
Original language: Chinese (zh)
Inventors: 李凌 (Li Ling), 宋凯旋 (Song Kaixuan), 辜嘉 (Gu Jia)
Current and original assignee: Suzhou Zhongke Advanced Technology Research Institute Co Ltd
Application filed by Suzhou Zhongke Advanced Technology Research Institute Co Ltd
Publication of CN112515928A
Related application: PCT/CN2021/132316 (published as WO2022111443A1)


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H 3/06 Walking aids for blind persons
    • A61H 3/061 Walking aids for blind persons with electronic detecting or guiding means
    • A61H 2003/063 Walking aids for blind persons with electronic detecting or guiding means with tactile perception
    • A61H 2201/00 Characteristics of apparatus not provided for in the preceding codes
    • A61H 2201/50 Control means thereof
    • A61H 2201/5023 Interfaces to the user
    • A61H 2201/5043 Displays
    • A61H 2201/5048 Audio interfaces, e.g. voice or music controlled
    • A61H 2201/5058 Sensors or detectors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods


Abstract

The embodiments of the present application belong to the technical field of artificial intelligence and relate to an intelligent blind assisting system comprising a multi-sensory information acquisition system, a multi-sensory information processing module, a multi-channel output control module, and an information output module. The multi-sensory information acquisition system collects sensory information; the multi-sensory information processing module performs data analysis and integration on the sensory information based on a pre-constructed multi-sensory information fusion model to obtain integrated data; the multi-channel output control module selects an output control channel suited to the data structure of the integrated data and reconstructs the integrated data to obtain reconstructed data; and the information output module determines an output form from the channel type associated with the reconstructed data and presents the reconstructed data to the user in that form. The application also provides an intelligent blind assisting method, a computer device, and a storage medium. By reconstructing and optimizing a virtual scene, the method and system of the application reduce dependence on specific real-world scenes, strengthen the user's sense of experience and immersion, lower computational complexity, and improve the blind assisting effect.

Description

Intelligent blind assisting system, method, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an intelligent blind assisting system, method, computer device, and storage medium.
Background
Vision is the primary means by which humans acquire information, and the visual system is the most important human perceptual system. Normal vision helps people cope with the varied visual and cognitive challenges of complex, changing environments and largely determines quality of life and work. Visual impairment usually manifests as reduced visual acuity or clarity, a narrowed visual field, scotomas or abnormal light adaptation, and difficulty seeing objects in bright or dim light. It significantly affects patients' lives; severe impairment such as low vision or blindness causes loss of function, further reducing working capacity and quality of life and imposing direct medical costs and social security burdens. According to the World Health Organization's recently released World Report on Vision, more than 2.2 billion people worldwide are estimated to have some degree of visual impairment, of whom at least 1 billion have impairment that could have been avoided or properly treated. As of 2019, China has one of the largest visually impaired populations in the world, and as its society ages the total number of visually impaired people will continue to grow.
Traditional vision-assistance devices and technologies, such as prescription glasses, white canes, and electronic travel aids, focus on specific needs in specific scenarios and struggle to meet the demands of complex, changing visual scenes involving occlusion, multiple targets, motion, and varying light conditions (photopic and scotopic vision).
Disclosure of Invention
An object of the embodiments of the present application is to provide an intelligent blind assisting system, method, computer device, and storage medium, so as to at least solve the problem that conventional blind assisting technology depends heavily on specific scenarios.
In order to solve the above technical problem, an embodiment of the present application provides an intelligent blind assisting system, which adopts the following technical solution:
the intelligent blind assisting system comprises a multi-sensory information acquisition system, a multi-sensory information processing module, a multi-channel output control module, and an information output module; wherein:
the multi-sensory information acquisition system is used for collecting sensory information and sending it to the multi-sensory information processing module for data processing;
the multi-sensory information processing module is used for, upon receiving the sensory information, performing data analysis and integration on it based on a pre-constructed multi-sensory information fusion model to obtain integrated data, and sending the integrated data to the multi-channel output control module for processing;
the multi-channel output control module is used for, upon receiving the integrated data, selecting an output control channel suited to the data structure of the integrated data, performing data reconstruction on the integrated data to obtain reconstructed data, and sending the reconstructed data to the information output module;
and the information output module is used for, upon receiving the reconstructed data, determining an output form based on the type of output control channel associated with the reconstructed data, and outputting the reconstructed data to the user in that form, as sketched below.
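To make the claimed data flow concrete, the following is a minimal Python sketch of the four-module pipeline. The patent itself defines no code, so every class name, method name, and data shape here is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the claimed four-module pipeline; all names and data
# shapes are assumptions for illustration.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class SensoryInfo:
    kind: str      # assumed kinds: "image", "sound", "touch", "environment"
    payload: Any   # raw sensor reading


class MultiSensoryAcquisitionSystem:
    """Collects sensory information from the wearable sensors."""
    def collect(self) -> List[SensoryInfo]:
        raise NotImplementedError  # backed by visual/auditory/tactile/environmental sensors


class MultiSensoryProcessingModule:
    """Runs the pre-constructed multi-sensory information fusion model."""
    def integrate(self, readings: List[SensoryInfo]) -> Dict[str, Any]:
        raise NotImplementedError  # returns data grouped by data structure


class MultiChannelOutputControlModule:
    """Selects an output control channel per data structure and reconstructs."""
    def reconstruct(self, integrated: Dict[str, Any]) -> List[Dict[str, Any]]:
        raise NotImplementedError  # structure conversion / format change / type replacement


class InformationOutputModule:
    """Determines an output form per channel type and presents it to the user."""
    def present(self, reconstructed: List[Dict[str, Any]]) -> None:
        raise NotImplementedError


def assist_once(acq: MultiSensoryAcquisitionSystem,
                proc: MultiSensoryProcessingModule,
                ctrl: MultiChannelOutputControlModule,
                out: InformationOutputModule) -> None:
    """One pass through the pipeline: collect -> integrate -> reconstruct -> present."""
    out.present(ctrl.reconstruct(proc.integrate(acq.collect())))
```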
Further, the multi-sensory information acquisition system comprises an environment-perception sensor for collecting environmental information, a visual sensor for collecting image information, an auditory sensor for collecting sound information, and a tactile sensor for collecting touch-device information.
Further, the multi-sensory information processing module comprises a multi-sensory information fusion model for analyzing and processing the collected sensory information, and a feedback visual compensation model for sensory substitution processing of the integrated data.
Further, the feedback visual compensation model is constructed using deep-learning-based multi-sensory representation techniques.
Further, the feedback visual compensation model comprises a physical layer providing basic multi-sensory information, a data link layer for frame-based data transmission, a network layer providing network transmission, a transport layer providing reliable data transmission, a session layer establishing logical correspondences, a presentation layer for data conversion, and an application layer providing inter-process communication.
In order to solve the above technical problem, an embodiment of the present application further provides an intelligent blind assisting method, which adopts the following technical solution:
collecting sensory information while the user wears the information collection device;
performing data analysis and integration on the sensory information based on a pre-constructed multi-sensory information fusion model to obtain integrated data;
selecting a suitable output control channel based on the data structure of the integrated data and performing data reconstruction on the integrated data to obtain reconstructed data;
determining an output form of the reconstructed data based on the type of output control channel associated with it;
and outputting the reconstructed data to the user in that output form.
Further, the method also comprises:
performing data format conversion on the integrated data using a feedback visual compensation model to obtain visual compensation data;
and the step of selecting an output control channel based on the data structure of the integrated data and performing data reconstruction on the integrated data to obtain reconstructed data specifically comprises:
performing data reconstruction on the visual compensation data through a visual output control channel to obtain reconstructed data.
Further, the feedback visual compensation model comprises a physical layer providing basic multi-sensory information, a data link layer for frame-based data transmission, a network layer providing network transmission, a transport layer providing reliable data transmission, a session layer establishing logical correspondences, a presentation layer for data conversion, and an application layer providing inter-process communication.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the intelligent blind assisting method as described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the intelligent blind assisting method as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application provides an intelligence helps blind system includes: the system comprises a multi-sense information acquisition system, a multi-sense information processing module, a multi-channel output control module and an information output module; the multi-sense information acquisition system is used for acquiring sense information and sending the sense information to the multi-sense information processing module for data processing; the multi-sensory information processing module is used for carrying out data analysis and integration operation on the sensory information based on a pre-constructed multi-sensory information fusion model after receiving the sensory information to obtain integrated data and sending the integrated data to the multi-channel output control module for processing; the multi-channel output control module is used for selecting an output control channel suitable for outputting data to reconstruct the data of the integrated data based on the data structure of the integrated data after receiving the integrated data to obtain reconstructed data and sending the reconstructed data to the information output module; and the information output module is used for determining the output form of the reconstruction data based on the corresponding output control channel type of the reconstruction data after receiving the reconstruction data, and outputting the reconstruction data to a user according to the output form. Comprehensively acquiring sensory information in the current environment of a user through a multi-sensory information acquisition system based on a complex reality scene, carrying out information identification, data analysis, information integration and other operations on the sensory information based on a pre-constructed multi-sensory information fusion model, and integrating the information into integrated data which can be identified by a program; and further, performing data reconstruction on the integrated data based on an output control channel matched with the data structure of the integrated data, and covering or fusing the reconstructed data on the original image for optimized presentation on the information output module. The application requirements of the actual scene can be weakened, the user experience feeling and the substitution feeling are enhanced through the reconstruction optimization of the virtual scene, the data calculation complexity is reduced, and the blind assisting effect is improved.
Drawings
In order to more clearly illustrate the solutions in the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only for some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an exemplary scenario in which the present application may be applied;
FIG. 2 is a schematic diagram of a multi-channel output control application of an intelligent blind assist system according to the present application;
FIG. 3 is a block diagram of one embodiment of an intelligent blind assist system according to the present application;
FIG. 4 is a schematic diagram illustrating the principles of an intelligent blind assisting method according to the present application;
FIG. 5 is a flow diagram of one embodiment of an intelligent blind assist method according to the present application;
FIG. 6 is a flow diagram of a data format conversion process of an intelligent blind assist method according to the present application;
FIG. 7 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1 to 3, the present application provides an embodiment of an intelligent blind assisting system, which can be applied to various electronic devices.
As shown in fig. 3, the intelligent blind assisting system 100 according to the present embodiment includes:
the system comprises a multi-sense information acquisition system 101, a multi-sense information processing module 102, a multi-channel output control module 103 and an information output module 104;
the multi-sensory information acquisition system 101 is used for acquiring sensory information and sending the sensory information to the multi-sensory information processing module 102 for data processing;
In this embodiment, the sensory information may specifically include the image information, sound information, touch-device information, and environment-perception information that a user situated in the current complex scene can perceive through the body's senses.
In this embodiment, collecting the sensory information may specifically mean using information collection devices that are convenient for the user to wear or use, such as a visual sensor, an auditory sensor, a tactile sensor, and an environmental sensor, to gather the image information, sound information, touch-device information, and environment-perception information that the user can perceive through the body's senses in the complex scene, thereby obtaining the sensory information.
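As a hedged illustration of this acquisition step, the sketch below polls a set of wearable sensors and wraps each reading in the SensoryInfo record from the earlier pipeline sketch; the sensor dictionary and its read() interface are assumptions, not devices named by the patent.

```python
# Illustrative acquisition step; the sensor objects and their read() method
# are assumptions. SensoryInfo is the record defined in the pipeline sketch.
def collect_sensory_info(sensors: dict) -> list:
    # e.g. sensors = {"image": camera, "sound": microphone,
    #                 "touch": touch_sensor, "environment": env_sensor}
    return [SensoryInfo(kind=name, payload=sensor.read())
            for name, sensor in sensors.items()]
```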
The multi-sensory information processing module 102 is used for, upon receiving the sensory information, performing data analysis and integration on it based on a pre-constructed multi-sensory information fusion model to obtain integrated data, and sending the integrated data to the multi-channel output control module 103 for processing.
In this embodiment, the pre-constructed multi-sensory information fusion model is a deep learning model obtained by iteratively training and optimizing a deep-learning information fusion network on a pre-collected sensory-information training data set; it can be used to identify, analyze, and integrate sensory information.
In this embodiment, the integrated data is data classified and organized by data structure, such as image data, tactile data, sound data, and environment-control data, obtained by applying the data analysis and integration operation to the sensory information through the pre-constructed multi-sensory information fusion model.
In this embodiment, the received sensory information serves as input to the pre-constructed multi-sensory information fusion model, which, using the algorithms defined within it, can identify the sensory information accurately and quickly. For example, the model first determines the information type (image information, sound information, and so on); it then applies the recognition operations preset for each type, such as image recognition, content recognition, and instruction recognition, to the image and sound information respectively; finally, based on the model's integration mechanism, the recognized content is analyzed, sorted, and structurally converted, yielding integrated data with distinct data structures (image content, instruction content, sound content, etc.) that the application program can identify conveniently and quickly.
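The staged flow just described (type identification, then type-specific recognition, then integration by data structure) could be organized as in the sketch below; the stub recognizers are assumed stand-ins for the trained fusion model, not its actual interfaces.

```python
# Sketch of the fusion flow: identify the information type, apply the
# type-specific recognition, then integrate by data structure. The stubs
# stand in for the trained deep-learning fusion model (assumed).
def recognize_image(payload):        # image/content recognition (assumed stub)
    return {"type": "image", "content": payload}

def recognize_sound(payload):        # content/instruction recognition (assumed stub)
    return {"type": "sound", "content": payload}

def recognize_touch(payload):
    return {"type": "touch", "content": payload}

def recognize_environment(payload):
    return {"type": "environment", "content": payload}

RECOGNIZERS = {
    "image": recognize_image,
    "sound": recognize_sound,
    "touch": recognize_touch,
    "environment": recognize_environment,
}

def fuse(readings: list) -> dict:
    integrated: dict = {}
    for r in readings:
        content = RECOGNIZERS[r.kind](r.payload)            # identify + recognize
        integrated.setdefault(r.kind, []).append(content)   # integrate by structure
    return integrated
```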
The multi-channel output control module 103 is configured to, upon receiving the integrated data, select an output control channel suited to the data structure of the integrated data, perform data reconstruction on the integrated data to obtain reconstructed data, and send the reconstructed data to the information output module 104.
In this embodiment, data reconstruction converts data from one format to another, including structure conversion, format change, and type replacement, so as to unify spatial data in structure, format, and type and to enable the connection and fusion of multi-source, heterogeneous data.
In this embodiment, the reconstructed data may specifically include image reconstruction data, haptic reconstruction data, sound reconstruction data, and environment-control reconstruction data; it can be covered or fused onto the original image so that the information output module 104 optimally presents the information perceived by the user. This enhances the user's perception of the environment and the user's sense of experience and immersion, thereby reducing dependence on complex scenes and improving the blind assisting effect.
In this embodiment, referring to fig. 2, selecting a suitable output control channel based on the data structure of the integrated data and reconstructing the integrated data may specifically mean: under a multi-channel output control mechanism, the data structure of the integrated data is identified, and an output control channel adapted to that structure is selected to perform reconstruction operations, such as structure conversion, format change, and type replacement, on the integrated data. The result is reconstructed data that can be covered or fused onto the original image, enabling the information output module 104 to optimally present the information perceived by the user.
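The channel-selection rule described above amounts to a lookup from data structure to output control channel plus a reconstruction operation; the sketch below shows one way it could look. The channel table and the placeholder reconstruction helpers are assumptions made for illustration.

```python
# Sketch of multi-channel output control: the data structure of each
# integrated item selects an output control channel, and the item is then
# reconstructed for that channel. Table and helpers are assumed.
def convert_structure(d): return d   # placeholder reconstruction operations
def change_format(d):     return d
def replace_type(d):      return d

CHANNELS = {
    "image":       ("visual",      convert_structure),
    "sound":       ("auditory",    change_format),
    "touch":       ("tactile",     replace_type),
    "environment": ("environment", convert_structure),
}

def reconstruct(integrated: dict) -> list:
    reconstructed = []
    for kind, items in integrated.items():
        channel, op = CHANNELS[kind]   # channel matched to the data structure
        reconstructed.append({"channel": channel, "data": [op(i) for i in items]})
    return reconstructed
```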
The information output module 104 is configured to, upon receiving the reconstructed data, determine an output form based on the type of output control channel associated with the reconstructed data and output the reconstructed data to the user in that form.
In this embodiment, the output form may specifically include visual presentation, auditory presentation, tactile presentation, and environment adjustment, chosen to be easy for the user to perceive and configured according to the needs of the actual application; it is not specifically limited here.
In this embodiment, determining the output form from the associated output control channel type may specifically mean deriving, from the channel type used for data transmission, the form in which the reconstructed data is output; the reconstructed data is then covered or fused onto the original image according to that form and optimally presented to the user on the display device of the information output module 104. This enhances the user's perception of the environment and the user's sense of experience and immersion, thereby reducing dependence on complex scenes and improving the blind assisting effect.
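As a hedged illustration of this output step, the sketch below maps channel types to output forms and covers visual reconstructions onto the original image; overlay(), display(), and render() are assumed device hooks, not APIs named by the patent.

```python
# Sketch of the output step: the channel type fixes the output form, and
# visual reconstructions are covered or fused onto the original image.
def overlay(image, data): return image   # assumed: fuse reconstruction onto image
def display(frame): pass                 # assumed display-device hook
def render(form, data): pass             # assumed non-visual output hook

OUTPUT_FORMS = {
    "visual":      "visual presentation",
    "auditory":    "auditory presentation",
    "tactile":     "tactile presentation",
    "environment": "environment adjustment",
}

def present(reconstructed: list, original_image) -> None:
    for item in reconstructed:
        form = OUTPUT_FORMS[item["channel"]]
        if item["channel"] == "visual":
            display(overlay(original_image, item["data"]))  # cover/fuse on original
        else:
            render(form, item["data"])
```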
The intelligent blind assisting system provided by the application thus comprises a multi-sensory information acquisition system, a multi-sensory information processing module, a multi-channel output control module, and an information output module, cooperating as described above. Starting from a complex real-world scene, sensory information in the user's current environment is comprehensively collected by the multi-sensory information acquisition system; information recognition, data analysis, and information integration are performed on it via the pre-constructed multi-sensory information fusion model, yielding integrated data the program can identify; the integrated data is further reconstructed via the output control channel matched to its data structure, and the reconstructed data is covered or fused onto the original image for optimized presentation on the information output module. This weakens the dependence on the actual scene, strengthens the user's sense of experience and immersion through reconstruction and optimization of the virtual scene, reduces computational complexity, and improves the blind assisting effect.
In some optional implementations of the first embodiment of the present application, with continued reference to fig. 1, the multi-sensory information collecting system 101 includes an environmental sensing sensor for collecting environmental information, a visual sensor for collecting image information, an auditory sensor for collecting sound information, and a tactile sensor for collecting information of a touch device.
In this embodiment, information collection devices such as the environment-perception sensor, visual sensor, auditory sensor, and tactile sensor can be conveniently worn or used by the user, so that a user in a complex scene can quickly and accurately capture the image, sound, touch-device, and environment-perception information perceivable by the body's senses. This lets the user combine the real world with the virtual world, enhancing the user's perception of the environment and sense of experience and immersion, thereby reducing dependence on complex scenes and improving the blind assisting effect.
In some optional implementations of the first embodiment of the present application, the multi-sensory information processing module 102 includes a multi-sensory information fusion model for analyzing and processing the collected sensory information, and a feedback visual compensation model for performing sensory substitution processing on the integrated data.
In this embodiment, because category-one blindness and the severe visual impairment of some elderly people usually result from conditions such as diabetic retinopathy and central macular degeneration, this embodiment draws on information accessibility technologies, including visual information conversion and recognition, optical character recognition, and voice interaction. For visually impaired people whose vision cannot be corrected by other vision-aid devices, information compensation is achieved through information conversion: sensory substitution processing is applied to the integrated data via the feedback visual compensation model. This meets the daily learning, living, and working needs of visually impaired people, reduces their dependence on complex scenes, enhances their perception of the environment and sense of experience and immersion, and thereby improves the blind assisting effect.
In some optional implementations of the first embodiment of the present application, the feedback visual compensation model is constructed using deep-learning-based multi-sensory representation techniques.
In this embodiment, constructing the feedback visual compensation model with deep-learning-based multi-sensory representation techniques specifically means building an intelligent feedback visual compensation model capable of integrating multiple senses, using deep-learning representation techniques for visual images, hearing, and touch.
In some optional implementations of the first embodiment of the present application, the feedback visual compensation model includes a physical layer providing basic multi-sensory information, a data link layer for frame-based data transmission, a network layer providing network transmission, a transport layer providing reliable data transmission, a session layer establishing logical correspondences, a presentation layer for data conversion, and an application layer providing inter-process communication.
In this embodiment, the feedback visual compensation model may specifically comprise a physical layer, a data link layer, a network layer, a transport layer, a session layer, a presentation layer, and an application layer. The physical layer provides the relevant image, sound, and tactile information; the data link layer transmits data error-free in frame units; the network layer provides end-to-end switched-network data transmission for the transport-layer entities; the transport layer provides transparent, reliable data transmission for the session-layer entities and guarantees end-to-end data integrity; the session layer maps the logical names of communicating processes to their physical names and provides session management services; the presentation layer converts the data into an abstract syntax suited to a given user; and the application layer determines the nature of inter-process communication to meet user needs and provides interface services between the network and user applications.
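Since the layer stack mirrors the OSI reference model, the sketch below simply encodes the layer order and the duties listed above, with an identity placeholder for each layer's transformation; any executable behavior here is an assumption, not the patent's implementation.

```python
# The seven layers are named after the OSI reference model; this sketch only
# records their order and stated duties, with placeholder transformations.
COMPENSATION_LAYERS = [
    ("physical",     "provide the related image, sound and tactile information"),
    ("data link",    "error-free transmission in frame units"),
    ("network",      "end-to-end switched-network data transmission"),
    ("transport",    "transparent, reliable delivery with end-to-end integrity"),
    ("session",      "logical-to-physical name correspondence; session management"),
    ("presentation", "convert data into a syntax suited to the user"),
    ("application",  "inter-process communication and user-facing interfaces"),
]

def pass_through(data, layer_fns=None):
    """Feed data bottom-up through the stack; layer_fns maps a layer name to
    an (assumed) transformation, defaulting to the identity."""
    layer_fns = layer_fns or {}
    for name, _duty in COMPENSATION_LAYERS:
        data = layer_fns.get(name, lambda d: d)(data)
    return data
```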
To sum up, the present application provides an intelligent blind assisting system comprising a multi-sensory information acquisition system, a multi-sensory information processing module, a multi-channel output control module, and an information output module, cooperating as described above. Based on information collection devices such as the environment-perception sensor, visual sensor, auditory sensor, and tactile sensor, which the user can conveniently wear or use, the image, sound, touch-device, and environment-perception information perceivable by the body's senses in a complex scene is captured quickly and accurately. A pre-constructed multi-sensory information fusion model performs data analysis and integration on the sensory information to obtain data classified and organized by data structure, such as image data, tactile data, sound data, and environment-control data. Under a multi-channel output control mechanism, the data structure of the integrated data is identified and an adapted output control channel is selected to perform reconstruction operations, such as structure conversion, format change, and type replacement, yielding reconstructed data that can be covered or fused onto the original image. Finally, the output form is determined from the channel type used for data transmission, and the reconstructed data is covered or fused onto the original image in that form for optimized presentation to the user. This weakens the dependence on the actual scene, strengthens the user's sense of experience and immersion through reconstruction and optimization of the virtual scene, reduces computational complexity, and improves the blind assisting effect.
Embodiment Two
With further reference to fig. 4 and fig. 5, the present application provides an embodiment of an intelligent blind assisting method corresponding to the system embodiment shown in fig. 3; for ease of illustration, only the portions relevant to the present application are shown.
In step S1, sensory information is collected when the user wears the information collecting apparatus.
In this embodiment, the sensory information may specifically include the image information, sound information, touch-device information, and environment-perception information that a user situated in the current complex scene can perceive through the body's senses.
In this embodiment, collecting the sensory information may specifically mean using information collection devices that are convenient for the user to wear or use, such as a visual sensor, an auditory sensor, a tactile sensor, and an environmental sensor, to gather the image information, sound information, touch-device information, and environment-perception information that the user can perceive through the body's senses in the complex scene, thereby obtaining the sensory information.
In step S2, a data analysis integration operation is performed on the sensory information based on the multi-sensory information fusion model that is constructed in advance, and integrated data is obtained.
In this embodiment, the pre-constructed multi-sensory information fusion model is a deep learning model obtained by iteratively training and optimizing a deep-learning information fusion network on a pre-collected sensory-information training data set; it can be used to identify, analyze, and integrate sensory information.
In this embodiment, the integrated data is data classified and organized by data structure, such as image data, tactile data, sound data, and environment-control data, obtained by applying the data analysis and integration operation to the sensory information through the pre-constructed multi-sensory information fusion model.
In this embodiment, the received sensory information serves as input to the pre-constructed multi-sensory information fusion model, which, using the algorithms defined within it, can identify the sensory information accurately and quickly. For example, the model first determines the information type (image information, sound information, and so on); it then applies the recognition operations preset for each type, such as image recognition, content recognition, and instruction recognition, to the image and sound information respectively; finally, based on the model's integration mechanism, the recognized content is analyzed, sorted, and structurally converted, yielding integrated data with distinct data structures (image content, instruction content, sound content, etc.) that the application program can identify conveniently and quickly.
In step S3, an appropriate output control channel is selected based on the data structure of the integrated data, and the integrated data is subjected to data reconstruction to obtain reconstructed data.
In this embodiment, data reconstruction converts data from one format to another, including structure conversion, format change, and type replacement, so as to unify spatial data in structure, format, and type and to enable the connection and fusion of multi-source, heterogeneous data.
In this embodiment, the reconstructed data may specifically include image reconstruction data, tactile reconstruction data, sound reconstruction data, and environment-control reconstruction data; it can be covered or fused onto the original image so that the information subsequently perceived by the user is optimally presented. This enhances the user's perception of the environment and sense of experience and immersion, thereby reducing dependence on complex scenes and improving the blind assisting effect.
In this embodiment, selecting a suitable output control channel based on the data structure of the integrated data and reconstructing the integrated data may specifically mean: under a multi-channel output control mechanism, the data structure of the integrated data is identified, and an output control channel adapted to that structure is selected to perform reconstruction operations, such as structure conversion, format change, and type replacement, on the integrated data, yielding reconstructed data that can be covered or fused onto the original image for optimized presentation of the information perceived by the user.
In step S4, the output form of the reconstructed data is determined based on the type of output control channel associated with it.
In this embodiment, the output form may specifically include visual presentation, auditory presentation, tactile presentation, and environment adjustment, chosen to be easy for the user to perceive and configured according to the needs of the actual application; it is not specifically limited here.
In step S5, the reconstructed data is output to the user in that output form.
In this embodiment, determining the output form from the associated output control channel type may specifically mean deriving, from the channel type used for data transmission, the form in which the reconstructed data is output; the reconstructed data is then covered or fused onto the original image according to that form and optimally presented to the user. This enhances the user's perception of the environment and sense of experience and immersion, thereby reducing dependence on complex scenes and improving the blind assisting effect.
The intelligent blind assisting method provided by the application comprises: comprehensively collecting, via the multi-sensory information acquisition system and based on a complex real-world scene, the sensory information in the user's current environment; performing information recognition, data analysis, and information integration on it via a pre-constructed multi-sensory information fusion model to obtain integrated data the program can identify; further reconstructing the integrated data via the output control channel matched to its data structure; and covering or fusing the reconstructed data onto the original image for optimized presentation on the information output module. This weakens the dependence on the actual scene, strengthens the user's sense of experience and immersion through reconstruction and optimization of the virtual scene, reduces computational complexity, and improves the blind assisting effect.
With continuing reference to fig. 6, a flowchart of the data format conversion process provided in embodiment two of the present application is shown, and for convenience of explanation, only relevant portions of the present application are shown.
In some optional implementations of the second embodiment of the present application, the method further includes step S601 after step S2, and step S3 specifically comprises step S602.
In step S601, a feedback visual compensation model is used to perform data format conversion on the integrated data, so as to obtain visual compensation data.
In step S602, the visual compensation data is reconstructed through the visual output control channel to obtain reconstructed data.
In this embodiment, because category-one blindness and the severe visual impairment of some elderly people usually result from conditions such as diabetic retinopathy and central macular degeneration, this embodiment draws on information accessibility technologies, including visual information conversion and recognition, optical character recognition, and voice interaction. For visually impaired people whose vision cannot be corrected by other vision-aid devices, information compensation is achieved through information conversion: sensory substitution processing is applied to the integrated data via the feedback visual compensation model to obtain visual compensation data. This meets the daily learning, living, and working needs of visually impaired people, reduces their dependence on complex scenes, enhances their perception of the environment and sense of experience and immersion, and thereby improves the blind assisting effect.
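A minimal sketch of this optional branch (steps S601 and S602) follows; both helper functions are assumed placeholders for the feedback visual compensation model and the visual output control channel, not interfaces defined by the patent.

```python
# Sketch of steps S601-S602: format conversion through the feedback visual
# compensation model, then reconstruction on the visual output channel.
def feedback_visual_compensation(integrated: dict) -> dict:
    return {"visual": integrated}        # S601: data format conversion (placeholder)

def visual_channel_reconstruct(compensated: dict) -> dict:
    return {"channel": "visual", "data": compensated}  # S602: reconstruction (placeholder)

def sensory_substitution(integrated: dict) -> dict:
    return visual_channel_reconstruct(feedback_visual_compensation(integrated))
```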
In some optional implementations of the second embodiment of the present application, the feedback visual compensation model includes a physical layer providing basic multi-sensory information, a data link layer for frame-based data transmission, a network layer providing network transmission, a transport layer providing reliable data transmission, a session layer establishing logical correspondences, a presentation layer for data conversion, and an application layer providing inter-process communication.
In this embodiment, the feedback visual compensation model may specifically comprise a physical layer, a data link layer, a network layer, a transport layer, a session layer, a presentation layer, and an application layer. The physical layer provides the relevant image, sound, and tactile information; the data link layer transmits data error-free in frame units; the network layer provides end-to-end switched-network data transmission for the transport-layer entities; the transport layer provides transparent, reliable data transmission for the session-layer entities and guarantees end-to-end data integrity; the session layer maps the logical names of communicating processes to their physical names and provides session management services; the presentation layer converts the data into an abstract syntax suited to a given user; and the application layer determines the nature of inter-process communication to meet user needs and provides interface services between the network and user applications.
In summary, the present application provides an intelligent blind assisting method comprising: collecting sensory information while the user wears the information collection device; performing data analysis and integration on the sensory information based on a pre-constructed multi-sensory information fusion model to obtain integrated data; selecting an output control channel based on the data structure of the integrated data and performing data reconstruction on the integrated data to obtain reconstructed data; determining an output form of the reconstructed data based on the type of output control channel associated with it; and outputting the reconstructed data to the user in that form. Based on wearable information collection devices such as the environment-perception sensor, visual sensor, auditory sensor, and tactile sensor, the image, sound, touch-device, and environment-perception information perceivable by the body's senses in a complex scene is captured quickly and accurately; a pre-constructed multi-sensory information fusion model then classifies and organizes it by data structure, such as image data, tactile data, sound data, and environment-control data; under a multi-channel output control mechanism, the data structure of the integrated data is identified and an adapted output control channel is selected to perform reconstruction operations, such as structure conversion, format change, and type replacement, yielding reconstructed data that can be covered or fused onto the original image; finally, the output form is determined from the channel type used for data transmission, and the reconstructed data is covered or fused onto the original image in that form for optimized presentation to the user. This weakens the dependence on the actual scene, strengthens the user's sense of experience and immersion through reconstruction and optimization of the virtual scene, reduces computational complexity, and improves the blind assisting effect.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
It should be understood that although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, their execution is not strictly ordered and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 7, a block diagram of the basic structure of the computer device according to this embodiment is shown.
The computer device 7 comprises a memory 71, a processor 72, a network interface 73, which are communicatively connected to each other via a system bus. It is noted that only a computer device 7 having components 71-73 is shown, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and the hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 71 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 71 may be an internal storage unit of the computer device 7, such as its hard disk or internal memory. In other embodiments, the memory 71 may instead be an external storage device of the computer device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device 7. Of course, the memory 71 may also comprise both an internal storage unit and an external storage device of the computer device 7. In this embodiment, the memory 71 is generally used to store the operating system installed on the computer device 7 and various types of application software, such as the program code of the intelligent blind assisting method. In addition, the memory 71 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 72 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 72 is typically used to control the overall operation of the computer device 7. In this embodiment, the processor 72 is configured to run the program code stored in the memory 71 or to process data, for example to run the program code of the intelligent blind assisting method.
The network interface 73 may comprise a wireless network interface or a wired network interface, and the network interface 73 is generally used for establishing a communication connection between the computer device 7 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium storing an intelligent blind assisting program, the intelligent blind assisting program being executable by at least one processor to cause the at least one processor to perform the steps of the intelligent blind assisting method described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and that the appended drawings show preferred embodiments without limiting the scope of protection. This application may be embodied in many different forms; the embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of their features. Any equivalent structure made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (10)

1. An intelligent blind assisting system, comprising: a multi-sensory information acquisition system, a multi-sensory information processing module, a multi-channel output control module, and an information output module; wherein:
the multi-sensory information acquisition system is used for acquiring sensory information and sending the sensory information to the multi-sensory information processing module for data processing;
the multi-sensory information processing module is used for, after receiving the sensory information, performing a data analysis and integration operation on the sensory information based on a pre-constructed multi-sensory information fusion model to obtain integrated data, and sending the integrated data to the multi-channel output control module for processing;
the multi-channel output control module is used for, after receiving the integrated data, selecting a suitable output control channel based on the data structure of the integrated data to perform data reconstruction on the integrated data to obtain reconstructed data, and sending the reconstructed data to the information output module; and
the information output module is used for, after receiving the reconstructed data, determining an output form of the reconstructed data based on the type of the output control channel corresponding to the reconstructed data, and outputting the reconstructed data to a user in the output form.
2. The intelligent blind assisting system of claim 1, wherein the multi-sensory information acquisition system comprises an environmental perception sensor for collecting environmental information, a visual sensor for collecting image information, an auditory sensor for collecting sound information, and a tactile sensor for collecting touch device information.
3. The intelligent blind assisting system of claim 1, wherein the multi-sensory information processing module comprises the multi-sensory information fusion model, for analyzing and processing the acquired sensory information, and a feedback visual compensation model, for sensory replacement processing of the integrated data.
4. The intelligent blind-aiding system of claim 3, wherein the feedback visual compensation model is constructed based on deep learning multi-sensory expression techniques.
5. The intelligent blind assisting system of claim 3, wherein the feedback visual compensation model comprises a physical layer for providing multi-sensory basic information, a data link layer for frame data transmission, a network layer for providing network transmission, a transmission layer for providing reliable data transmission, a session layer for establishing logical correspondence, a presentation layer for data conversion, and an application layer for providing process communication.
6. An intelligent blind assisting method, characterized by comprising the following steps:
collecting sensory information while a user wears an information collection device;
performing a data analysis and integration operation on the sensory information based on a pre-constructed multi-sensory information fusion model to obtain integrated data;
selecting a suitable output control channel based on the data structure of the integrated data to perform data reconstruction on the integrated data to obtain reconstructed data;
determining an output form of the reconstructed data based on the type of the output control channel corresponding to the reconstructed data; and
outputting the reconstructed data to the user in the output form.
7. The intelligent blind assisting method according to claim 6, wherein after the step of performing a data analysis and integration operation on the sensory information based on the pre-constructed multi-sensory information fusion model to obtain integrated data, the method further comprises:
performing data format conversion on the integrated data using a feedback visual compensation model to obtain visual compensation data;
and the step of selecting a suitable output control channel based on the data structure of the integrated data to perform data reconstruction on the integrated data to obtain the reconstructed data specifically comprises:
performing data reconstruction on the visual compensation data through a visual output control channel to obtain the reconstructed data.
8. The intelligent blind assisting method of claim 7, wherein the feedback visual compensation model comprises a physical layer for providing multi-sensory basic information, a data link layer for frame data transmission, a network layer for providing network transmission, a transmission layer for providing reliable data transmission, a session layer for establishing logical correspondence, a presentation layer for data conversion, and an application layer for providing process communication.
9. A computer device, characterized by comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the intelligent blind assisting method according to any one of claims 6 to 8.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the intelligent blind assisting method according to any one of claims 6 to 8.
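Read purely as an illustration of claims 5 and 8 above, the layered feedback visual compensation model can be sketched as an ordered stack of per-layer handlers. The layer names follow the claims; the handler bodies and the function name compensate are hypothetical identity placeholders, not the processing actually performed by the model.

    from typing import Any, Callable, List, Tuple

    # Seven layers, bottom to top, as recited in claims 5 and 8.
    LAYERS: List[Tuple[str, Callable[[Any], Any]]] = [
        ("physical", lambda d: d),      # provides multi-sensory basic information
        ("data link", lambda d: d),     # frame data transmission
        ("network", lambda d: d),       # provides network transmission
        ("transmission", lambda d: d),  # provides reliable data transmission
        ("session", lambda d: d),       # establishes logical correspondence
        ("presentation", lambda d: d),  # data conversion
        ("application", lambda d: d),   # provides process communication
    ]

    def compensate(integrated_data: Any) -> Any:
        # Pass the integrated data through each layer in order; each placeholder
        # handler stands in for that layer's processing.
        for _name, handler in LAYERS:
            integrated_data = handler(integrated_data)
        return integrated_data

The layering mirrors the OSI reference model, each layer's responsibility in the claims matching its networking counterpart.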
CN202011348096.2A 2020-11-26 2020-11-26 Intelligent blind assisting system, method, computer equipment and storage medium Withdrawn CN112515928A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011348096.2A CN112515928A (en) 2020-11-26 2020-11-26 Intelligent blind assisting system, method, computer equipment and storage medium
PCT/CN2021/132316 WO2022111443A1 (en) 2020-11-26 2021-11-23 Intelligent blind assisting system and method, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011348096.2A CN112515928A (en) 2020-11-26 2020-11-26 Intelligent blind assisting system, method, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112515928A true CN112515928A (en) 2021-03-19

Family

ID=74993972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011348096.2A Withdrawn CN112515928A (en) 2020-11-26 2020-11-26 Intelligent blind assisting system, method, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112515928A (en)
WO (1) WO2022111443A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309983B (en) * 2022-07-21 2023-05-12 国家康复辅具研究中心 Auxiliary tool adaptation method, system and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107223046A (en) * 2016-12-07 2017-09-29 深圳前海达闼云端智能科技有限公司 intelligent blind-guiding method and device
CN108406848A (en) * 2018-03-14 2018-08-17 安徽果力智能科技有限公司 A kind of intelligent robot and its motion control method based on scene analysis
US10900788B2 (en) * 2018-12-03 2021-01-26 Sidharth ANANTHA Wearable navigation system for the visually impaired
CN112515928A (en) * 2020-11-26 2021-03-19 苏州中科先进技术研究院有限公司 Intelligent blind assisting system, method, computer equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015198284A1 (en) * 2014-06-26 2015-12-30 D Amico Alessio Maria Reality description system and method
CN110559127A (en) * 2019-08-27 2019-12-13 上海交通大学 intelligent blind assisting system and method based on auditory sense and tactile sense guide
CN111651035A (en) * 2020-04-13 2020-09-11 济南大学 Multi-modal interaction-based virtual experiment system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Aibao et al., "Theoretical Models of Audio-Visual Integration and Their Implications for Auditory Training of the Hearing-Impaired", Chinese Journal of Special Education *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022111443A1 (en) * 2020-11-26 2022-06-02 苏州中科先进技术研究院有限公司 Intelligent blind assisting system and method, computer device, and storage medium

Also Published As

Publication number Publication date
WO2022111443A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
Frühauf et al. Pilot study on the acceptance of mobile teledermatology for the home monitoring of high‐need patients with psoriasis
WO2022111443A1 (en) Intelligent blind assisting system and method, computer device, and storage medium
CN114898832A (en) Rehabilitation training remote control system, method, device, equipment and medium
Krösl et al. A VR-based user study on the effects of vision impairments on recognition distances of escape-route signs in buildings
Limanowski et al. Where do we stand on locating the self?
Biswas et al. Developing multimodal adaptation algorithm for mobility impaired users by evaluating their hand strength
Sullivan et al. Envisioning inclusive futures: Technology-based assistive sensory and action substitution
Gray et al. Communication difficulties with limited English proficiency patients: clinician perceptions of clinical risk and patterns of use of interpreters
US9521202B2 (en) Method for matching multiple devices, and device and server system for enabling matching
CN111428583A (en) Visual compensation method based on neural network and touch lattice
Bhatlawande et al. Electronic bracelet and vision-enabled waist-belt for mobility of visually impaired people
CN110266994A (en) Video call method, video call device and terminal
CN105204416A (en) Method for ward data acquisition
CN112114918A (en) Intelligent device, server, intelligent system and related interface display method
CN114236834A (en) Screen brightness adjusting method and device of head-mounted display equipment and head-mounted display equipment
KR20240027900A (en) Apparatus, method and program for calculating acupoint location using a learning model based on AI
TW200809220A (en) Information overlay system
Cruz-Cunha et al. Handbook of Research on Developments in E-Health and Telemedicine: Technological and Social Perspectives: Technological and Social Perspectives
CN104483754A (en) Head-wearing type multimedia terminal assisted watching system aiming at patient with dysopia
KR20240034094A Rehabilitation recovery exercise recommendation system and method based on living environment sensing using digital image recognition
CN111488500A (en) Medical problem information processing method and device and storage medium
CN116189882A (en) Human-computer interaction-based home cognitive rehabilitation training method and system for Alzheimer's disease
CN114299598A (en) Method for determining fixation position and related device
Gopalakrishnan et al. Preference of low vision devices in patients with central field loss and peripheral field loss
Vanderheiden et al. Design for people experiencing functional limitations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210319