CN215821381U - Visual field assistance device cooperating with an AR & VR head-mounted typoscope - Google Patents

Visual field assistance device cooperating with an AR & VR head-mounted typoscope

Info

Publication number
CN215821381U
CN215821381U (Application CN202023257819.6U)
Authority
CN
China
Prior art keywords
visual field
aid
typoscope
head
cooperative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202023257819.6U
Other languages
Chinese (zh)
Inventor
章晓聪
童晓煜
陈达
徐默
顾钊铨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Jishi Intelligent Technology Co., Ltd.
Original Assignee
Hangzhou Jishi Intelligent Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Jishi Intelligent Technology Co., Ltd.
Priority to CN202023257819.6U
Application granted
Publication of CN215821381U
Legal status: Active

Landscapes

  • Rehabilitation Tools (AREA)

Abstract

The utility model discloses a visual field assistance device cooperating with an AR & VR head-mounted typoscope, comprising: a vision aid body; a headband connected to the vision aid body and fixed on the head; an image acquisition assembly; a face recognition piece; a positioning piece; an attitude angle sensor; a controller; an information transmission assembly; and a voice device. According to the utility model, a three-dimensional scene is obtained by matching the first depth camera and the second depth camera with an infrared projector and the attitude angle sensor, and the information is converted into images or sounds to remind the visually impaired patient in advance, thereby realizing visual field compensation outside the blind area and helping low-vision patients better avoid obstacles and dangers.

Description

Visual field assistance device cooperating with an AR & VR head-mounted typoscope
Technical Field
The utility model relates to medical assistive equipment, in particular to a visual field assistance device cooperating with an AR & VR head-mounted typoscope.
Background
A vision aid is any device or apparatus that improves or enhances the visual ability of a low-vision patient. Much as a hearing aid enables a person with poor hearing to hear what he otherwise could not, a vision aid enables a low-vision patient to see clearly what he otherwise could not see.
With the rapid rise of machine vision in recent years, combined with existing augmented reality (AR) and virtual reality (VR) technologies, machine vision offers a new technical breakthrough for helping visually impaired people reestablish artificial vision: image recognition and image processing technologies perform visual training and reconstruction on real pictures, in the hope that machine vision can assist in training, supplement, and even replace the visual function of visually impaired people. This has also delivered a powerful new impact on the traditional market of aids for the visually impaired; the technical innovation is no longer limited to assistive applications but genuinely strengthens vision through training.
The normal dynamic visual angle limit of the human eyes is about 150 degrees in the vertical direction and 230 degrees in the horizontal direction, while the best visual field assistance range achievable by head-mounted typoscopes currently on the market is limited by the present level of hardware development: it is generally only about 45 degrees, and the best equipment reaches only 55 degrees. Whether the structure is AR or VR, the visual field is closed to a certain extent; the visual field image acquired by the visually impaired patient is limited to the range the optical structure can display, so objects, environments or people outside the camera range or outside the screen picture receive no vision-aid assistance, and the closed visual field easily leads to dangerous situations indoors or outdoors. Moreover, because of the limitations of the camera's image acquisition mode, the blind area in the vertical direction greatly affects the wearer's view of the ground underfoot during normal walking, so dangers at the wearer's feet cannot be noticed in time. For visually impaired patients, such vision aids improve vision but greatly restrict the visual field, forcing the patient to frequently turn the head left and right and up and down to scan for changes in the surroundings.
The market needs a vision-aid visual field assistance device that can compensate the visual field outside the blind area and help low-vision patients better avoid obstacles and dangers; the utility model solves these problems.
SUMMARY OF THE UTILITY MODEL
In order to overcome the defects of the prior art, the utility model aims to provide a visual field assistance device cooperating with an AR & VR head-mounted typoscope that can acquire information outside the blind area, supplement the unknown visual field, and convert the information into images or sound to remind the visually impaired patient in advance, so that the low-vision patient can effectively avoid obstacles and dangers.
In order to achieve the above object, the present invention adopts the following technical solutions:
A visual field assistance device cooperating with an AR & VR head-mounted typoscope, comprising: a vision aid body and a headband connected to the vision aid body and fixed on the head; an image acquisition assembly arranged on the vision aid body; a face recognition piece arranged on the vision aid body; a positioning piece arranged on the vision aid body and having a navigation function; an attitude angle sensor connected to the image acquisition assembly; a controller receiving information from the image acquisition assembly, the face recognition piece, the positioning piece and the attitude angle sensor; an information transmission assembly connected to the controller; and a voice device connected to the information transmission assembly.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the image acquisition assembly comprises: a first depth camera and a second depth camera arranged at positions simulating the human eyes, and an infrared ranging module arranged on the typoscope body for assisting acquisition.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the infrared ranging module is an infrared projector.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the face recognition piece is a color camera.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the positioning piece is a high-precision positioning chip.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the controller is an ARM processor.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the information transmission assembly comprises: a mobile communication module with a networking function connected to the controller, and a wireless connection module connected to the controller and the voice device.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the mobile communication module is a SIMCOM module.
In the aforementioned visual field assistance device cooperating with the AR & VR head-mounted typoscope, the wireless connection module is a Bluetooth module.
The utility model has the advantages that:
according to the utility model, the first depth camera and the second depth camera, which simulate the positions of the human eyes, collect three-dimensional information of the surrounding environment; an infrared projector projects invisible near-infrared static speckles; an attitude angle sensor obtains the attitude angles of the first and second depth cameras; and the depth information and attitude angle information in the image are processed to detect the position, height and distance of ramps and ground obstacles in the three-dimensional scene ahead, so that the three-dimensional scene is obtained, information outside the blind area can be acquired, and the unknown visual field is supplemented;
the structural design of the utility model divides distance into color bands by depth, so that the obtained image contains the depth result; finally the detection result is transmitted to the head-mounted typoscope via Bluetooth and converted into semantic sound, while a graphic obstacle prompt is displayed in the picture of the head-mounted typoscope to assist the visually impaired patient in obstacle avoidance;
the structural design of the utility model distinguishes different faces at far and near distances by distance screening, tracks different face poses in real time, and frontally aligns the faces; the aligned face images are used to train a face recognition model in an artificial neural network, and when a face image to be recognized is input into the trained model, the processing chip transmits the recognition result to the user by voice, assisting the visually impaired user's perception.
Drawings
FIG. 1 is a schematic structural diagram of an embodiment of the present invention;
FIG. 2 is a schematic representation of the use of the present invention;
FIG. 3 is an exemplary diagram of one embodiment of a radar prompt of the present invention.
Detailed Description
The utility model is described in detail below with reference to the figures and the embodiments.
As shown in fig. 1, a visual field assistance device cooperating with an AR & VR head-mounted typoscope comprises: a vision aid body and a headband connected to the vision aid body and fixed on the head; an image acquisition assembly arranged on the vision aid body; a face recognition piece arranged on the vision aid body; a positioning piece arranged on the vision aid body and having a navigation function; an attitude angle sensor connected to the image acquisition assembly; a controller receiving information from the image acquisition assembly, the face recognition piece, the positioning piece and the attitude angle sensor; an information transmission assembly connected to the controller; and a voice device connected to the information transmission assembly. Preferably, the face recognition piece is a color camera; the color camera mainly shoots environmental information for later image recognition, and the images are used for face recognition, face tracking and the like. Preferably, the controller is an ARM processor. The positioning piece is a high-precision positioning chip; as a further optimization, a GPS positioning chip or a Beidou positioning chip is adopted. The high-precision positioning chip provides navigation when the visually impaired patient goes out and, in cooperation with a binocular depth algorithm, ensures the patient's safety outdoors. Preferably, text is converted into voice by TTS technology and played by the voice device for information feedback.
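As an illustration of this feedback step, below is a minimal sketch of TTS voice output in Python using the pyttsx3 library; the library choice and the alert text are assumptions for illustration, since the utility model does not name a specific TTS engine.

```python
import pyttsx3  # offline text-to-speech engine (illustrative choice, not specified by the utility model)

def speak_alert(message: str) -> None:
    """Convert an alert string to speech and play it through the voice device."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # slightly slower speech for clarity
    engine.say(message)
    engine.runAndWait()

speak_alert("Obstacle ahead, two meters, slightly to the left.")
```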
The image acquisition assembly comprises: a first depth camera and a second depth camera simulating the positions of the human eyes, and an infrared ranging module arranged on the typoscope body for assisting acquisition. Preferably, the infrared ranging module is an infrared projector; preferably, the first depth camera and the second depth camera are ZED cameras or RealSense cameras.
The information transmission assembly comprises: a mobile communication module with a networking function connected to the controller, and a wireless connection module connected to the controller and the voice device. As an embodiment, the mobile communication module is a SIMCOM module and the wireless connection module is a Bluetooth module. It should be noted that this is only one example; other mobile communication modules or wireless connection modules may also be used within the structure of the utility model, which is not limited thereto.
The principle of operation of the utility model is described below:
the method comprises the steps of firstly, acquiring surrounding environment three-dimensional information through a first depth camera and a second depth camera, wherein a binocular vision technology belongs to one branch of a computer vision technology, and the main principle is that three-dimensional information of an object is acquired by processing two or more two-dimensional images, a three-dimensional live-action image of a corresponding scene is restored, and then a three-dimensional model of a real world is reconstructed. The system mainly utilizes an infrared projector to project invisible near-infrared static speckles, utilizes a first depth camera and a second depth camera to collect images, utilizes an attitude angle sensor to obtain attitude angle information of the cameras, processes the depth information and the attitude angle information in the images, detects the direction, height and distance of a ramp and a ground obstacle in a front three-dimensional scene, and has the main functions of obstacle detection, passage detection, step detection, pit detection, stair detection and up-and-down slope detection.
The specific detection method comprises the following steps:
A disparity map is obtained through the SGBM algorithm to recover the three-dimensional information of the image.
First, a left disparity map and a right disparity map are obtained through the SGBM algorithm; the data type of the left disparity map is CV_16UC1 and that of the right disparity map is CV_16SC1. In SGBM, an unreliable disparity value is set to (minDisparity − 1) × 16, so with the minimum disparity taken as 0 an unreliable value in the left disparity map is −16 (with truncation value 0), while an unreliable value in the right disparity map is set to (−numberOfDisparities − 1) × 16, whose absolute value is (numberOfDisparities + 1) × 16.
Second, since disparity is measured in pixels while depth is usually expressed in millimeters (mm), the following conversion formula between disparity and depth follows from the geometric relationship of parallel binocular vision:
depth=(f*baseline)/disp
In the above formula, depth denotes the depth map; f denotes the normalized focal length, i.e. fx in the camera intrinsic parameters; baseline is the distance between the optical centers of the two cameras, called the baseline distance; and disp is the disparity value.
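For illustration, the following Python sketch computes a disparity map with OpenCV's SGBM implementation and applies the depth = (f * baseline) / disp conversion above; the focal length, baseline and SGBM parameters are placeholder values, since the utility model does not specify them.

```python
import cv2
import numpy as np

FX = 700.0        # normalized focal length fx in pixels (placeholder)
BASELINE = 60.0   # distance between the two optical centers in mm (placeholder)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    block = 5
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,        # must be divisible by 16
        blockSize=block,
        P1=8 * block * block,     # smoothness penalties as recommended by OpenCV docs
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # SGBM returns fixed-point disparities scaled by 16; unreliable pixels
    # are set to (minDisparity - 1) * 16, i.e. negative values here.
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan        # mask unreliable disparities
    return FX * BASELINE / disp     # depth = (f * baseline) / disp, in mm
```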
Distance is then divided into bands by color according to depth, so that the obtained image contains the depth result. The detection result is transmitted to the head-mounted typoscope via Bluetooth and converted into semantic sound, while a graphic obstacle prompt is displayed in the picture of the head-mounted typoscope to assist the visually impaired patient in obstacle avoidance.
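A minimal sketch of the color-coded depth visualization described above, assuming OpenCV's JET colormap and a 5 m working range (both illustrative choices not fixed by the utility model):

```python
import cv2
import numpy as np

def colorize_depth(depth_mm: np.ndarray, max_range_mm: float = 5000.0) -> np.ndarray:
    """Encode depth as color so that nearer obstacles appear in warmer tones."""
    d = np.nan_to_num(depth_mm, nan=max_range_mm)   # treat unknown depth as far away
    d = np.clip(d, 0.0, max_range_mm)
    d8 = (255.0 * (1.0 - d / max_range_mm)).astype(np.uint8)  # near -> large value
    return cv2.applyColorMap(d8, cv2.COLORMAP_JET)
```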
For navigation, the integrated SIMCOM module is used for network communication and, together with a high-precision GPS module, performs positioning and path planning on the current position and destination information; the angle and displacement of the visually impaired wearer's forward orientation relative to the planned path are calculated from the attitude angle sensor. During walking, when the position approaches an intermediate node of a map label, the planned-path data is updated and corrected. The utility model mainly complements the data collected by the GPS positioner of a communicating blind cane with the image information collected by the head-mounted typoscope, provides accurate position and direction calibration for visually impaired users, and, by means of high-precision obstacle detection and shortest-path planning, helps visually impaired patients avoid obstacles and collisions while navigating and walking, thereby ensuring personal safety.
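As a sketch of the heading calculation mentioned above, the following computes the great-circle bearing from the current GPS fix to the next waypoint and the signed turn angle relative to the heading reported by the attitude angle sensor; the function names and this way of expressing "how much to turn" are assumptions for illustration.

```python
import math

def bearing_deg(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing (degrees clockwise from north) to the next waypoint."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def turn_angle_deg(heading_deg: float, target_bearing_deg: float) -> float:
    """Signed angle the wearer must turn, in [-180, 180); positive means turn right."""
    return (target_bearing_deg - heading_deg + 540.0) % 360.0 - 180.0
```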
The face recognition technology tracks faces by using the depth images of the first and second depth cameras together with the color image of the color camera (preferably an RGB camera), and assigns labels to the faces. Deep face recognition adopts the Triplet Loss algorithm, whose optimization target is that features of the same person be as close together as possible and features of different persons as far apart as possible. Let x_i^a, x_i^p and x_i^n denote the anchor, positive and negative examples of the i-th triplet, and let f(x_i^a), f(x_i^p) and f(x_i^n) denote the corresponding features. The Triplet Loss function is:
L = sum_i [ ||f(x_i^a) - f(x_i^p)||^2 - ||f(x_i^a) - f(x_i^n)||^2 + alpha ]_+
where [.]_+ means the loss equals the bracketed value if it is greater than zero and equals 0 if it is less than zero, and alpha is a hyperparameter specifying how much larger the distance between the anchor and negative features must be than the distance between the anchor and positive features. Minimizing this loss function ensures that the distance between features of the same face is smaller than the distance between face features of different people.
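A minimal PyTorch sketch of this Triplet Loss under the definitions above (a batch of anchor, positive and negative feature vectors; the margin value 0.2 is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_a: torch.Tensor, f_p: torch.Tensor, f_n: torch.Tensor,
                 alpha: float = 0.2) -> torch.Tensor:
    """Sum over the batch of max(0, ||f_a - f_p||^2 - ||f_a - f_n||^2 + alpha)."""
    d_pos = (f_a - f_p).pow(2).sum(dim=1)   # squared distance anchor-positive
    d_neg = (f_a - f_n).pow(2).sum(dim=1)   # squared distance anchor-negative
    return F.relu(d_pos - d_neg + alpha).sum()
```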
Different faces at far and near distances are distinguished by distance screening, so that distant faces in the background are blurred out and attention focuses on the nearest face. Compared with existing face recognition technology, the depth image processing algorithm of the cameras makes it possible to track different face poses in real time and frontally align the faces. The aligned face images are used to train a face recognition model in an artificial neural network; when a face image to be recognized is input into the trained model, the ARM processor transmits the recognition result to the user by voice, assisting the visually impaired user's perception.
As shown in FIG. 2, the first depth camera, the second depth camera and the attitude sensor cooperate with the communicating blind cane. The first and second depth cameras obtain three-dimensional information of the surrounding environment to help the visually impaired patient understand it; the ARM processor runs a built-in binocular depth algorithm and image recognition algorithm to process the acquired three-dimensional information, judge obstacles, and tell the visually impaired patient how to avoid them through sound feedback. The principle of obstacle avoidance is to acquire stereo information of the surrounding environment, namely the distance, direction and size of an obstacle, through the binocular cameras and a millimeter-wave radar, and to feed back different sounds for different scenes. In addition, the onboard face recognition system applies color-depth information fusion, target tracking, neural networks and other techniques to the information acquired by the color camera, so that as the visually impaired user uses the intelligent visual assistance equipment it gradually acquires and registers faces the user frequently encounters; when the visually impaired patient meets a registered face again, the system recognizes it, sends the recognition result to the head-mounted typoscope via Bluetooth, and displays or plays the result on the typoscope as graphics or sound. The reminder information includes the direction and distance of the object or person relative to the wearer: as shown in fig. 3, the direction is indicated by the direction of the radar waveform of the reminder point, the distance is represented by the flashing frequency, and the reminder ends once the wearer turns the head and the object or person appears in the wearer's visual field.
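As an illustration of the reminder behaviour in fig. 3, this sketch maps obstacle distance to a blink period and dismisses the prompt once the target enters the display's field of view; all thresholds and the 45-degree field of view are assumptions for illustration.

```python
def blink_period_s(distance_m: float, near_m: float = 0.5, far_m: float = 5.0,
                   fast_s: float = 0.2, slow_s: float = 1.5) -> float:
    """Nearer obstacles blink faster: linearly map distance onto a blink period."""
    d = min(max(distance_m, near_m), far_m)
    t = (d - near_m) / (far_m - near_m)   # 0 at near_m, 1 at far_m
    return fast_s + t * (slow_s - fast_s)

def reminder_active(relative_bearing_deg: float, fov_deg: float = 45.0) -> bool:
    """The prompt ends once the object or person appears within the wearer's view."""
    return abs(relative_bearing_deg) > fov_deg / 2.0
```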
It should be emphasized here that what the utility model claims is the structure of the visual field assistance device of the typoscope and the positional relationship of its parts described above; the software calculation methods are algorithms that can be realized by the prior art, and all of the above falls within the protection scope of the utility model.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.

Claims (9)

1. A visual field assistance device cooperating with an AR & VR head-mounted typoscope, comprising: a vision aid body and a headband connected to the vision aid body and fixed on the head; an image acquisition assembly arranged on the vision aid body; a face recognition piece arranged on the vision aid body; a positioning piece arranged on the vision aid body and having a navigation function; an attitude angle sensor connected to the image acquisition assembly; a controller receiving information from the image acquisition assembly, the face recognition piece, the positioning piece and the attitude angle sensor; an information transmission assembly connected to the controller; and a voice device connected to the information transmission assembly.
2. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 1, wherein the image acquisition assembly comprises: a first depth camera and a second depth camera arranged at positions simulating the human eyes, and an infrared ranging module arranged on the typoscope body for assisting acquisition.
3. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 2, wherein the infrared ranging module is an infrared projector.
4. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 1, wherein the face recognition piece is a color camera.
5. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 1, wherein the positioning piece is a high-precision positioning chip.
6. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 1, wherein the controller is an ARM processor.
7. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 1, wherein the information transmission assembly comprises: a mobile communication module with a networking function connected to the controller, and a wireless connection module connected to the controller and the voice device.
8. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 7, wherein the mobile communication module is a SIMCOM module.
9. The visual field assistance device cooperating with an AR & VR head-mounted typoscope according to claim 7, wherein the wireless connection module is a Bluetooth module.
CN202023257819.6U 2020-12-29 2020-12-29 Visual field assistance device cooperating with an AR & VR head-mounted typoscope Active CN215821381U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202023257819.6U CN215821381U (en) 2020-12-29 2020-12-29 Visual field assistance device cooperating with an AR & VR head-mounted typoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202023257819.6U CN215821381U (en) 2020-12-29 2020-12-29 Visual field assistance device cooperating with an AR & VR head-mounted typoscope

Publications (1)

Publication Number Publication Date
CN215821381U 2022-02-15

Family

ID=80186654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202023257819.6U Active CN215821381U (en) Visual field assistance device cooperating with an AR & VR head-mounted typoscope

Country Status (1)

Country Link
CN (1) CN215821381U (en)

Similar Documents

Publication Publication Date Title
US11796309B2 (en) Information processing apparatus, information processing method, and recording medium
CN106214437B (en) A kind of intelligent blind auxiliary eyeglasses
CN104536579B (en) Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
CN106056092A (en) Gaze estimation method for head-mounted device based on iris and pupil
WO2012142202A1 (en) Apparatus, systems and methods for providing motion tracking using a personal viewing device
Sáez et al. Aerial obstacle detection with 3-D mobile devices
KR101885473B1 (en) Smart glass to help visually impaired
CN108245385A (en) A kind of device for helping visually impaired people&#39;s trip
CN106840112A (en) A kind of space geometry measuring method of utilization free space eye gaze point measurement
WO2022047828A1 (en) Industrial augmented reality combined positioning system
CN109059929A (en) Air navigation aid, device, wearable device and storage medium
EP4164565A1 (en) Blind assist eyewear with geometric hazard detection
Ghaderi et al. A wearable mobility device for the blind using retina-inspired dynamic vision sensors
CN106920260B (en) Three-dimensional inertial blind guiding method, device and system
CN112489138B (en) Target situation information intelligent acquisition system based on wearable equipment
CN112188059B (en) Wearable device, intelligent guiding method and device and guiding system
Kawai et al. A support system for visually impaired persons to understand three-dimensional visual information using acoustic interface
CN215821381U (en) Visual field assistance device cooperating with an AR & VR head-mounted typoscope
Botezatu et al. Development of a versatile assistive system for the visually impaired based on sensor fusion
McMurrough et al. 3D point of gaze estimation using head-mounted RGB-D cameras
Bourbakis et al. A 2D vibration array for sensing dynamic changes and 3D space for Blinds' navigation
CN113050917A (en) Intelligent blind-aiding glasses system capable of sensing environment three-dimensionally
Sujith et al. Computer Vision-Based Aid for the Visually Impaired Persons-A Survey And Proposing New Framework
CN214122904U (en) Dance posture feedback device
CN109583372A (en) Augmented reality system and its apparatus for nighttime driving

Legal Events

Date Code Title Description
GR01 Patent grant