CN110708533B - Visual assistance method based on augmented reality and intelligent wearable device - Google Patents

Visual assistance method based on augmented reality and intelligent wearable device

Info

Publication number
CN110708533B
Authority
CN (China)
Prior art keywords
image, information, user, light, preset
Legal status
Active (granted)
Application number
CN201911289930.2A
Other languages
Chinese (zh)
Other versions
CN110708533A (en)
Inventor
钟张翼
Current Assignee
Hangzhou Rongmeng Intelligent Technology Co Ltd
Original Assignee
Hangzhou Rongmeng Intelligent Technology Co Ltd
Application filed by Hangzhou Rongmeng Intelligent Technology Co Ltd
Priority to CN201911289930.2A
Publication of CN110708533A
Application granted
Publication of CN110708533B

Classifications

    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • G02B27/0172 Head mounted characterised by optical features
    • G06F3/012 Head tracking input arrangements
    • G06F3/013 Eye tracking input arrangements
    • G06T19/006 Mixed reality
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Abstract

The invention relates to the technical field of augmented reality, and in particular to a visual assistance method based on augmented reality and an intelligent wearable device. An external image containing content information is acquired; the external image is processed with an image analysis algorithm to transform the native style of the content information into a preset style; a first light ray is emitted that can form a first virtual image containing the content information in the preset style; a second light ray is received that can form a live-view image of the external scene; and the two light rays are synthesized and presented. Therefore, the method can transform the original style of the external image into a preset style that is easier to recognize, form the content information in the preset style into the first virtual image, and then combine the first virtual image with the external scene to present it in front of the user.

Description

Visual assistance method based on augmented reality and intelligent wearable device
Technical Field
The invention relates to the technical field of augmented reality, and in particular to a visual assistance method based on augmented reality and an intelligent wearable device.
Background
Many people in society have visual impairment. For example, many low-vision people who suffer from ophthalmic diseases cannot have their vision corrected by glasses, medicine or surgery; they cannot see objects clearly, experience tunnel vision (loss of peripheral vision), or have unrecoverable blind spots in their visual field. Visual impairment seriously affects the quality of life and the working ability of such patients.
Visual aids currently on the market include magnifying glasses, electronic low-vision aids and the like, but these devices cannot resolve faces or distant scenes, have a small field of view, and are inconvenient to use.
Disclosure of Invention
An object of embodiments of the present invention is to provide a visual assistance method based on augmented reality and an intelligent wearable device that can visually assist a user so that the user can more clearly recognize the surrounding environment.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a visual assistance method based on augmented reality, applied to an intelligent wearable device, including:
acquiring an external image, wherein the external image contains content information;
processing the external image using an image analysis algorithm to transform a native style of the content information into a preset style;
emitting a first light ray, wherein the first light ray can form a first virtual image, and the first virtual image comprises the content information in the preset style;
receiving a second light ray, wherein the second light ray can form a real image, and the real image comprises an external scene;
and synthesizing the first light ray and the second light ray to present a synthesized image.
In some embodiments, the image analysis algorithm comprises an image edge detection algorithm, and the processing the external image using the image analysis algorithm to transform the native style of the content information into a preset style comprises:
processing the external image by using the image edge detection algorithm, and determining a first connected domain of each character of the content information in the external image;
and rendering the first connected domain of each character so as to transform the native style of the content information into the preset style.
In some embodiments, the rendering the first connected domain of each character to transform the native style of the content information into a preset style includes:
performing reverse selection processing on the first connected domains of all characters in the external image to obtain a second connected domain of a non-character region in the content information;
rendering the native style of the second connected domain to the preset style.
In some embodiments, the presenting the synthesized image comprises:
tracking the position of the vision defect of eyeballs of a user wearing the intelligent wearable device;
and presenting the first virtual image in a first visual field range corresponding to the non-vision defect position of the user.
In some embodiments, said presenting said first virtual image in a first field of view corresponding to a non-vision deficiency location of said user comprises:
determining coordinate information of the vision defect position of the user in a coordinate system of the intelligent wearable device;
determining a first visual field range corresponding to the non-vision defect position of the user according to the coordinate information;
presenting the first virtual image at the first field of view.
In some embodiments, the acquiring the external image comprises:
tracking a head rotation angle and/or an eyeball rotation angle of the user;
and acquiring an external image in a second visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
In some embodiments, the method further comprises:
acquiring an environment image;
when an object image matched with a preset shape is extracted from the environment image, detecting the distance between the intelligent wearable device and the object;
generating a third light ray according to the distance, wherein the third light ray can form a second virtual image, and the second virtual image contains distance prompt information;
and synthesizing the third light ray and the second light ray.
In some embodiments, the method further comprises:
processing the environment image by using a face analysis algorithm to obtain person feature information;
judging whether preset feature information matched with the person feature information exists in a preset database;
if yes, selecting identity information corresponding to the preset feature information, generating a fourth light ray according to the identity information, and synthesizing the fourth light ray with the second light ray, wherein the fourth light ray can form a third virtual image, and the third virtual image contains the identity information.
In some embodiments, the method further comprises:
acquiring the current position and the destination position of the intelligent wearable device;
generating voice navigation information and navigation prompt information according to the current position and the destination position;
generating a fifth light ray according to the voice navigation information and the navigation prompt information, wherein the fifth light ray can form a fourth virtual image, and the fourth virtual image comprises the navigation prompt information;
and combining the fifth light ray and the second light ray.
In some embodiments, the method further comprises:
acquiring physiological sign information of a user wearing the intelligent wearable device, together with pictures and/or videos of the surrounding environment;
judging whether the physiological sign information meets a preset alarm condition;
and if so, sending the geographical position information of the user together with the pictures and/or videos to a target party.
In a second aspect, an embodiment of the present invention provides an intelligent wearable device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform an augmented reality based visual assistance method as described above.
The embodiments of the invention have the following beneficial effects. Different from the prior art, in the embodiments of the present invention an external image containing content information is first acquired; the external image is then processed with an image analysis algorithm to transform the native style of the content information into a preset style; a first light ray is then emitted, where the first light ray can form a first virtual image containing the content information in the preset style; a second light ray capable of forming a live-view image is then received, the live-view image containing the external scene; and finally the first light ray and the second light ray are synthesized to present the synthesized image. Therefore, the method can transform the original style of the external image into a preset style that is easier to recognize, form the content information in the preset style into the first virtual image, and then combine the first virtual image with the external scene to present it in front of the user.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements and in which the figures are not drawn to scale unless otherwise specified.
Fig. 1a is a schematic structural diagram of an intelligent wearable device according to an embodiment of the present invention;
fig. 1b is a schematic structural diagram of an intelligent wearable device according to another embodiment of the present invention;
FIG. 1c is a schematic view of the see-through light guide element of FIG. 1a disposed on the head-mounted frame;
FIG. 1d is a first graph of the side viewing angle versus the display brightness of the display module shown in FIG. 1a;
FIG. 1e is a second graph of the side viewing angle versus the display brightness of the display module shown in FIG. 1a;
FIG. 1f is a third graph of the side viewing angle versus the display brightness of the display module shown in FIG. 1a;
FIG. 2a is a schematic diagram of the positional relationship between the display module and the user's face when the intelligent wearable device shown in FIG. 1a is worn;
FIG. 2b is a schematic view of the display module shown in FIG. 1a being rotated;
FIG. 3a is a schematic imaging diagram of the smart wearable device shown in FIG. 1a;
FIG. 3b is a schematic view of one embodiment of the smart wearable device shown in FIG. 1a;
FIG. 3c is a schematic view of one embodiment of the smart wearable device shown in FIG. 1a;
FIG. 3d is a schematic view of one embodiment of the smart wearable device shown in FIG. 1a;
FIG. 3e is a schematic view of one embodiment of the smart wearable device shown in FIG. 1a;
FIG. 4 is a schematic view of the smart wearable device shown in FIG. 1a connected to an external device for operation;
FIG. 5 is a schematic structural diagram of a visual aid device based on augmented reality according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for visual assistance based on augmented reality according to an embodiment of the present invention;
FIG. 7 is a schematic flow chart of step 201 in FIG. 6;
FIG. 8 is a schematic flow chart of step 202 of FIG. 6;
FIG. 9 is a schematic flow chart of step 2022 of FIG. 8;
FIG. 10 is a schematic flow chart of step 205 of FIG. 6;
FIG. 11 is a schematic flow chart of step 2052 in FIG. 10;
FIG. 12 is a flowchart illustrating a method for augmented reality-based visual aid according to another embodiment of the present invention;
FIG. 13 is a flowchart illustrating a method for augmented reality-based visual aid according to yet another embodiment of the present invention;
FIG. 14 is a flowchart illustrating a method for augmented reality-based visual aid according to yet another embodiment of the present invention;
fig. 15 is a flowchart illustrating a visual assistance method based on augmented reality according to still another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1a, an embodiment of the present invention provides an intelligent wearable device whose total weight is less than 350 g. The intelligent wearable device includes: a head-mounted frame 11, two display modules 12 and two see-through light guide elements 13. The see-through light guide element 13 is an optical combining device capable of presenting part of the real scene together with part of a generated virtual image.
The display module 12 and the see-through light guide element 13 are both disposed on the head-mount frame 11, and the head-mount frame 11 fixes the display module 12 and the see-through light guide element 13. The display module 12 is disposed on the upper side of the see-through light guide element 13, and light emitted from the display module 12 can pass through the see-through light guide element 13 and then be transmitted to human eyes. Optionally, the display module 12 may also be located at the side of the see-through light guide element 13.
The intelligent wearable device further comprises a main board 17, which is arranged on the head-mounted frame 11 and located between the two display modules 12. The main board 17 is provided with a processor, and the processor is used for processing virtual image signals and displaying the virtual image information on the display modules 12.
Referring to fig. 1b, the head-mounted frame 11 is further provided with a monocular camera 111, a binocular/multi-view camera 112, an eye tracking camera 113, a gyroscope 114, an accelerometer 115, a magnetometer 116, a depth-of-field sensor 117, an ambient light sensor 118 and/or a distance sensor 119.
The monocular camera 111, the binocular/multi-view camera 112, the eye tracking camera 113, the gyroscope 114, the accelerometer 115, the magnetometer 116, the depth-of-field sensor 117, the ambient light sensor 118 and/or the distance sensor 119 are electrically connected to the main board 17.
Specifically, the monocular camera 111 is a color monocular camera placed at the front of the head-mounted frame 11. When the user wears the intelligent wearable device, the monocular camera 111 faces away from the user's face, so this camera can be used to take photographs.
In the embodiment of the present invention, the head-mounted frame 11 is adapted to be worn on the head of the user, and each see-through light guide element 13 has an inward surface facing the user's eyes. The camera captures an image of a distant location or an image that the user wants to see clearly and transmits this external image to the main board 17. The external image contains content information, and the processor on the main board 17 processes the external image information, specifically:
the processor processes the external image using an image analysis algorithm to transform the native pattern of the content information into a preset pattern. The content information of the external image includes information such as numbers and images, and its original style includes original fonts and original colors of the numbers, and also includes styles such as original colors and background colors of the images. In order to enable the user to see the external image more clearly, the processor transforms the original style of the external image into a preset style, the preset style can be defined by the user, for example, the preset style is a style after changing a font in the original style, or a style after changing a font size or a font color or a font background color of the original style, and the preset style is more striking and easier to recognize than the original style.
Therefore, when the user looks at the external image, the content information of the external image can be identified more clearly through the preset style. When the processor converts the original style of the content information of the external image into the preset style, it can process the external image with an image edge detection algorithm to determine the first connected domain of each character of the content information in the external image. In a digital image, an edge is the part where the local change of the image is most significant. Edges mainly exist between one target and another, and between a target and the background, where local image characteristics are discontinuous, for example a sudden change in gray scale, a change in texture structure, or a change in color. Although image edge points arise for different reasons, they are all points where the gray scale is discontinuous or changes sharply, and image edges can be divided into step-shaped, ramp-shaped and roof-shaped edges. A general image edge detection method mainly comprises the following four steps:
1. image filtering
Conventional edge detection algorithms are mainly based on the first and second derivatives of the image intensity, but the calculation of derivatives is sensitive to noise, so filters must be used to improve the performance of edge detectors in the presence of noise. It should be noted that most filters also reduce edge strength while reducing noise, so a compromise between enhancing edges and reducing noise is required.
2. Image enhancement
The basis of edge enhancement is to determine the change in neighborhood intensity at each point of the image. An enhancement algorithm highlights points where there is a significant change in the neighborhood (or local) intensity value. Edge enhancement is typically accomplished by calculating the gradient magnitude.
3. Image detection
The gradient magnitude is relatively large at many points in the image, and in a particular application not all of these points are edges, so some criterion should be used to determine which points are edge points. The simplest edge detection criterion is a threshold on the gradient magnitude.
4. Image localization
If an application requires the edge location to be determined, the position of the edge can be estimated at sub-pixel resolution, and the orientation of the edge can also be estimated. Many edge detection operators have been proposed over the last twenty years; only the common edge detection operators are discussed here.
Common image edge detection algorithms include methods such as differential edge detection and the Roberts, Sobel and Prewitt operators.
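A minimal sketch of the filter/enhance/detect steps described above, assuming OpenCV and NumPy as the image-processing libraries (the disclosure names no implementation); the function names and threshold value are illustrative, and the localization step is omitted:

```python
# Minimal sketch (assumed libraries: OpenCV, NumPy) of the edge detection steps above.
import cv2
import numpy as np

def detect_edges(image_bgr, threshold=80):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # 1. Image filtering: Gaussian blur suppresses noise before differentiation.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # 2. Image enhancement: gradient magnitude from Sobel first derivatives.
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    # 3. Detection: keep points whose gradient magnitude exceeds a threshold.
    return (magnitude > threshold).astype(np.uint8) * 255
```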
In addition, connected domains include simply connected regions and multiply connected regions. A region G in the complex plane is called simply connected if, for any simple closed curve drawn in G, the interior of the curve always belongs to G. A region that is not simply connected is called a multiply connected region.
Therefore, the processor can determine the first connected domain of each character of the content information in the external image through the image edge detection algorithm, that is, determine the character portion of the content information, and then render the first connected domain of each character so that the original style of the content information is converted into the preset style. For example, if the characters in the content information are originally black, the processor can render them yellow; since human eyes are more sensitive to yellow, the characters of the content information become easier to identify. The processor can render not only the characters in the external image but also the image portion other than the characters. When rendering the portion other than the characters, the processor performs inverse selection on the first connected domains of all characters in the external image to obtain the second connected domain, i.e. the non-character region of the content information, and then renders the original style of the second connected domain to the preset style. For example, if the external image is black characters on a white background, after the first rendering step it becomes yellow characters on a white background; the processor then obtains the second connected domain, i.e. the background excluding all characters, and renders the white background as a black background.
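A minimal sketch of the connected-domain rendering just described, again assuming OpenCV; the yellow-on-black color choice follows the example in the text, while the Otsu thresholding and component labelling are illustrative assumptions:

```python
# Minimal sketch (assumed library: OpenCV). Black text on a white background is turned
# into yellow characters on a black background, as in the example above.
import cv2
import numpy as np

def recolor_text(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # First connected domains: binary mask of the (dark) character regions.
    _, text_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    num_labels, labels = cv2.connectedComponents(text_mask)
    # Second connected domain (inverse selection of all characters) -> black background.
    out = np.zeros_like(image_bgr)
    # Character domains -> yellow (BGR order).
    out[labels > 0] = (0, 255, 255)
    return out
```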
After transforming the original style of the content information of the external image into the preset style, the processor transmits the content information to the display module 12, and the display module 12 displays it. The display module 12 emits a first light ray, containing the virtual image information, toward the see-through light guide element 13. At the same time the external scene emits a second light ray, which is also received by the see-through light guide element 13 and which can form a live-view image containing the external scene. The see-through light guide element 13 combines the first light ray and the second light ray; the combined light is then directed into the user's left eye via the inward surface of one see-through light guide element 13, while the other combined light ray, conducted via the inward surface of the other see-through light guide element 13, enters the user's right eye, so that a composite image of the virtual image and the live-view image of the external scene is formed in the user's brain.
Referring to fig. 1c, the two see-through light guide elements 13 are disposed on the head-mounted frame 11 and are each embedded in it independently. Alternatively, two regions corresponding to the user's left and right eyes may be provided on the raw material used to make the see-through light guide element, each region having the same shape and size as one of the independently mounted see-through light guide elements 13 described above; the final effect is one large see-through light guide element provided with two regions corresponding to the user's left and right eyes. In other words, two regions with the same shape and size as the independently mounted see-through light guide elements 13 are formed on one large piece of see-through light guide material, that is, the two see-through light guide elements 13 are integrally formed. The see-through light guide element provided with the regions corresponding to the user's left and right eyes is embedded in the head-mounted frame 11.
It should be noted that the display module 12 is detachably mounted on the head-mounted frame 11, for example, the display module is an intelligent display terminal such as a mobile phone and a tablet computer; alternatively, the display module is fixedly mounted on the head-mounted frame, for example, the display module is integrally designed with the head-mounted frame.
Two display modules 12 may be mounted on the head-mounted frame 11, and one display module 12 is correspondingly disposed for the left eye and the right eye of the user, for example, one display module 12 is used for emitting a first light ray containing left-eye virtual image information, and the other display module 12 is used for emitting another first light ray containing right-eye virtual image information. The two display modules 12 may be respectively located above the two perspective light guide elements 13 in a one-to-one correspondence manner, and when the intelligent wearable device is worn on the head of a user, the two display modules 12 are respectively located above the left eye and the right eye of the user in a one-to-one correspondence manner; the display module 12 may also be located at a side of the perspective type light guide element, that is, two perspective type light guide elements are located between two display modules, and when the intelligent wearable device is worn on the head of the user, the two display modules are located at sides of the left eye and the right eye of the user in a one-to-one correspondence manner.
A single display module 12 may also be mounted on the head-mounted frame 11, and the single display module 12 has two display regions, one display region is used for emitting a first light ray containing left-eye virtual image information, and the other display region is used for emitting another first light ray containing right-eye virtual image information.
The Display module includes, but is not limited to, LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), LCOS (Liquid Crystal On Silicon), and other types of displays.
Referring to FIG. 1d, the lateral axis identifies the side viewing angle and the longitudinal axis represents the display brightness. When the display module 12 is an LCD, the brightness of the display module 12 varies with the angle of the viewer. For a general LCD, the side viewing angle θ at a display luminance of 50% is generally large.
When the LCD is used in an augmented reality display system, a small side viewing angle is appropriate, and the brightness of the display module 12 is concentrated in an angular region near the center. Since the augmented reality display system mainly uses the angular region near the center, the brightness of the first and second light rays projected to the user's eyes is higher. Referring to fig. 1e, for an LCD used in an augmented reality display system, the side viewing angle θ at 50% display brightness is generally smaller. Moreover, the brightness distribution of the first and second light rays emitted by such an LCD is bilaterally symmetrical about a side viewing angle of 0 degrees and is confined within a side viewing angle of 60 degrees. That is, when the user's viewing angle is perpendicular to the display module 12, the display brightness of the first and second light rays emitted by the display module 12 is maximal; when the viewing angle shifts to either side, the display brightness gradually decreases; and when the side viewing angle reaches 60 degrees, the display brightness falls to 0.
Alternatively, referring to fig. 1f, the luminance distributions of the first and second light rays emitted from the LCD applied to the augmented reality display system may not be symmetrical about the 0 degree side view angle, and the side view angle when the display luminance is brightest may not be 0 degree.
Referring to fig. 2a, the two display modules 12 are located above the two see-through light guide elements 13 in one-to-one correspondence. When the user wears the intelligent wearable device, each display module 12 forms an included angle a with the frontal plane of the user's head, the included angle a being 0 to 180 degrees, preferably an obtuse angle. Meanwhile, the projection of the display module 12 on the horizontal plane is perpendicular to the frontal plane.
Referring to fig. 2b, in some examples, the position of the see-through light guiding element 13 can be rotated by an angle b around a rotation axis perpendicular to the horizontal plane, wherein the angle b is 0 to 180 degrees, preferably 0 to 90 degrees. Meanwhile, the distance between the perspective light guide elements 13 corresponding to the left eye and the right eye can be adjusted through a mechanical structure on the head-mounted frame 11 to adapt to the interpupillary distance of different users, so that the comfort level and the imaging quality during use are ensured. The farthest distance between the edges of the two see-through light guiding elements 13 is less than 150 mm, i.e. the distance from the left edge of the see-through light guiding element 13 arranged corresponding to the left eye to the right edge of the see-through light guiding element 13 arranged corresponding to the right eye is less than 150 mm. Correspondingly, the display modules 12 are connected through a mechanical structure, and the distance between the display modules 12 can be adjusted, or the same effect can be achieved by adjusting the positions of the display contents on the display modules 12.
The head-mounted frame 11 may be a glasses-type frame structure that rests on the user's ears and nose bridge. A nose pad 1110 and temples 1111 are provided on it, and the frame is fixed on the user's head through the nose pad 1110 and the temples 1111; the temples 1111 are foldable structures, the nose pad 1110 rests on the user's nose bridge, and the temples 1111 rest on the user's ears. Furthermore, the temples 1111 can be connected by an elastic band that tightens them when the glasses are worn, helping to fix the frame on the head.
Optionally, the nose pad 1110 and the temples 1111 are retractable mechanisms, and the height of the nose pad 1110 and the extended length of the temples 1111 can be adjusted separately. Similarly, the nose pad 1110 and the temples 1111 can be detachable and replaced after removal.
Alternatively, the head-mounted frame 11 may include a nose pad and a flexible rubber band, and the nose pad and the flexible rubber band are fixed on the head of the user; or only comprises a telescopic rubber band which is fixed on the head of the user. Alternatively, the head-mounted frame 11 may be a helmet-type frame structure for wearing on the top of the head and the bridge of the nose of the user. In the embodiment of the present invention, the main function of the head-mounted frame 11 is to be worn on the head of the user and to provide support for the optical and electrical components such as the display module 12 and the see-through light guide element 13, and the head-mounted frame includes but is not limited to the above-mentioned modes.
Referring to fig. 1a and fig. 3a together, the processed external image includes content information in the preset style, which is transmitted to the display module 12. The display module 12 emits a first light ray 121, which can form a first virtual image for the left eye containing the content information in the preset style; the first light ray 121 is conducted by the inward surface 131 of the see-through light guide element 13 and enters the user's left eye 14. Similarly, the display module 12 emits another first light ray containing the content information in the preset style, which can form a first virtual image for the right eye; this other first light ray is conducted by the inward surface of the other see-through light guide element and enters the user's right eye, so that a visual experience of the virtual image is formed in the user's brain. Therefore, the intelligent wearable device can turn an image of a distant place, or an image the user cannot see clearly, into a virtual image containing the content information in the preset style, so that the user can clearly identify the distant image.
In the embodiment of the present invention, when the intelligent wearable device realizes the function of augmented reality, each of the see-through light guide elements 13 further has an outward surface opposite to the inward surface; the second light rays containing the live-view image information of the external scene transmitted through the outward and inward facing surfaces of the see-through light guide element 13 enter both eyes of the user to form a visual sense of a mixed virtual image and real live view. Referring to fig. 1a again, one of the see-through light guide elements 13 further has an outward surface 132 opposite to the inward surface 131, and the second light ray 151 containing the live-view image information of the external scene transmitted through the outward surface 132 and the inward surface 131 of the see-through light guide element 13 enters the left eye 14 of the user.
The monocular camera 111 may also be a high-resolution camera for taking pictures or shooting video. The captured video can also be overlaid, through software, with the virtual objects seen by the user, so that the content the user sees through the intelligent wearable device can be reproduced.
The binocular/multi-view camera 112 may be a monochrome or color camera, which is disposed in front of or at a side of the head mount frame 11, and is located at one side, both sides, or the periphery of the monocular camera 111. Further, the binocular/multi-view camera 112 may be provided with an infrared filter. By using the binocular camera, the depth of field information on the image can be further obtained on the basis of obtaining the environment image. By using the multi-view camera, the visual angle of the camera can be further expanded, and more environment images and depth information can be obtained.
Alternatively, each of the monocular cameras or the binocular/multi-view cameras may be an RGB camera, a monochrome camera or an infrared camera.
The eye tracking camera 113 is disposed at one side of the see-through light guide element 13 and, when the user wears the intelligent wearable device, faces toward the user's face. The eye tracking camera 113 is used to track the focus of the human eye, so that the virtual object or the specific part of the virtual screen the eye is watching can be tracked and specially processed. For example, specific information about an object can be automatically displayed beside the object the eyes are watching. In addition, the area the eyes are watching can display a high-definition virtual object image while other areas display only a low-definition image, which effectively reduces the amount of image rendering computation without affecting the user experience.
In some embodiments, users with vision defects may wear the intelligent wearable device to help improve their effective vision. In order to better present the mixed visual experience of virtual image and real scene, when the user wears the intelligent wearable device the eye tracking camera 113 tracks the vision defect position of the user's eyeball, and when the virtual image and the real scene are presented via the first and second light rays, the first virtual image is presented in a first field of view corresponding to the user's non-defective vision position. Specifically:
this intelligence wearing equipment has a coordinate system, it can sign user's whole eyeball position, whole eyeball position corresponds its field of vision scope that can see, the coordinate system can also sign virtual image in the position of field of vision scope, eyeball tracking camera 113 can track user's eyeball position, and the processor confirms the visual defect position of user at the coordinate information of this coordinate system, then according to this coordinate information, confirm the first field of vision scope that user's non-visual defect position corresponds, this first field of vision scope is the better position of user's eyesight, finally, present first virtual image in first field of vision scope, for example: the vision defect position of the user is around the eyeball, then the intelligent wearable device presents the first virtual image in the visual field range corresponding to the center of the eyeball, or the vision of the center position of the eyeball of the user is lost, then the intelligent wearable device presents the first virtual image in the visual field range corresponding to the periphery of the eyeball. Therefore, the content information of the processed external image can be presented in the visual field range of the user with better vision, the visual blind spot area of the user is avoided, and the user can better identify the content information of the external image.
In some embodiments, when the user wearing the smart wearable device rotates their head or eyeballs, the head rotation angle and/or eyeball rotation angle of the user can be obtained by fusing the data of the eye tracking camera 113, the gyroscope 114, the accelerometer 115 and the magnetometer 116, so that an external image in the second field of view corresponding to the head rotation angle and/or eyeball rotation angle can be acquired and transmitted to the processor for corresponding processing.
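A minimal sketch of one way the sensor fusion and field-of-view selection could work; the complementary-filter weighting and the crop geometry are assumptions, and `frame` is an assumed NumPy image array from the outward-facing camera:

```python
# Minimal sketch: fuse gyroscope and magnetometer readings into a head yaw angle, then
# crop the second field of view from a wide camera frame. Parameter values are illustrative.
def fuse_head_yaw(prev_yaw_deg, gyro_yaw_rate_dps, mag_heading_deg, dt_s, alpha=0.98):
    integrated = prev_yaw_deg + gyro_yaw_rate_dps * dt_s         # short term: gyroscope
    return alpha * integrated + (1.0 - alpha) * mag_heading_deg  # long term: magnetometer

def second_field_of_view(frame, yaw_deg, fov_deg=60.0, frame_fov_deg=120.0):
    """Crop the part of a wide camera frame corresponding to the current head yaw."""
    h, w = frame.shape[:2]
    centre = w / 2 + (yaw_deg / frame_fov_deg) * w  # shift the crop centre with yaw
    half = int(w * fov_deg / frame_fov_deg / 2)
    left = max(0, min(w - 2 * half, int(centre) - half))
    return frame[:, left:left + 2 * half]
```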
In some embodiments, the distance sensor 119 may be disposed at the front or the side of the head-mounted frame 11 to detect the distance between the user wearing the smart wearable device and objects in the external environment, so as to remind the user to avoid them. Specifically:
when the monocular camera 111 acquires an external image, an environment image around the user may be acquired, the processor extracts an object image from the environment image, then traverses a preset shape in the memory, and determines whether the object information matches the preset shape, where the preset shape is some shapes stored in advance, and may be some shapes of objects common to people, for example: various shapes of vehicles, human shapes, various shapes of buildings, shapes of telephones, shapes of tea cups, shapes of computers and the like are stored in a memory of a processor in advance, when the object information is judged to be matched with the preset shapes, the processor can perform image analysis on the acquired object images, and specific attributes contained in the object can be analyzed specifically, for example: if the obtained object information matches the shape of the automobile, the processor performs image analysis on the image of the object to analyze which type of automobile the object is, and may further obtain information such as the color of the automobile, the license plate of the automobile, and even the head direction of the automobile, and then the distance sensor 119 detects the distance between the intelligent wearable device and the object, and then generates a third light according to the distance, where the third light may form a second virtual image including distance prompt information, and in some embodiments, the third light may further include attribute information of the object to further form a second virtual image including object attribute information and distance prompt information, and the display module 12 emits the third light, and the third light enters the left eye 14 of the user via the third light transmitted by the inward surface 131 of the see-through light guide element 13; similarly, the display module 12 emits another third light ray, which contains the distance prompt information and/or the attribute information of the object, and may form a second virtual image of the right eye, and the another third light ray enters the right eye of the user via another third light ray conducted towards the inside of another perspective type light guide element, so as to form the visual perception of the first virtual image and the second virtual image in the brain of the user, and this operation may remind the user of the distance to the object in front, and plan the route in advance, for example: referring to fig. 3b, as shown in fig. 
3b, if a vehicle is traveling ahead of the user, the processor may extract a vehicle image from the environment image after obtaining the image of the environment around the user, then traverse the preset shape of the memory, if the preset shape includes the vehicle shape, if the vehicle image extracted from the environment image matches the preset vehicle shape, the processor analyzes the specific attribute of the object by performing image analysis on the object image, where the specific attribute may include information such as a specific model of the vehicle, a color of the vehicle, a license plate of the vehicle, and a head orientation of the vehicle, and then the detection module of the intelligent wearable device detects a distance between the intelligent wearable device and the object, where the distance is 20 meters, and then forms a second virtual image with third light rays by using the distance prompt information and/or the attribute information of the ahead object, therefore, the user can see the external real scene and the distance information between the user and the front vehicle, meanwhile, the intelligent wearable device can play related prompt audios for the user, such as 'automobile, white, license plate number XXXXXXX, distance of 20 meters in the front and avoiding', the user can hear the related prompt audios on the basis of seeing the second virtual image and the external real scene, and then can plan a route in advance or avoid the vehicle according to the self condition, so that the user experience is enhanced.
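A minimal sketch of the shape matching and distance prompt just described, assuming OpenCV contour matching as the comparison method; the match threshold and prompt wording are illustrative assumptions:

```python
# Minimal sketch (assumed library: OpenCV). The preset shapes are stored contours.
import cv2

def match_preset_shape(object_contour, preset_shapes, max_distance=0.15):
    """Return the name of the best-matching preset shape, or None if nothing matches."""
    best_name, best_score = None, max_distance
    for name, preset_contour in preset_shapes.items():
        score = cv2.matchShapes(object_contour, preset_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

def distance_prompt(shape_name, distance_m, attributes=""):
    """Compose the text carried by the second virtual image."""
    extra = f", {attributes}" if attributes else ""
    return f"{shape_name}{extra}, {distance_m:.0f} meters ahead, please avoid"

# e.g. distance_prompt("car", 20, "white, license plate XXXXXX")
```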
The processing described above can be performed not only by the processor of the intelligent wearable device but also by the processors of terminal devices, which can be connected to the intelligent wearable device by cable.
When the monocular camera 111 acquires the image of the environment around the user, if there is a person near the user, the intelligent wearable device may detect the identity information of that person and notify the user of it in the form of a virtual image. Specifically:
the monocular camera 111 acquires an external environment image around the user, transmits the external environment image to the processor, the processor can process the environment image by using a face analysis algorithm, analyzes character characteristic information contained in the environment image, specifically can extract characteristic information such as height, fat and thin, five sense organs characteristic and age of a character, and then traverses a preset database to determine whether preset characteristic information matched with the character characteristic information exists, the preset characteristic information refers to some prestored character characteristic information, the character characteristic information can be defined and stored by the user, the corresponding character characteristic information corresponds to corresponding character identity information one by one, the stored character is generally a character with known identity of the user, for example, the user can edit the characteristic information of all characters known by the user, and the character characteristic information and the corresponding identity information are stored in the preset database, and the communication information, the address information or the working information and other related information corresponding to the person can be stored in a preset database in a one-to-one correspondence manner. If the character feature information extracted from the environment image by the processor matches with a certain character feature information in the preset database, the processor extracts the identity information corresponding to the character feature information, and transmits the identity information to the two display modules 12, the two display modules 12 generate fourth light according to the identity information, the two display modules 12 transmit the fourth light to the perspective element 13, the fourth light can form a third virtual image containing the identity information, meanwhile, the external scene transmits second light which is also received by the perspective light guide element 13, the second light can form a real scene image containing the external scene, the perspective light guide element 13 synthesizes the fourth light and the second light, and then the synthesized light is transmitted to the left eye of the user through the inward surface of one perspective light guide element 13, and another synthesized light transmitted through the inward surface of another perspective light guide element 13 enters the right eye of the user And the eyes are used for forming an image formed by combining the third virtual image and the real scene image of the external scene in the head of the user. Therefore, the user can know the identity of the person according to the identity information in the third virtual image, even the communication information or the working information of the person, so that the user can make corresponding preparations in advance, and the experience of the user is improved. For example, referring to fig. 3c, as shown in fig. 
As shown in fig. 3c, there is a person near the user. The smart wearable device first obtains an image of the environment around the user, then processes and analyzes it with a face analysis algorithm, specifically analyzing the person feature information of the person in the environment image, and then matches this feature information against the preset person feature information in the database. The match identifies the person as Xiao XX, female, aged 35, the user's next-door neighbour. This identity information forms a third virtual image while the external scene forms a live-view image, and the two are presented in front of the user simultaneously. At the same time the smart wearable device may announce the identity information as audio, the specific content being "Xiao XX, female, 35 years old, next-door neighbour". Knowing the person's identity, the user can prepare in advance to deal with the situation, which improves the user experience.
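A minimal sketch of matching extracted person feature information against the preset database; the feature vectors, the Euclidean distance metric and the 0.6 threshold are assumptions, and only the retrieval of the corresponding identity information follows the text:

```python
# Minimal sketch: look up identity information for extracted person features.
import numpy as np

PRESET_DB = {
    # identity information           : preset feature vector (illustrative values)
    "Xiao XX, female, 35, next-door neighbour": np.array([0.12, 0.80, 0.33]),
}

def lookup_identity(person_features, preset_db=PRESET_DB, threshold=0.6):
    best_identity, best_dist = None, threshold
    for identity, preset_features in preset_db.items():
        dist = np.linalg.norm(person_features - preset_features)
        if dist < best_dist:
            best_identity, best_dist = identity, dist
    return best_identity  # None when no preset feature information matches
```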
In some embodiments, another distance sensor 119 may be disposed at a position where the smart wearable device contacts with the face of the user, for detecting whether the smart wearable device is worn on the head of the user. If the user takes off the intelligent wearable device, power can be saved by turning off the display module 12, the processor and the like.
The depth-of-field sensor 117 is disposed at the front of the head-mounted frame 11 and can directly obtain depth information of the environment. The depth-of-field sensor can obtain more accurate, higher-resolution depth data than the binocular/multi-view camera 112.
In some embodiments, when the intelligent wearable device implements the augmented reality function, the monocular camera 111 can be combined with computer vision techniques to detect markers whose positions in the environment are known, assisting the intelligent wearable device in positioning and obtaining its current position. The user may store the destination position in the memory in advance, or store several frequently visited addresses in advance and select the current destination from them. After obtaining the current position of the intelligent wearable device and the destination position, the processor can generate navigation information from the current position to the destination, including voice navigation and navigation prompt information and, optionally, map information. The processor then forms image information from the navigation information and transmits it to the display module 12. The display module 12 generates a fifth light ray according to the image information of the navigation information and transmits it to the see-through light guide element 13; the fifth light ray contains the fourth virtual image information generated from the navigation information. At the same time the external scene emits a second light ray, which is also received by the see-through light guide element 13 and can form a live-view image containing the external scene. The see-through light guide element 13 synthesizes the fifth light ray and the second light ray; the synthesized light is conducted to the user's left eye through the inward surface of one see-through light guide element 13, and another synthesized light ray conducted through the inward surface of the other see-through light guide element 13 enters the user's right eye, so that a composite image of the fourth virtual image and the live-view image of the external scene is formed in the user's brain. The user can therefore reach the destination faster according to the navigation information, which is convenient and gives a better experience. For example, as shown in fig. 3d, the intelligent wearable device first locates the user, then generates navigation prompt information according to the current position and the destination position and forms it into a fourth virtual image; the fourth virtual image and the live-view image containing the external scene are synthesized and presented to the user. Meanwhile, the intelligent wearable device may also announce the navigation prompt information as audio, for example "walk 50 steps forward, then turn right", so that the user can plan the route accordingly, which is convenient to use.
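A minimal sketch of turning the current position and destination position into voice navigation and navigation prompt information; planar coordinates, the 0.7 m step length and the prompt wording are illustrative assumptions, not values from the disclosure:

```python
# Minimal sketch: derive a navigation prompt from current and destination positions.
import math

def navigation_prompt(current_xy, destination_xy, heading_deg, step_m=0.7):
    dx = destination_xy[0] - current_xy[0]
    dy = destination_xy[1] - current_xy[1]
    distance_m = math.hypot(dx, dy)
    bearing_deg = math.degrees(math.atan2(dx, dy))        # bearing of the destination
    turn = (bearing_deg - heading_deg + 540) % 360 - 180  # signed turn angle
    direction = "right" if turn > 20 else "left" if turn < -20 else "straight ahead"
    steps = round(distance_m / step_m)
    if direction == "straight ahead":
        text = f"walk {steps} steps straight ahead"
    else:
        text = f"walk {steps} steps, then turn {direction}"
    # The same text drives the voice navigation and the fourth virtual image.
    return {"voice": text, "display": text}
```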
In some embodiments, the environment images and distance information captured by the binocular/multi-view camera 112 can be fused with the data of the gyroscope 114, accelerometer 115 and magnetometer 116 to obtain vital sign information of the user wearing the smart wearable device, together with pictures and/or videos of the surrounding environment. Vital signs indicate the patient's condition and its severity and mainly include heart rate, pulse, blood pressure, respiration, pain, blood oxygen, and changes in the pupils and corneal reflexes. The four major vital signs are respiration, body temperature, pulse and blood pressure; they are the pillars of normal bodily activity and are indispensable, any abnormality in them can lead to serious or fatal illness, and some diseases can also cause these four signs to change or deteriorate. The processor therefore obtains the user's vital sign information from the fused data and judges whether it is normal, the criterion being the vital sign values of a healthy person. The user can set a preset alarm condition in advance; when the user's vital sign information reaches the preset alarm condition, that is, when the vital signs are abnormal or an illness or accident occurs, the intelligent wearable device automatically raises an alarm to a hospital or to a pre-stored target contact. At the same time the processor decodes the video of the surrounding environment, obtains the geographical position information of the user, and packages and sends the user's geographical position and the video of the surrounding environment to a target party, which may be a hospital, a target contact and so on. Thus, when the user has an accident or a sudden illness, the processor of the intelligent wearable device judges that the vital sign information is abnormal, the intelligent wearable device automatically raises an alarm and sends the video of the surrounding environment captured by the camera to the target party, so that the target party can discover the user's emergency in time, find the user quickly from the video, understand what happened, and provide timely rescue. For example, as shown in fig. 3e, when the user is involved in a car accident, the intelligent wearable device detects that the user's vital sign information does not conform to normal values and reaches the preset alarm condition; it then sends the user's accident situation and the current video of the surrounding environment captured by the binocular/multi-view camera 112 to the hospital 120 for first aid and to the user's relatives. The content sent may be, for example, "the user has suffered a severe impact with an object, location XXXX, please see the attached video for the current situation", so that the user can be rescued in time and the details of the accident can be obtained quickly, which is convenient and provides a good user experience.
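A minimal sketch of the preset alarm condition check and the packaging of position and media for the target party; the normal ranges below are commonly cited adult resting values used only as an example, not figures from the disclosure:

```python
# Minimal sketch: compare vital signs against preset ranges and build the alert payload.
NORMAL_RANGES = {
    "respiration_per_min": (12, 20),
    "body_temperature_c": (36.0, 37.3),
    "pulse_per_min": (60, 100),
    "systolic_mmhg": (90, 140),
}

def abnormal_signs(vital_signs, ranges=NORMAL_RANGES):
    """Return the vital signs outside their normal range; a non-empty list triggers the alarm."""
    out = []
    for name, (low, high) in ranges.items():
        value = vital_signs.get(name)
        if value is not None and not (low <= value <= high):
            out.append(name)
    return out

def build_alert(user_location, media_refs, signs):
    """Package the geographical position and surrounding-environment media for the target party."""
    return {"location": user_location, "media": media_refs, "abnormal_signs": signs}
```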
The ambient light sensor 118 is disposed on the head-mounted frame 11 and monitors the intensity of ambient light in real time. The intelligent wearable device can adjust the brightness of the display module 12 in real time according to changes in the ambient light, so as to keep the display effect consistent under different ambient lighting conditions.
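A minimal sketch of this brightness adjustment, assuming a simple linear mapping from the ambient light sensor reading to a display brightness level; the constants and names below are illustrative and not taken from the text.

```python
def display_brightness(ambient_lux: float,
                       min_level: float = 0.15,
                       max_level: float = 1.0,
                       lux_at_max: float = 10000.0) -> float:
    """Map the ambient light reading to a display brightness level so the
    perceived contrast of the virtual image stays roughly constant."""
    ratio = max(0.0, min(ambient_lux / lux_at_max, 1.0))
    return min_level + (max_level - min_level) * ratio

for lux in (50, 500, 5000, 20000):   # indoor lighting up to bright daylight
    print(lux, round(display_brightness(lux), 2))
```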
Optionally, the intelligent wearable device further comprises infrared/near-infrared LEDs electrically connected to the main board 17 and used for providing a light source for the binocular/multi-view camera 112. Specifically, the infrared/near-infrared LEDs emit infrared light; when the infrared light reaches an object captured by the binocular/multi-view camera 112, the object reflects the infrared light back, and a photosensitive element on the binocular/multi-view camera 112 receives the reflected infrared light, converts it into an electrical signal, and then performs imaging processing.
Referring to fig. 4, the two display modules 12 are connected to the main board 17 through a cable.
The main board 17 is further provided with a camera, a video interface, a power interface, a communication chip and a memory.
The video interface is used for connecting a computer, a mobile phone or other devices to receive a video signal. The video interface may be HDMI, DisplayPort, Thunderbolt, USB Type-C, Micro-USB, MHL (Mobile High-Definition Link), or the like.
The power interface is used for receiving power from an external power supply or a battery, and comprises a USB interface or another type of interface.
The communication chip is used for data interaction with the outside through a communication protocol. Specifically, the communication chip connects to the internet through communication protocols such as WiFi, WCDMA and TD-LTE, and then acquires data through the internet or connects to other intelligent wearable devices; it may also connect directly to other intelligent wearable devices through a communication protocol.
The memory is used for storing data, mainly the display data to be shown on the display module 12.
When the intelligent wearable device only includes the head-mounted frame 11, the two display modules 12, the two perspective light guide elements 13, and the main board 17, all the rendering of the virtual scene and the generation of the image corresponding to the two eyes can be performed in the external device connected to the intelligent wearable device. The external device includes: computers, cell phones, tablet computers, and the like.
Specifically, the intelligent wearable device captures external image information through its camera, or receives external image or video information through the corresponding interface, decodes it, and displays it on the display module 12. The external device receives the data acquired by the sensors on the augmented-reality-based intelligent wearable device and, after processing the data, adjusts the images shown to the two eyes accordingly; the adjustment is reflected in the image displayed on the display module 12. In this configuration, the processor on the augmented-reality-based intelligent wearable device only supports the transmission and display of video signals and the transmission of sensor data.
Meanwhile, interaction with the user is carried out through application software on the external device such as a computer, mobile phone or tablet computer, and interaction with the intelligent wearable device can be carried out through a mouse, keyboard, touch pad or buttons on the external device. Applications of this basic structure include, but are not limited to, a large-screen portable display: the intelligent wearable device can project the display screen at a fixed location within the user's field of view, and the user adjusts the size, position and other parameters of the projected screen through software on the device connected to the intelligent wearable device.
Further, when the augmented-reality-based intelligent wearable device synthesizes the acquired real-scene image with the virtual image and displays the result, the display mode may be a first display mode, a second display mode, or a third display mode. The first display mode is a display mode in which neither the relative angle nor the relative position between the virtual image and the real image is fixed; the second display mode is a display mode in which both the relative angle and the relative position between the virtual image and the real image are fixed; the third display mode is a display mode in which the relative angle between the virtual image and the real image is fixed while the relative position is not.
The relationship between the first, second and third display modes and the real environment and the head of the user is shown in the following table:
                      Position relative   Angle relative     Position relative   Angle relative
                      to environment      to environment     to head             to head
First display mode    Not fixed           Not fixed          Fixed               Fixed
Second display mode   Fixed               Fixed              Not fixed           Not fixed
Third display mode    Not fixed           Fixed              Fixed               Not fixed
It should be noted that the "first display mode", "second display mode", or "third display mode" may be used in combination with different virtual images, and may be determined by system software or set by a user.
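The three modes can be summarized in a short sketch that decides, for each frame, the pose at which the virtual image is rendered. The two-dimensional pose representation, the Pose fields and the world anchor below are simplifications chosen for illustration; the text does not prescribe this data structure.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float          # position in the world (environment) frame
    y: float
    yaw_deg: float    # orientation in the world frame

def virtual_image_pose(mode: int, head: Pose, anchor: Pose) -> Pose:
    """Pose at which the virtual image is rendered, following the table above.

    Mode 1: position and angle follow the head (fixed relative to the head).
    Mode 2: position and angle stay on the world anchor (fixed in the environment).
    Mode 3: position follows the head, angle stays fixed in the environment.
    """
    if mode == 1:
        return Pose(head.x, head.y, head.yaw_deg)
    if mode == 2:
        return Pose(anchor.x, anchor.y, anchor.yaw_deg)
    if mode == 3:
        return Pose(head.x, head.y, anchor.yaw_deg)
    raise ValueError("unknown display mode")

# Hypothetical head pose and world anchor.
print(virtual_image_pose(3, Pose(1.0, 2.0, 30.0), Pose(5.0, 5.0, 0.0)))
```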
The embodiment of the invention provides an augmented-reality-based intelligent wearable device. The device obtains an external image through a camera and converts the original style of the content information in the external image into a preset style. The display module then emits a first light, which contains the content information in the preset style and can form a first virtual image; the first light, containing left-eye virtual image information and right-eye virtual image information, is transmitted into the user's two eyes through the inward surfaces of the two see-through light guide elements respectively. Meanwhile, the two see-through light guide elements receive a second light that can form a real image of the external scene. The first light and the second light are synthesized, so that a visual perception in which the first virtual image is fused with the external real scene is formed in the user's brain. Visual assistance can thus be provided to the user, with a large field of view, good user experience and convenient use.
As another aspect of the embodiments of the present invention, an augmented reality-based visual assistance apparatus is provided, which is a software system that runs on the processor of the intelligent wearable device shown in fig. 1a to 4. The augmented reality-based visual assistance apparatus includes a plurality of instructions stored in the memory; the processor can access the memory and call and execute these instructions to complete the control logic of augmented reality-based visual assistance.
As shown in fig. 5, the augmented reality-based visual aid 300 includes a first obtaining module 301 for obtaining an external image, wherein the external image includes content information; a first processing module 302, configured to process the external image using an image analysis algorithm to transform a native pattern of the content information into a preset pattern; an emitting module 303, configured to emit a first light ray, where the first light ray may form a first virtual image, and the first virtual image includes the content information of the preset pattern; a receiving module 304, configured to receive a second light ray, where the second light ray may form a live-action image, and the live-action image includes an external scene; a first combining module 305, configured to combine the first light ray and the second light ray, and present a combined image.
The augmented reality-based visual aid 300 acquires the external image captured by the camera and converts the original style of the content information in the external image into a preset style. It then emits a first light that contains the content information in the preset style and can form a first virtual image; the first light, containing left-eye virtual image information and right-eye virtual image information, is transmitted into the user's two eyes. Meanwhile, a second light capable of forming a real image of the external scene is received, and the first light and the second light are synthesized, so that a visual perception in which the first virtual image is fused with the external real scene is formed in the user's brain. The apparatus can therefore provide visual assistance that helps the user recognize the surrounding environment or objects more clearly, with a large field of view, good user experience and convenient use.
In some embodiments, please continue to refer to fig. 5, the first obtaining module 301 includes a tracking unit 3011 for tracking the head rotation angle and/or the eyeball rotation angle of the user; a first obtaining unit 3012, configured to obtain an external image in a second visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
With continued reference to fig. 5, the image analysis algorithm includes an image edge detection algorithm, and the first processing module 302 includes a determining unit 3021, configured to process the external image using the image edge detection algorithm to determine a first connected domain of each character of the content information in the external image; and a first rendering unit 3022, configured to render the first connected domain of each character, so that the original style of the content information is transformed into a preset style.
With continued reference to fig. 5, the first rendering unit 3022 includes a reverse selecting subunit 30221, configured to perform reverse selection processing on the first connected domains of all the characters in the external image to obtain a second connected domain of the non-character area in the content information; and a rendering subunit 30222, configured to render the original style of the second connected domain into the preset style.
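For illustration, the following sketch shows one conventional way to realize the unit chain just described using OpenCV: detect edges, merge them into connected domains that plausibly hold characters, invert the selection to obtain the non-character region, and repaint that region in a high-contrast preset style. The thresholds, kernel size and the black-on-white styling are assumptions made for the example, not the patented algorithm.

```python
import cv2
import numpy as np

def restyle_text_regions(bgr_image: np.ndarray) -> np.ndarray:
    """Illustrative re-styling: edge detection, connected domains for text,
    then repainting the non-character region (the second connected domain)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Dilate so the strokes of one character merge into a single component.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    merged = cv2.dilate(edges, kernel, iterations=2)
    n, labels = cv2.connectedComponents(merged)
    text_mask = labels > 0
    # Invert the selection: the non-character area becomes a plain white
    # background, character pixels are darkened to maximise contrast.
    out = np.full_like(bgr_image, 255)
    out[text_mask] = (0, 0, 0)
    return out

# Usage (hypothetical file names):
# styled = restyle_text_regions(cv2.imread("page.jpg"))
# cv2.imwrite("page_high_contrast.jpg", styled)
```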
With continued reference to fig. 5, the first synthesizing module 305 includes a tracking unit 3051 for tracking a position of a visual defect of an eyeball of a user wearing the intelligent wearable device; a presentation unit 3052, configured to present the first virtual image in a first view range corresponding to the non-vision-deficient position of the user.
With continued reference to fig. 5, the presentation unit 3052 includes a first determining subunit 30521, configured to determine coordinate information of the vision defect position of the user in a coordinate system of the smart wearable device; the second determining subunit 30522, configured to determine, according to the coordinate information, a first view range corresponding to the non-vision-defect position of the user; a first presentation subunit 30523, configured to present the first virtual image in the first view range.
In some embodiments, with continued reference to fig. 5, the augmented reality-based visual aid 300 further comprises a second obtaining module 306 for obtaining an environmental image; a detection module 307, configured to detect a distance between the smart wearable device and the object when an object image matching a preset shape is extracted from the environment image; a first generating module 308, configured to generate a third light ray according to the distance, where the third light ray may form a second virtual image, and the second virtual image includes distance prompt information; a second combining module 309, configured to combine the third light ray with the second light ray.
In some embodiments, with continuing reference to fig. 5, the augmented reality-based visual aid 300 further includes a second processing module 310 for processing the environment image using a face analysis algorithm to obtain character feature information; a first judging module 311, configured to judge whether preset feature information matching the person feature information exists in a preset database; a second generating module 312, configured to select identity information corresponding to the preset feature information, and generate a fourth light according to the identity information; a third synthesizing module 313, configured to synthesize the fourth light ray and the second light ray, where the fourth light ray may form a third virtual image, and the third virtual image includes the identity information.
In some embodiments, with continuing reference to fig. 5, the augmented reality-based visual assistance apparatus 300 further includes a third obtaining module 314 configured to obtain the current location and the destination location of the smart wearable device; a fourth generating module 315, configured to generate voice navigation information and navigation instruction information according to the current location and the destination location; a fifth generating module 316, configured to generate a fifth light according to the voice navigation information and the navigation instruction information, where the fifth light may form a fourth virtual image, and the fourth virtual image includes the navigation instruction information; a fourth combining module 317, configured to combine the fifth light ray with the second light ray.
In some embodiments, please continue to refer to fig. 5, the augmented reality-based visual assistance apparatus 300 further includes a fourth obtaining module 318 for obtaining physiological sign information of the user wearing the intelligent wearable device and a picture and/or a video of the surrounding environment; a second judging module 319, configured to judge whether the physiological characteristic information meets a preset alarm condition; a first sending module 320, configured to send the geographic location information of the user and the picture and/or video to a target party.
As another aspect of the embodiments of the present invention, an augmented reality-based visual assistance method is provided, which is applied to the intelligent wearable device. In addition to the software system of the augmented reality-based visual assistance apparatus described in fig. 5, the functions of the method can be implemented on a hardware platform. For example, the augmented reality-based visual assistance method may be performed in any suitable type of electronic device having a processor with computing capability, such as a single-chip microcomputer, a digital signal processor (DSP), or a programmable logic controller (PLC).
The functions corresponding to the augmented reality-based visual assistance method of the following embodiments are stored as instructions in the memory of the electronic device. When these functions are to be executed, the processor of the electronic device accesses the memory, and retrieves and executes the corresponding instructions to implement them.
The memory, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the augmented reality-based visual assistance apparatus 300 in the above embodiments (e.g., the modules and units illustrated in fig. 5), or the steps corresponding to the augmented reality-based visual assistance method in the following embodiments. By running the non-volatile software programs, instructions and modules stored in the memory, the processor executes the functional applications and data processing of the augmented reality-based visual assistance apparatus 300, that is, the functions of the modules and units of the apparatus 300 in the above embodiments, or the functions of the steps of the augmented reality-based visual assistance method in the following embodiments.
The memory may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules stored in the memory, when executed by the one or more processors, perform the augmented reality based visual aid method of any of the above method embodiments, e.g., perform the steps shown in fig. 6-15 described in the following embodiments; the functions of the various modules and units described with respect to fig. 5 may also be implemented.
As shown in fig. 6, the augmented reality-based visual assistance method 200 includes:
step 201, obtaining an external image, wherein the external image contains content information;
step 202, processing the external image by using an image analysis algorithm to transform the original style of the content information into a preset style;
step 203, emitting a first light ray, wherein the first light ray can form a first virtual image, and the first virtual image includes the content information of the preset pattern;
step 204, receiving a second light ray, wherein the second light ray can form a live-action image, and the live-action image comprises an external scene;
step 205, combining the first light and the second light to present a combined image.
By adopting the method, the external image is obtained and the original style of the content information in the external image is converted into a preset style. A first light is then emitted, which contains the content information in the preset style and can form a first virtual image; the first light, containing left-eye virtual image information and right-eye virtual image information, is transmitted into the user's two eyes. Meanwhile, a second light capable of forming a real image of the external scene is received, and the first light and the second light are synthesized, so that a visual perception in which the first virtual image is fused with the external real scene is formed in the user's brain. Visual assistance can thus be provided to help the user recognize the surrounding environment or objects more clearly, with a large field of view, good user experience and convenient use.
In some embodiments, as shown in FIG. 7, step 201 comprises:
Step 2011, tracking the head rotation angle and/or eyeball rotation angle of the user;
step 2012, an external image in a second visual field range corresponding to the head rotation angle and/or the eyeball rotation angle is acquired.
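As an illustration of steps 2011 and 2012, the sketch below crops the wide camera frame to a second visual field range centred on the gaze direction obtained from the head and eyeball rotation angles. The camera field of view, the crop width and the horizontal-only handling are assumptions made for brevity.

```python
import numpy as np

def crop_field_of_view(frame: np.ndarray,
                       head_yaw_deg: float,
                       eye_yaw_deg: float,
                       camera_fov_deg: float = 120.0,
                       crop_fov_deg: float = 40.0) -> np.ndarray:
    """Return the sub-image of the camera frame corresponding to the gaze
    direction (head rotation plus eyeball rotation), horizontally only."""
    h, w = frame.shape[:2]
    gaze = head_yaw_deg + eye_yaw_deg            # degrees, 0 = frame centre
    px_per_deg = w / camera_fov_deg
    centre = w / 2 + gaze * px_per_deg
    half = int(crop_fov_deg * px_per_deg / 2)
    left = int(np.clip(centre - half, 0, w - 2 * half))
    return frame[:, left:left + 2 * half]

# Hypothetical 120-degree frame; gaze 15 degrees to the right of straight ahead.
view = crop_field_of_view(np.zeros((720, 1280, 3), dtype=np.uint8), 10.0, 5.0)
print(view.shape)
```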
In some embodiments, as shown in fig. 8, step 202 further comprises:
step 2021, processing the external image by using the image edge detection algorithm, and determining a first connected domain of each character of content information in the external image;
step 2022, rendering the first connected domain of each text to transform the original style of the content information into a preset style.
In some embodiments, as shown in fig. 9, step 2022 comprises:
step 20221, performing reverse selection processing on the first connected domains of all the characters in the external image to obtain a second connected domain of the non-character region in the content information;
step 20222, rendering the original style of the second connected domain to the preset style.
In some embodiments, as shown in FIG. 10, step 205 comprises:
step 2051, tracking the position of the vision defect of the eyeball of the user wearing the intelligent wearable device;
and step 2052, presenting the first virtual image in a first view field range corresponding to the non-vision defect position of the user.
In some embodiments, as shown in fig. 11, step 2052 includes:
step 20521, determining coordinate information of the vision defect position of the user in a coordinate system of the intelligent wearable device;
step 20522, determining a first view range corresponding to the non-vision defect position of the user according to the coordinate information;
step 20523, presenting the first virtual image in the first field of view.
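Steps 20521 to 20523 can be illustrated with a simple rectangle-based placement: given the vision-defect region expressed in the device coordinate system, choose a first view range that does not overlap it. The corner-scan strategy and the Rect representation below are assumptions made for the example; the text does not specify how the first view range is chosen.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float      # left, in the device's display coordinate system
    y: float      # top
    w: float
    h: float

    def overlaps(self, other: "Rect") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def first_view_range(display: Rect, defect: Rect, image_w: float, image_h: float) -> Rect:
    """Pick a placement for the first virtual image that avoids the vision-defect
    region, by scanning the four corners of the display area."""
    candidates = [
        Rect(display.x, display.y, image_w, image_h),
        Rect(display.x + display.w - image_w, display.y, image_w, image_h),
        Rect(display.x, display.y + display.h - image_h, image_w, image_h),
        Rect(display.x + display.w - image_w, display.y + display.h - image_h,
             image_w, image_h),
    ]
    for rect in candidates:
        if not rect.overlaps(defect):
            return rect
    return candidates[0]   # fall back if the defect covers every corner

# Hypothetical display area, blind-spot region and virtual image size.
print(first_view_range(Rect(0, 0, 1920, 1080), Rect(800, 400, 300, 300), 600, 400))
```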
In some embodiments, as shown in fig. 12, the method 200 of augmented reality based visual assistance further comprises:
step 206, obtaining an environment image;
step 207, when an object image matched with a preset shape is extracted from the environment image, detecting the distance between the intelligent wearable device and the object;
step 208, generating a third light ray according to the distance, wherein the third light ray can form a second virtual image, and the second virtual image contains distance prompt information;
step 209, the third light and the second light are combined.
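As an illustration of steps 206 to 209, the sketch below uses a circle as a stand-in for the preset shape (detected with a Hough transform) and reads the object distance from a depth map such as one produced by the binocular/multi-view camera 112, then builds the distance prompt information. The shape choice, thresholds and variable names are assumptions made for the example.

```python
import cv2
import numpy as np

def circular_obstacle_prompt(gray_frame: np.ndarray, depth_m: np.ndarray):
    """Look for a preset shape (a circle here, purely as an example), then read
    the object distance from the depth map and build the distance prompt."""
    circles = cv2.HoughCircles(gray_frame, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=120, param2=40,
                               minRadius=10, maxRadius=200)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    distance = float(depth_m[int(y), int(x)])   # metres from stereo matching
    return f"object ahead, about {distance:.1f} m away"

# Hypothetical inputs: a blank frame and a uniform 2.5 m depth map.
frame = np.zeros((480, 640), dtype=np.uint8)
depth = np.full((480, 640), 2.5, dtype=np.float32)
print(circular_obstacle_prompt(frame, depth))   # -> None (no circle in a blank frame)
```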
In some embodiments, as shown in fig. 13, the method 200 of augmented reality based visual assistance further comprises:
step 210, processing the environment image by using a face analysis algorithm to obtain character characteristic information;
step 211, judging whether preset characteristic information matched with the character characteristic information exists in a preset database;
step 212, if so, selecting identity information corresponding to the preset feature information, generating a fourth light according to the identity information, and combining the fourth light and the second light, wherein the fourth light can form a third virtual image, and the third virtual image includes the identity information.
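Steps 210 to 212 can be pictured as matching a face feature vector against a preset database. The sketch below uses cosine similarity over placeholder embeddings; the embedding method, the threshold and the stored identities are assumptions, since the text only refers to a generic face analysis algorithm.

```python
import numpy as np

# Preset database: identity information keyed by a stored face feature vector.
# The 128-dimensional embeddings here are placeholders for whatever face
# analysis algorithm the device actually uses.
PRESET_DB = {
    "Zhang Wei (colleague)": np.random.default_rng(0).standard_normal(128),
    "Li Na (neighbour)":     np.random.default_rng(1).standard_normal(128),
}

def identify(face_feature: np.ndarray, threshold: float = 0.8):
    """Return the identity whose preset feature best matches, or None."""
    best_name, best_score = None, threshold
    for name, stored in PRESET_DB.items():
        score = float(np.dot(face_feature, stored) /
                      (np.linalg.norm(face_feature) * np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A feature close to the first stored entry should be recognised.
probe = PRESET_DB["Zhang Wei (colleague)"] + 0.05 * np.random.default_rng(2).standard_normal(128)
print(identify(probe))
```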
In some embodiments, as shown in fig. 14, the method 200 of augmented reality based visual assistance further comprises:
step 213, obtaining a current position and a destination position of the intelligent wearable device;
step 214, generating voice navigation information and navigation instruction information according to the current position and the destination position;
step 215, generating a fifth light according to the voice navigation information and the navigation instruction information, wherein the fifth light can form a fourth virtual image, and the fourth virtual image includes the navigation instruction information;
step 216, the fifth light and the second light are combined.
In some embodiments, as shown in fig. 15, the method 200 of augmented reality based visual assistance further comprises:
step 217, acquiring physiological sign information of a user wearing the intelligent wearing equipment and pictures and/or videos of the surrounding environment;
step 218, judging whether the physiological characteristic information meets a preset alarm condition;
and step 219, if so, sending the geographical position information of the user and the picture and/or the video to a target party.
Since the apparatus embodiment and the method embodiment are based on the same concept, the contents of the method embodiment may refer to the apparatus embodiment on the premise that the contents do not conflict with each other, and are not described herein again.
As yet another aspect of the embodiments of the present invention, the embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for causing an electronic device to perform the augmented reality-based visual assistance method described in any of the above, for example, to perform the augmented reality-based visual assistance method in any of the above method embodiments, or to implement the functions of the augmented reality-based visual assistance apparatus in any of the above apparatus embodiments.
By executing these instructions, the external image is obtained and the original style of its content information is converted into a preset style; a first light containing the content information in the preset style is emitted and can form a first virtual image, with left-eye and right-eye virtual image information transmitted into the user's two eyes; meanwhile, a second light forming a real image of the external scene is received, and the two are synthesized so that the user perceives the first virtual image fused with the external real scene. Visual assistance is thus provided to help the user recognize the surrounding environment or objects more clearly, with a large field of view, good user experience and convenient use.
It should be noted that the description of the present invention and the accompanying drawings illustrate preferred embodiments of the invention, but the invention may be embodied in many different forms and is not limited to the embodiments described in this specification; these embodiments are not intended as additional limitations on the invention, but are provided so that the present disclosure will be understood more thoroughly. Moreover, the above technical features may be combined with each other to form various embodiments not listed above, and all such embodiments are regarded as within the scope of the invention described in this specification. Further, modifications and variations will occur to those skilled in the art in light of the foregoing description, and all such modifications and variations are intended to fall within the scope of the invention as defined by the appended claims.

Claims (9)

1. A visual assistance method based on augmented reality is applied to intelligent wearable equipment and is characterized in that the method comprises the following steps:
acquiring an external image, wherein the external image contains content information;
processing the external image using an image analysis algorithm to transform a native pattern of the content information into a preset pattern;
emitting a first light ray, wherein the first light ray can form a first virtual image, and the first virtual image comprises the content information of the preset pattern;
receiving a second light ray, wherein the second light ray can form a real image, and the real image comprises an external scene;
synthesizing the first light ray and the second light ray to present a synthesized image;
the presenting the synthesized image includes:
tracking a vision defect position of eyeballs of a user wearing the intelligent wearable device, wherein the vision defect position is a vision blind spot area;
presenting the first virtual image in a first visual field range corresponding to the non-vision defect position of the user;
the presenting the first virtual image in a first visual field range corresponding to a non-vision-deficient location of the user comprises:
determining coordinate information of the vision defect position of the user in a coordinate system of the intelligent wearable device;
determining a first visual field range corresponding to the non-vision defect position of the user according to the coordinate information;
presenting the first virtual image at the first field of view.
2. The method of claim 1, wherein the image analysis algorithm comprises an image edge detection algorithm, and wherein the processing the external image using the image analysis algorithm to transform the native pattern of the content information into a preset pattern comprises:
processing the external image by using the image edge detection algorithm, and determining a first connected domain of each character of the content information in the external image;
and rendering the first connected domain of each character so as to transform the original style of the content information into a preset style.
3. The method of claim 2, wherein the rendering the first connected domain of each character to transform the native style of the content information into a preset style comprises:
performing reverse selection processing on the first connected domains of all characters in the external image to obtain a second connected domain of a non-character region in the content information;
rendering the native style of the second connected domain to the preset style.
4. The method of any of claims 1 to 3, wherein the acquiring an external image comprises:
tracking a head rotation angle and/or an eyeball rotation angle of the user;
and acquiring an external image in a second visual field range corresponding to the head rotation angle and/or the eyeball rotation angle.
5. The method of any of claims 1 to 3, further comprising:
acquiring an environment image;
when an object image matched with a preset shape is extracted from the environment image, detecting the distance between the intelligent wearable device and the object;
generating a third light ray according to the distance, wherein the third light ray can form a second virtual image, and the second virtual image contains distance prompt information;
and synthesizing the third light ray and the second light ray.
6. The method of claim 5, further comprising:
processing the environment image by using a human face analysis algorithm to obtain character characteristic information;
judging whether preset characteristic information matched with the figure characteristic information exists in a preset database or not;
if yes, selecting identity information corresponding to the preset feature information, generating fourth light according to the identity information, and performing synthesis processing on the fourth light and the second light, wherein the fourth light can form a third virtual image, and the third virtual image contains the identity information.
7. The method of any of claims 1 to 3, further comprising:
acquiring the current position and the destination position of the intelligent wearable device;
generating voice navigation information and navigation indication information according to the current position and the destination position;
generating a fifth light according to the voice navigation information and the navigation instruction information, wherein the fifth light can form a fourth virtual image, and the fourth virtual image comprises the navigation instruction information;
and combining the fifth light ray and the second light ray.
8. The method of any of claims 1 to 3, further comprising:
acquiring physiological sign information of a user wearing the intelligent wearing equipment and pictures and/or videos of the surrounding environment;
judging whether the physiological sign information meets a preset alarm condition or not;
and if so, sending the geographical position information of the user and the picture and/or the video to a target party.
9. An intelligent wearable device, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the augmented reality based visual aid method of any one of claims 1 to 8.
CN201911289930.2A 2019-12-16 2019-12-16 Visual assistance method based on augmented reality and intelligent wearable device Active CN110708533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911289930.2A CN110708533B (en) 2019-12-16 2019-12-16 Visual assistance method based on augmented reality and intelligent wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911289930.2A CN110708533B (en) 2019-12-16 2019-12-16 Visual assistance method based on augmented reality and intelligent wearable device

Publications (2)

Publication Number Publication Date
CN110708533A CN110708533A (en) 2020-01-17
CN110708533B true CN110708533B (en) 2020-04-14

Family

ID=69193243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911289930.2A Active CN110708533B (en) 2019-12-16 2019-12-16 Visual assistance method based on augmented reality and intelligent wearable device

Country Status (1)

Country Link
CN (1) CN110708533B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111343449B (en) * 2020-03-06 2022-06-07 杭州融梦智能科技有限公司 Augmented reality-based display method and intelligent wearable device
CN111310713B (en) * 2020-03-06 2023-05-30 杭州融梦智能科技有限公司 Goods sorting method based on augmented reality and intelligent wearable equipment
CN111127822B (en) * 2020-03-27 2020-06-30 杭州融梦智能科技有限公司 Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment
CN112426333A (en) * 2020-11-24 2021-03-02 杭州集视智能科技有限公司 Auxiliary stereoscopic vision electronic equipment for hemianopsia patients and control method thereof
CN113163134A (en) * 2021-04-21 2021-07-23 山东新一代信息产业技术研究院有限公司 Harsh environment vision enhancement method and system based on augmented reality
CN115240820A (en) * 2021-04-23 2022-10-25 中强光电股份有限公司 Wearable device and method for adjusting display state based on environment
CN113377210B (en) * 2021-07-19 2023-12-01 艾视雅健康科技(苏州)有限公司 Image display method, head-mounted electronic auxiliary vision device and readable medium
CN113556517A (en) * 2021-09-06 2021-10-26 艾视雅健康科技(苏州)有限公司 Portable vision auxiliary device, intelligent equipment and head-mounted vision auxiliary equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598180A (en) * 2014-04-02 2015-05-06 北京智谷睿拓技术服务有限公司 Display control method, display control device and near-to-eye display devices
CN108519676A (en) * 2018-04-09 2018-09-11 杭州瑞杰珑科技有限公司 A kind of wear-type helps view apparatus
CN110062101A (en) * 2018-04-10 2019-07-26 努比亚技术有限公司 A kind of wearable device control method, wearable device and computer readable storage medium
CN209297034U (en) * 2018-12-22 2019-08-23 深圳梦境视觉智能科技有限公司 A kind of augmented reality display equipment
CN110419063A (en) * 2017-03-17 2019-11-05 麦克赛尔株式会社 AR display device and AR display methods

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180293980A1 (en) * 2017-04-05 2018-10-11 Kumar Narasimhan Dwarakanath Visually impaired augmented reality

Also Published As

Publication number Publication date
CN110708533A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110708533B (en) Visual assistance method based on augmented reality and intelligent wearable device
CN112130329B (en) Head-mounted display device and method for controlling head-mounted display device
US20220147139A1 (en) User interface interaction paradigms for eyewear device with limited field of view
WO2014128749A1 (en) Shape recognition device, shape recognition program, and shape recognition method
US20240007733A1 (en) Eyewear determining facial expressions using muscle sensors
CN105684074A (en) Image display device and image display method, image output device and image output method, and image display system
US11575877B2 (en) Utilizing dual cameras for continuous camera capture
WO2014128752A1 (en) Display control device, display control program, and display control method
WO2014128747A1 (en) I/o device, i/o program, and i/o method
CN110998666B (en) Information processing device, information processing method, and program
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
CN111710050A (en) Image processing method and device for virtual reality equipment
US11774764B2 (en) Digital glasses having display vision enhancement
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
CN111127822B (en) Augmented reality-based fire-fighting auxiliary method and intelligent wearable equipment
JP2021021889A (en) Display device and method for display
KR20230025697A (en) Blind Assistance Eyewear with Geometric Hazard Detection
KR20180109669A (en) Smart glasses capable of processing virtual objects
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
US20190028690A1 (en) Detection system
US11789294B2 (en) Eyewear frame as charging contact
CN111343449B (en) Augmented reality-based display method and intelligent wearable device
US11200713B2 (en) Systems and methods for enhancing vision
JP2017189498A (en) Medical head-mounted display, program of medical head-mounted display, and control method of medical head-mounted display
WO2016051429A1 (en) Input/output device, input/output program, and input/output method

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant