CN110706164A - Tubular visual field image deformation display method and glasses based on augmented reality - Google Patents


Info

Publication number
CN110706164A
CN110706164A (application CN201910829040.XA)
Authority
CN
China
Prior art keywords
image
live
user
visual field
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910829040.XA
Other languages
Chinese (zh)
Inventor
张志扬
苏进
于勇
李琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aibo Tongxin Medical Technology Co Ltd
Original Assignee
Beijing Aibo Tongxin Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibo Tongxin Medical Technology Co Ltd filed Critical Beijing Aibo Tongxin Medical Technology Co Ltd
Priority to CN201910829040.XA priority Critical patent/CN110706164A/en
Publication of CN110706164A publication Critical patent/CN110706164A/en
Pending legal-status Critical Current


Classifications

    • G06T5/80
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • G06T5/73
    • G06T5/90
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Abstract

The invention provides an augmented reality-based tubular visual field image deformation display method and augmented reality glasses. The method comprises the following steps: acquiring a live-action image reflecting what the user's visual field sees; recalling a visual field image defect mode that records the defect area of the user's visual field; deforming the live-action image according to the visual field image defect mode to obtain a processed image that the user can see in full; and outputting the processed image. Because the live-action image is deformed according to the visual field image defect mode, the patient can see the entire content of the live-action image, which improves the patient's visual experience and greatly improves the patient's quality of life.

Description

Tubular visual field image deformation display method and glasses based on augmented reality
Technical Field
The invention relates to the technical field of image processing, and in particular to an augmented reality-based tubular visual field image deformation display method and augmented reality glasses.
Background
Augmented Reality (AR) is a technology that fuses the virtual world with the real world by computing the position and angle of an image in real time and superimposing corresponding images, video, and 3D models onto it. An AR client can use picture-recognition material stored locally on the client to recognize the user's offline environment in real time and, at the position of a recognized offline target in the real scene, display the corresponding data in augmented form with a pre-configured display effect.
A visual field defect is an impairment of the visual field, i.e., of the visual range beyond the fovea of the macula: the spatial range visible while the eyeball is fixed and looking straight ahead. A visual field defect may indicate that the patient has a disease, and it is regarded as a serious condition in ophthalmology. How a visual field defect is perceived differs greatly from person to person, ranging from slight to severe. It has been found that glaucomatous visual field defects appear as peripheral defects in the early stages and central defects in the later stages, but some studies have shown that paracentral scotomas can also occur early in glaucoma; in the Ocular Hypertension Treatment Study, 16% of patients' initial visual field defects were paracentral. Tubular visual field defects produce blind zones in the visual field and are often seen in patients with high myopia and in the elderly.
To spare patients the trouble caused by visual field defects, detection of the defect area is critical; however, no detection scheme for the defect area is found in the prior art.
Disclosure of Invention
In view of the above problems in the prior art, the present invention aims to provide an augmented reality-based tubular visual field image deformation display method and augmented reality glasses, so as to process the tubular visual field defect region in continuous images and provide a better visual experience for patients.
The invention discloses an augmented reality-based tubular visual field image deformation display method, which comprises the following steps:
acquiring a live-action image reflecting what the user's visual field sees;
recalling a visual field image defect mode that records the defect area of the user's visual field;
deforming the live-action image according to the visual field image defect mode to obtain a processed image that the user can see in full;
and outputting the processed image.
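The four claimed steps can be sketched as a minimal pipeline. Everything below is illustrative: the helper names (`acquire_live_image`, `load_defect_mask`) and the column-band deformation are assumptions for the sketch, not the patent's implementation, which leaves the deformation strategy open.

```python
import numpy as np

def acquire_live_image(h=8, w=8):
    # Hypothetical stand-in for the camera capture step.
    return np.arange(h * w, dtype=float).reshape(h, w)

def load_defect_mask(h=8, w=8):
    # Hypothetical stand-in for the stored visual field image defect mode:
    # True marks pixels inside the defect area.
    mask = np.zeros((h, w), dtype=bool)
    mask[:, : w // 2] = True          # assume the left half is defective
    return mask

def deform_into_visible(image, defect_mask):
    # Naive deformation: resample every source column into the columns
    # that contain no defective pixels, leaving the defect area blank.
    visible_cols = np.flatnonzero(~defect_mask.any(axis=0))
    src = np.linspace(0, image.shape[1] - 1, len(visible_cols)).round().astype(int)
    out = np.zeros_like(image)
    out[:, visible_cols] = image[:, src]
    return out

frame = acquire_live_image()                   # step (1): acquire the live-action image
mask = load_defect_mask()                      # step (2): recall the defect mode
processed = deform_into_visible(frame, mask)   # step (3): deform
# step (4) would output `processed` to the near-eye display
```

Here the entire 8-column frame survives inside the 4 visible columns, at the cost of horizontal compression.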
Further, acquiring the live-action image reflecting what the user's visual field sees comprises:
collecting the live-action image centered on the user's natural line of sight.
Further, deforming the live-action image according to the visual field image defect mode to obtain a processed image that the user can see in full comprises:
deforming the live-action image into a visible area lying entirely outside the defect area of the live-action image.
Further, deforming the live-action image according to the visual field image defect mode to obtain a processed image that the user can see in full comprises:
compressing the live-action image in size to obtain a compressed image, and moving the compressed image into the visible area outside the defect area of the live-action image.
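This compress-and-move step could look like the following sketch (nearest-neighbour scaling on a grayscale array; the function name and parameters are illustrative, not from the patent):

```python
import numpy as np

def compress_and_move(image, scale, top_left):
    # Shrink `image` by `scale` with nearest-neighbour sampling, then paste
    # the compressed copy at `top_left` on a blank canvas of the original
    # size; `top_left` would be chosen inside the visible area.
    h, w = image.shape
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(nh) / scale).astype(int)
    cols = (np.arange(nw) / scale).astype(int)
    small = image[np.ix_(rows, cols)]
    canvas = np.zeros_like(image)
    r0, c0 = top_left
    canvas[r0:r0 + nh, c0:c0 + nw] = small
    return canvas

img = np.arange(16, dtype=float).reshape(4, 4)
shrunk = compress_and_move(img, 0.5, (0, 2))   # paste into the right half
```

With `scale=0.5` and a `top_left` inside the visible area, the whole live-action content ends up outside the defect area.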
Further, acquiring the live-action image reflecting what the user's visual field sees comprises:
continuously acquiring a plurality of live-action images reflecting what the user's visual field sees;
and deforming the live-action images according to the visual field image defect mode to obtain processed images that the user can see in full comprises:
deforming the plurality of continuous live-action images according to the visual field image defect mode to obtain a plurality of continuous processed images that the user can see in full.
Further, the method also comprises a method of acquiring the visual field image defect mode, comprising the following steps:
collecting a detection image reflecting the user's visual field;
displaying the detection image;
labeling the defect area that the user sees in the detection image;
and saving the labeling result as the visual field image defect mode.
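Saving and recalling the labeling result might look like this sketch, which stores the defect mask as JSON; the file format and helper names are assumptions, since the patent only specifies that the result is saved as a visual field image defect mode:

```python
import json
import tempfile
import numpy as np

def save_defect_mode(mask, path):
    # Persist the labelled defect area as flat indices of defective pixels.
    with open(path, "w") as f:
        json.dump({"shape": list(mask.shape),
                   "defect": np.flatnonzero(mask).tolist()}, f)

def load_defect_mode(path):
    # Recall the stored visual field image defect mode as a boolean mask.
    with open(path) as f:
        d = json.load(f)
    flat = np.zeros(int(np.prod(d["shape"])), dtype=bool)
    flat[d["defect"]] = True
    return flat.reshape(d["shape"])

labelled = np.zeros((4, 4), dtype=bool)
labelled[1:3, 1:3] = True                 # result of the labeling step
path = tempfile.mktemp(suffix=".json")
save_defect_mode(labelled, path)
recalled = load_defect_mode(path)
```

Storing only the defective indices keeps the record small and lets the mode be recalled at any time, matching the "can be called" requirement on the database unit.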
Further, labeling the defect area that the user sees in the image comprises:
labeling the edge of the defect area with the image enlarged.
In addition, the invention also discloses augmented reality glasses, comprising:
an image acquisition unit for acquiring a live-action image reflecting what the user's visual field sees;
a database unit for storing a recallable visual field image defect mode that records the defect area of the user's visual field;
an image processing unit for recalling the visual field image defect mode and deforming the live-action image to obtain a processed image that the user can see in full;
and an image display unit for outputting the processed image.
Further, the image acquisition unit collects the live-action image centered on the user's natural line of sight.
Further, the image processing unit deforms the live-action image into a visible region lying entirely outside the defect region of the live-action image.
Further, the image processing unit compresses the live-action image in size to obtain a compressed image and moves the compressed image into the visible region of the live-action image.
Further, the image acquisition unit continuously acquires a plurality of live-action images reflecting what the user's visual field sees;
and the image processing unit deforms the plurality of continuous live-action images according to the visual field image defect mode to obtain a plurality of continuous processed images that the user can see in full.
Further, the image acquisition unit also collects a detection image reflecting the user's visual field;
the image display unit displays the detection image;
a control unit labels the defect area that the user sees in the detection image;
and the database unit stores the labeling result as the visual field image defect mode.
Further, the glasses comprise an image processing unit for enlarging the image before the control unit performs the labeling.
Further, the control unit includes:
a touch pad for controlling the movement of a cursor displayed on the image;
and a marking key for marking the position on the image corresponding to the cursor.
The invention has at least the following beneficial effects:
the live-action image is deformed according to the visual field image defect mode to obtain a processed image that the user can see in full, so that the content falling in the defect area of continuous live-action images is better displayed, the patient's visual experience is improved, and the patient's quality of life is markedly improved.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention.
In the drawings:
fig. 1 is a flowchart of a method for displaying a deformed tubular-field image based on augmented reality according to an embodiment of the present invention;
FIG. 2 is a region division diagram of a live-action image, reflecting what the user's visual field sees, acquired in the method according to the embodiment of the present invention;
fig. 3 is a region division diagram of an image processed by the method according to the embodiment of the present invention.
Fig. 4 is a flowchart of a method for detecting deformation of an augmented reality-based tubular-view image according to an embodiment of the present invention;
FIG. 5 is a region division diagram of a detected image acquired and obtained in the method according to the embodiment of the present invention;
FIG. 6 is a region division diagram of a detected image labeled with a defect region in the method according to the embodiment of the present invention;
FIG. 7 is a region division diagram of the enlarged detection image in the method according to the embodiment of the present invention;
fig. 8 is a schematic structural diagram of a control unit in the augmented reality glasses according to the embodiment of the present invention.
Description of the reference numerals
1: cursor; 2: touch pad; 3: marking key
Detailed Description
The embodiments of the present invention, and the features of the embodiments, may be combined with each other as long as they do not conflict.
Example one
As shown in fig. 1 to 3, the invention discloses an augmented reality-based tubular visual field image deformation display method, which comprises the following steps:
(1) first, acquire a live-action image reflecting what the user's visual field sees;
the live-action images change synchronously with the rotation of the user's head or eyeballs, so the acquired images truly reflect the user's actual visual field; for an AR device, they can be acquired through the device's built-in camera;
(2) recall a visual field image defect mode that records the defect area of the user's visual field;
the visual field image defect mode stores a pre-labeled defect area and can be recalled at any time;
(3) deform the live-action image according to the visual field image defect mode to obtain a processed image in which the entire content of the live-action image is visible to the user;
(4) output the processed image;
in this way even patients with low vision obtain the whole content of the image; the image can be output in a near-eye display mode.
The method deforms the live-action image according to the visual field image defect mode to obtain a processed image that the user can see in full, so that the content falling in the defect area of continuous live-action images is better displayed, the patient's visual experience is improved, and the patient's quality of life is markedly improved.
In some embodiments of the present invention, a single camera or multiple cameras may be used to acquire the live-action image reflecting the user's visual field; to ensure the accuracy of subsequent detection, the cameras acquire the live-action image centered on the user's natural line of sight, ensuring that the image truly reflects the user's visual field.
In some embodiments of the present invention, the live-action image is deformed into a visible region lying entirely outside the defect region of the live-action image; specifically, all or part of the live-action image may be deformed to bypass the defect region, so that the entire content of the live-action image falls within the corresponding visible region, which improves the user's visual experience.
Preferably, as shown in fig. 3, the live-action image is compressed in size to obtain a compressed image, and the compressed image is moved into the visible region outside the defect region of the live-action image, yielding the processed image. Preferably, a compression center point is selected, and the compression and the repositioning of the processed image are completed together.
In some embodiments of the present invention, in addition to still pictures, the invention can also process video composed of continuous live-action images, further improving the patient's visual experience. This includes: continuously acquiring a plurality of continuous live-action images reflecting what the user's visual field sees; and deforming the plurality of continuous live-action images according to the visual field image defect mode to obtain a plurality of continuous processed images that the user can see in full. The invention saves the data of the labeled defect area as a tubular visual field defect mode in the database unit, automatically adjusts subsequent continuous live-action images, and, over long-term use, draws on the user's usage records to output better continuous images.
As shown in fig. 4, the present embodiment discloses an augmented reality-based tubular visual field image deformation detection method, by which the visual field image defect mode used for the deformation processing can be obtained; it specifically comprises the following steps:
(1) collect a detection image reflecting the user's visual field;
for an AR device, the detection image can be collected through the device's built-in camera;
(2) display the detection image;
preferably, the detection image is displayed in the near-eye display mode commonly used in AR devices; at this point the displayed field of view contains one or more defect regions to be labeled;
(3) label the defect area that the user sees in the detection image;
the defect area can be labeled manually according to the actual condition of the individual; the labeling result is highly individualized and reflects the user's visual field defect more accurately;
(4) save the labeling result as the visual field image defect mode.
Steps (1) to (4) determine the user's tubular visual field defect mode, which enables a data-processing device with image-processing capability to determine the defect area of a live-action image to be processed; this is an indispensable precondition for the subsequent deformation processing of the live-action image into a processed image.
In some embodiments of the present invention, as shown in fig. 5, a single camera or multiple cameras may be used to collect the detection image reflecting the user's visual field; to ensure the accuracy of subsequent detection, the cameras collect the detection image centered on the user's natural line of sight, ensuring that the detection image truly reflects the user's visual field.
In some embodiments of the present invention, the defect area may be labeled in various ways, as long as the relevant processing unit can identify it. For example, for a circular visual field defect, the center and the radius or diameter of the defect area may be labeled. In practice, however, each patient's defect area is different and mostly irregular, so the present invention preferably adopts edge labeling: the edge where the defect area meets the visible area in the detection image, i.e., the outer edge of the defect area, is labeled, and labeling all outer edges forms a closed region, which is the defect area. This labeling method adapts well to defect areas of various shapes, is relatively simple to operate, and is suitable for manual labeling by users.
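One way to turn a labelled outline into the closed defect area is an even-odd (ray-casting) fill. This is only a sketch of the idea; the patent does not prescribe how the closed region is computed from its labelled edges.

```python
import numpy as np

def outline_to_mask(vertices, shape):
    # Fill a closed outline (list of (row, col) vertices in labelling order)
    # into a boolean defect mask via per-pixel even-odd ray casting.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    n = len(vertices)
    for i in range(n):
        (y0, x0), (y1, x1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:
            continue                      # horizontal edges never cross the ray
        crosses = (ys >= min(y0, y1)) & (ys < max(y0, y1))
        x_at_y = x0 + (ys - y0) * (x1 - x0) / (y1 - y0)
        mask ^= crosses & (xs < x_at_y)   # toggle parity left of the edge
    return mask

# an irregular (here: square, for brevity) defect outline on a 6x6 detection image
defect = outline_to_mask([(1, 1), (1, 4), (4, 4), (4, 1)], (6, 6))
```

The even-odd rule handles arbitrary simple polygons, which suits the patent's observation that real defect areas are mostly not regular shapes.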
In some embodiments of the present invention, the detection image may be locally enlarged to streamline the user's labeling process and obtain a more accurate labeling result. In the embodiment shown in fig. 6 and 7, the detection image is divided into four regions: a first, second, third, and fourth quadrant. The third quadrant is enlarged so that the edge to be labeled is more prominent and visible to the user; preferably it is enlarged to the size of the original detection image or to fill the display showing the original detection image. The user then labels the defect area shown in the enlarged third quadrant. This makes the user's labeling process more convenient and accurate.
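The quadrant magnification could be implemented as below (nearest-neighbour upscaling; the quadrant numbering is assumed to follow the usual mathematical convention, with the third quadrant at the lower left, which the patent does not state explicitly):

```python
import numpy as np

def magnify_quadrant(image, quadrant):
    # Enlarge one quadrant of the detection image to the full image size
    # using nearest-neighbour upscaling.
    h, w = image.shape
    h2, w2 = h // 2, w // 2
    r0 = 0 if quadrant in (1, 2) else h2   # quadrants 1/2 on top, 3/4 below
    c0 = w2 if quadrant in (1, 4) else 0   # quadrants 1/4 on the right
    sub = image[r0:r0 + h2, c0:c0 + w2]
    rows = np.arange(h) * h2 // h
    cols = np.arange(w) * w2 // w
    return sub[np.ix_(rows, cols)]

img = np.arange(16, dtype=float).reshape(4, 4)
zoomed = magnify_quadrant(img, 3)          # lower-left quadrant at full size
```

Each pixel of the chosen quadrant is doubled in both directions, so the edge to be labeled occupies the whole display.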
In some embodiments of the present invention, because the patient's vision is poor, a detection image that is blurry in its own content will make labeling inconvenient and distort the result; in particular, local enlargement of the detection image may reduce its resolution. This embodiment therefore discloses a method for optimizing the sharpness of the detection image: the brightness, contrast, and sharpness of the original or enlarged detection image are adjusted, either one of them individually or all of them together. Adjusting brightness, contrast, and sharpness makes the edge where the defect region meets the visible region more prominent, helping the user label it more accurately.
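A sketch of the three adjustments on a float image in [0, 1]: additive brightness, contrast scaled about the mean, and sharpening by unsharp masking with a 4-neighbour Laplacian. These formulas are common choices, not taken from the patent.

```python
import numpy as np

def adjust(image, brightness=0.0, contrast=1.0, sharpness=0.0):
    # brightness: additive shift; contrast: scale about the global mean;
    # sharpness: subtract a Laplacian to emphasise edges (unsharp masking).
    mean = image.mean()
    out = (image - mean) * contrast + mean + brightness
    if sharpness:
        lap = -4.0 * image
        lap[:-1, :] += image[1:, :]
        lap[1:, :] += image[:-1, :]
        lap[:, :-1] += image[:, 1:]
        lap[:, 1:] += image[:, :-1]
        out = out - sharpness * lap
    return np.clip(out, 0.0, 1.0)
```

Each parameter can be applied individually or combined in one call, mirroring the "one of them separately or all together" option in the text.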
The augmented reality-based tubular visual field image deformation display method of the present invention can be applied to an augmented reality device; because augmented reality devices such as AR glasses are portable and have strong data-processing capability, they realize the method particularly well.
Based on the above embodiments, a specific procedure of the tubular visual field image deformation display method on augmented reality glasses is disclosed, specifically comprising:
the user first wears naturally shaped glasses (AR glasses) comprising an image acquisition unit, a control unit, an image processing unit, and an image display unit (a light-transmissive near-eye display);
the user faces the head and eyes toward the real environment to be seen clearly;
the image acquisition unit collects continuous live-action images centered on the user's natural line of sight;
the image acquisition unit continues to collect live-action images synchronously as the user's head and eyes move;
the continuous live-action images are processed by the image processing unit as follows: first, the visual field image defect mode stored in the database unit, which reflects the user's vision condition, is recalled; then the live-action image is deformed according to the visual field image defect mode, preferably, as shown in fig. 3, by compressing the size of the live-action image so that it is displayed entirely within the patient's visible area, yielding the processed image; finally, the computed processed image is output to the image display unit, preferably a light-transmissive near-eye display.
Through the above process, the user can continuously observe the whole content of the live-action image instead of a tubular, limited, continuous image. As the user's head moves the AR glasses, the image processing unit continuously outputs processed images to the image display unit, continuously assisting the user's vision.
Preferably, the user may further zoom the processed image to obtain a better viewing experience.
Preferably, while using the AR glasses, the user can use the touch pad of the control unit to call up the cursor 1 at any time, correct and supplement the visual field defect mode for areas that are not displayed well, and record and update the visual field defect mode data stored in the database unit. Video formed from continuous live-action images can be processed by repeating the above process.
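The continuous processing loop of the glasses can be modelled as a generator that applies one stored defect mode to every captured frame. The `deform` callable stands in for whichever deformation the image processing unit uses; all names here are illustrative.

```python
import numpy as np

def process_stream(frames, defect_mask, deform):
    # Apply the same visual field image defect mode to every live-action
    # frame, yielding processed frames for the near-eye display as they arrive.
    for frame in frames:
        yield deform(frame, defect_mask)

mask = np.zeros((2, 2), dtype=bool)
mask[:, 0] = True                                    # left column defective
frames = [np.full((2, 2), float(i)) for i in range(3)]
blank_out_defect = lambda f, m: np.where(m, 0.0, f)  # placeholder deformation
processed = list(process_stream(frames, mask, blank_out_defect))
```

Because the generator is lazy, frames are processed one at a time as the camera delivers them, which matches the synchronous capture-and-display loop described above.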
Example two
This embodiment discloses augmented reality glasses, also called AR glasses, which specifically comprise:
an image acquisition unit for acquiring a live-action image reflecting what the user's visual field sees; preferably, the image acquisition unit adopts a single camera or dual cameras arranged on the AR glasses body;
a database unit for storing a recallable visual field image defect mode that records the defect area of the user's visual field;
an image processing unit for recalling the visual field image defect mode and deforming the live-action image to obtain a processed image that the user can see in full;
and an image display unit for outputting the processed image; preferably, the image display unit adopts a near-eye display commonly used in AR glasses.
In some embodiments of the present invention, the image acquisition unit may employ a single camera or multiple cameras to acquire the live-action image reflecting the user's visual field; to ensure the accuracy of subsequent detection, the cameras acquire the live-action image centered on the user's natural line of sight, ensuring that the live-action image truly reflects the user's visual field.
In some embodiments of the present invention, the image processing unit deforms the live-action image into a visible region lying entirely outside the defect region of the live-action image; specifically, the image processing unit may deform all or part of the live-action image to bypass the defect region, so that the entire content of the live-action image falls within the visible region of the corresponding live-action image, which improves the user's visual experience.
Preferably, as shown in fig. 3, the image processing unit compresses the live-action image in size to obtain a compressed image and moves the compressed image into the visible area of the live-action image, yielding the processed image.
In some embodiments of the present invention, in addition to still pictures, the invention can also process video composed of continuous live-action images, further improving the patient's visual experience. This includes: the image acquisition unit continuously acquires a plurality of continuous live-action images reflecting what the user's visual field sees; and the image processing unit deforms the plurality of continuous live-action images according to the visual field image defect mode to obtain a plurality of continuous processed images that the user can see in full. The invention saves the data of the labeled defect area as a tubular visual field defect mode in the database unit, automatically adjusts subsequent continuous live-action images, and, over long-term use, draws on the user's usage records to output better continuous processed images.
With the augmented reality glasses disclosed by the invention, the defect region can be detected to obtain the visual field defect mode, specifically as follows:
the image acquisition unit also collects a detection image reflecting the user's visual field;
the image display unit also displays the detection image;
the AR glasses further comprise a control unit for labeling the defect region that the user sees in the detection image;
and the database unit also stores the labeling result as the visual field image defect mode.
The control unit is used to label the defect area that the user sees in the detection image. It is provided with a microprocessor or a similar data-processing chip, which may be of any type used in existing AR glasses, and it also comprises an interaction device for interacting with the user.
In some embodiments of the present invention, to ensure the accuracy of subsequent detection, the image acquisition unit collects the detection image centered on the user's natural line of sight, ensuring that the detection image truly reflects the user's visual field.
In some embodiments of the present invention, the user may label the defect area with the control unit in a number of ways, as long as the relevant processing unit can identify the resulting defect area. The present invention preferably uses the control unit to label the edge of the defect area: the edge where the defect area meets the visible area in the detection image, i.e., the outer edge of the defect area, is labeled, and labeling all outer edges forms a closed region, which is the defect area. This labeling method adapts well to defect areas of various shapes, is relatively simple to operate, and is suitable for manual labeling by users.
As shown in fig. 8, the control unit includes:
a touch pad 2 for controlling the movement of the cursor 1 displayed on the detection image; when the user touches the touch pad 2 with a finger, the cursor 1 moves synchronously with the sliding finger;
and a marking key 3 which, when triggered, marks the position on the detection image corresponding to the cursor 1; used together with the touch pad 2, it lets the user move the cursor 1 while marking, producing a continuous linear labeling result.
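The press-record-release interaction of the touch pad 2 and marking key 3 can be modelled as a small state machine; this is an illustrative sketch, with all class and method names assumed:

```python
class MarkingSession:
    # While the marking key is held down, cursor positions reported by the
    # touch pad are recorded as one continuous linear labelling segment.
    def __init__(self):
        self.segments = []        # completed edge segments
        self._current = None      # segment being recorded, or None

    def press_key(self):
        self._current = []

    def move_cursor(self, pos):
        if self._current is not None:   # ignore moves while the key is up
            self._current.append(pos)

    def release_key(self):
        if self._current:
            self.segments.append(self._current)
        self._current = None

session = MarkingSession()
session.move_cursor((0, 0))       # key up: cursor moves are not recorded
session.press_key()
session.move_cursor((1, 1))
session.move_cursor((1, 2))
session.release_key()
```

Repeating press/slide/release once per outer edge accumulates the set of trajectories that together close off the defect area.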
In addition, through corresponding human-computer interaction keys arranged on it, the control unit can also switch the image acquisition unit on and off, adaptively adjust the image after the subsequent deformation processing of the tubular visual field defect region, and perform similar functions.
In some embodiments of the present invention, to streamline the user's labeling process and obtain a more accurate labeling result, the augmented reality glasses further comprise an image processing unit that can locally enlarge the detection image. In the embodiment shown in fig. 6 and 7, the detection image is divided into four regions: a first, second, third, and fourth quadrant. The third quadrant is enlarged so that the edge to be labeled is more prominent and visible to the user; preferably it is enlarged to the size of the original detection image or to fill the display showing the original detection image. The user then labels the defect area shown in the enlarged third quadrant. This makes the user's labeling process more convenient and accurate.
In some embodiments of the present invention, because the patient's vision is poor, a detection image that is blurry in its own content will make labeling inconvenient and distort the result; in particular, local enlargement of the detection image may reduce its resolution. A method for optimizing the sharpness of the detection image is therefore required: the image processing unit adjusts the brightness, contrast, and sharpness of the original or enlarged detection image, either one of them individually or all of them together. Adjusting brightness, contrast, and sharpness makes the edge where the defect region meets the visible region more prominent, so the user can easily label it more accurately.
This embodiment discloses a specific process for obtaining a tubular visual field defect mode using the augmented reality glasses, comprising the following steps:
a user first wears glasses (AR glasses) of natural shape, which comprise at least an on-lens image acquisition unit (one or more cameras), a control unit, and an image display unit (a light-transmissive near-eye display); the control unit comprises the touch pad 2 and the marking key 3 shown in fig. 8;
a detection image is acquired by the image acquisition unit while the image display unit of the AR glasses displays a static full-screen solid-color image; at this point a defect region similar to that shown in fig. 1 appears in the user's visual field;
the user marks the edge of the defect region with the touch pad 2 and the marking key 3: as the user's finger slides over the touch pad, the cursor 1 moves correspondingly; pressing the marking key 3 starts recording the trajectory of the cursor 1; the user keeps sliding to trace the edge, and pressing the key again completes one trajectory; this process is repeated until the edges of all defect regions are marked;
during labeling, a local area of the detection image can be enlarged, and its brightness, contrast and sharpness can be adjusted, making the labeling result more detailed and accurate;
and the data of the labeling result are stored in the database unit as the initial tubular visual field defect mode; as personalized data of the user, they serve as the basis for the subsequent deformation processing of live-action images.
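The subsequent deformation processing (size compression of the live-action image followed by moving it into the visible region, as in claims 3 and 4) could be sketched as follows. The binary defect mask derived from the stored defect mode, the bounding-box placement, and the nearest-neighbour downscaling are all assumptions of this illustration, not the patent's prescribed method.

```python
import numpy as np

def warp_into_visible(live: np.ndarray, defect_mask: np.ndarray) -> np.ndarray:
    """Compress a grayscale live-action image so it fits entirely inside
    the visible region (pixels where defect_mask is False): size
    compression followed by a move into the visible area."""
    ys, xs = np.where(~defect_mask)          # coordinates of visible pixels
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    vh, vw = bottom - top, right - left
    # nearest-neighbour resize of the full frame down to the visible box
    h, w = live.shape[:2]
    rows = np.arange(vh) * h // vh
    cols = np.arange(vw) * w // vw
    small = live[rows][:, cols]
    out = np.zeros_like(live)                # defect region stays blank
    out[top:bottom, left:right] = small
    return out
```

For a tubular (tunnel) visual field, the visible box is the remaining central region, so the whole scene ends up displayed inside the area the user can actually see.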
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. A tubular visual field image deformation display method based on augmented reality is characterized by comprising the following steps:
acquiring and obtaining a live-action image reflecting the view of a user;
calling a visual field image defect mode of a defect area reflecting the visual field of the user;
carrying out deformation processing on the live-action image by adopting the visual field image defect mode to obtain a processed image which can be completely seen by a user;
and outputting the processed image.
2. The augmented reality-based tubular-field image deformation display method according to claim 1, wherein the acquiring and obtaining of the live-action image reflecting the view of the user comprises:
and collecting the live-action image by taking the natural sight line of the user as a center.
3. The augmented reality-based tubular visual field image deformation display method according to claim 1, wherein the deforming the live-action image by using the visual field image defect mode to obtain a processed image which can be completely seen by a user comprises:
and deforming the live-action image into a visible region entirely outside the defect region of the live-action image.
4. The augmented reality-based tubular-field image deformation display method according to claim 3, wherein the deforming the live-action image into a visible region entirely outside a defective region of the live-action image includes:
and carrying out size compression on the live-action image to obtain a compressed image, and moving the compressed image to the visible region outside the defect region.
5. The augmented reality-based tubular-field image deformation display method according to claim 1, wherein acquiring and obtaining a live-action image reflecting a view of a user comprises:
continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the deformation processing performed on the live-action image by adopting the visual field image defect mode to obtain a processed image which can be completely seen by a user comprises:
and carrying out deformation processing on the plurality of continuous live-action images by adopting the visual field image defect mode to obtain a plurality of continuous processing images which can be completely seen by a user.
6. The augmented reality-based tubular visual field image deformation display method according to claim 1, further comprising a method for acquiring the visual field image defect mode:
collecting and obtaining a detection image reflecting the visual field of a user;
displaying the detection image;
labeling a defect area in the detection image seen by a user;
and saving the labeling result as a visual field image defect mode.
7. The augmented reality-based tubular-field image deformation display method according to claim 6, wherein the labeling of the defective region in the detected image seen by the user includes:
and under the condition of enlarging the image, marking the edge of the defect area in the image.
8. Augmented reality glasses, comprising:
the image acquisition unit is used for acquiring and obtaining a live-action image reflecting the view of the user;
the database unit is used for storing a callable visual field image defect mode reflecting the defect region of the user's visual field;
the image processing unit is used for calling the visual field image defect mode to carry out deformation processing on the live-action image so as to obtain a processed image which can be completely seen by a user;
and the image display unit is used for outputting the processed image.
9. Augmented reality glasses according to claim 8, wherein the capturing and obtaining of live-action images reflecting what the user's field of view sees comprises:
and collecting the live-action image by taking the natural sight line of the user as a center.
10. The augmented reality glasses according to claim 8, wherein the invoking of the visual field image defect mode to perform the deformation processing on the live-action image to obtain a processed image that can be completely seen by a user comprises:
and deforming the live-action image into a visible region entirely outside the defect region of the live-action image.
11. The augmented reality glasses of claim 10 wherein the warping the live-action image to a visible region entirely outside the defective region of the live-action image comprises:
and carrying out size compression on the live-action image to obtain a compressed image, and moving the compressed image to the visible region outside the defect region.
12. Augmented reality glasses according to claim 8, wherein the capturing and obtaining of live-action images reflecting what the user's field of view sees comprises:
continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the step of calling the visual field image defect mode to perform deformation processing on the live-action image to obtain a processed image which can be completely seen by a user comprises the following steps:
and calling the visual field image defect mode to carry out deformation processing on the plurality of continuous live-action images reflecting the visual field of the user, so as to obtain a plurality of continuous processed images which can be completely seen by the user.
13. Augmented reality glasses according to claim 8,
the image acquisition unit is also used for acquiring and obtaining a detection image reflecting the visual field of the user;
the image display unit is also used for displaying the detection image;
augmented reality glasses further include:
the control unit is used for marking the defect area in the detection image seen by the user;
the database unit is also used for storing the labeling result as a visual field image defect mode.
14. The augmented reality glasses of claim 13 further comprising an image processing unit for magnifying the image before the control unit performs the annotation;
the control unit is further configured to label an edge of the defect area in the image when the image is enlarged.
15. Augmented reality glasses according to claim 13, wherein the control unit comprises:
a touch pad for controlling movement of a cursor displayed on the image;
and the marking key is used for marking the position on the image corresponding to the cursor.
CN201910829040.XA 2019-09-03 2019-09-03 Tubular visual field image deformation display method and glasses based on augmented reality Pending CN110706164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910829040.XA CN110706164A (en) 2019-09-03 2019-09-03 Tubular visual field image deformation display method and glasses based on augmented reality

Publications (1)

Publication Number Publication Date
CN110706164A true CN110706164A (en) 2020-01-17

Family

ID=69193448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910829040.XA Pending CN110706164A (en) 2019-09-03 2019-09-03 Tubular visual field image deformation display method and glasses based on augmented reality

Country Status (1)

Country Link
CN (1) CN110706164A (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2355185C (en) * 1998-12-16 2007-11-20 Arthesys Catheter system for release of embolization coils by hydraulic pressure
CN103025228A (en) * 2010-05-06 2013-04-03 Ucl商业有限公司 A supra-threshold test and a sub-pixel strategy for use in measurements across the field of vision
JP2014131551A (en) * 2013-01-07 2014-07-17 Akira Takebayashi Navigation device for endoscope
CN104011203A (en) * 2011-08-05 2014-08-27 玛丽亚·皮娅·卡斯马 Methods Of Treatment Of Retinal Degeneration Diseases
US20150234187A1 (en) * 2014-02-18 2015-08-20 Aliphcom Adaptive optics
US20160037137A1 (en) * 2014-07-31 2016-02-04 Philip Seiflein Sensory perception enhancement device
CN105404393A (en) * 2015-06-30 2016-03-16 指点无限(美国)有限公司 Low-latency virtual reality display system
CN108198249A (en) * 2018-03-07 2018-06-22 上海交通大学医学院附属第九人民医院 A kind of defect of visual field simulator and method based on virtual reality technology
WO2018233293A1 (en) * 2017-06-23 2018-12-27 芋头科技(杭州)有限公司 Imaging display system
CN109344719A (en) * 2018-09-03 2019-02-15 国网天津市电力公司 Substation equipment information query method based on augmented reality and intelligent glasses
WO2019067779A1 (en) * 2017-09-27 2019-04-04 University Of Miami Digital therapeutic corrective spectacles
US20190184297A1 (en) * 2015-04-22 2019-06-20 Romanenko Ruslan Vertical wind tunnel skydiving simulator
CN110084811A (en) * 2019-05-08 2019-08-02 上海交通大学 Fundus photograph recognition methods, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113687513A (en) * 2021-08-26 2021-11-23 北京邮电大学 Regional visual field transfer system for people with visual field loss
CN114842173A (en) * 2022-04-15 2022-08-02 北华航天工业学院 Augmented reality system and control method thereof
CN114842173B (en) * 2022-04-15 2023-08-29 北华航天工业学院 Augmented reality system and control method thereof

Similar Documents

Publication Publication Date Title
US10129520B2 (en) Apparatus and method for a dynamic “region of interest” in a display system
US10867449B2 (en) Apparatus and method for augmenting sight
JP7252144B2 (en) Systems and methods for improved ophthalmic imaging
CN109633907B (en) Method for automatically adjusting brightness of monocular AR (augmented reality) glasses and storage medium
US20180104106A1 (en) A system and method for displaying a video image
CN110706164A (en) Tubular visual field image deformation display method and glasses based on augmented reality
CN110728651A (en) Tubular visual field image deformation detection method based on augmented reality and glasses
US7347550B2 (en) Ophthalmologic image taking apparatus
US20230244307A1 (en) Visual assistance
CN110584588A (en) Method and device for detecting visual field defect
CN110584587A (en) Method and device for compensating visual field defect
GB2612366A (en) Method and system for eye testing
CN115877573A (en) Display method, head-mounted display device, and storage medium
CN115883816A (en) Display method and device, head-mounted display equipment and storage medium
JP2012103753A (en) Image processing method, image processor and correction processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200117)