CN110728631A - Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses - Google Patents

Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses

Info

Publication number
CN110728631A
Authority
CN
China
Prior art keywords
image
live-action
dynamic contrast
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910833070.8A
Other languages
Chinese (zh)
Inventor
张志扬
苏进
于勇
李琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aibo Tongxin Medical Technology Co Ltd
Original Assignee
Beijing Aibo Tongxin Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibo Tongxin Medical Technology Co Ltd filed Critical Beijing Aibo Tongxin Medical Technology Co Ltd
Publication of CN110728631A publication Critical patent/CN110728631A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20064 Wavelet transform [DWT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation

Abstract

The invention discloses an augmented-reality-based image dynamic contrast enhancement method and augmented reality glasses. The method comprises the following steps: acquiring a live-action image reflecting the user's field of view; determining a zoom magnification and zooming the live-action image accordingly; applying dynamic contrast enhancement to the zoomed live-action image to a degree corresponding to the zoom magnification to obtain an enhanced image; and displaying the enhanced image on a near-eye display. For zoomed images, the invention strengthens the light-dark contrast of objects in the image through dynamic contrast enhancement rather than by increasing the image resolution, thereby improving the overall recognizability of the image. This overcomes a long-standing technical bias and solves the problem that magnified images often have blurred, indistinct edges. In particular, the invention improves the effective vision of users with low vision or legal blindness and can thereby greatly improve their quality of life.

Description

Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses
Technical Field
The invention relates to the technical field of augmented reality, in particular to an image dynamic contrast enhancement method based on augmented reality and augmented reality glasses.
Background
Augmented Reality (AR) technology fuses the virtual world with the real world by computing the position and angle of an image in real time and superimposing corresponding images, video, and 3D models onto it. An AR client can use picture-recognition material stored locally in the client to perform real-time image recognition of the user's offline environment and, at the position of a recognized offline target in the real scene, display the corresponding data in an enhanced manner with a pre-configured display effect.
The image quality of an AR display device depends mainly on its near-eye optics, and one of the most important parameters in near-eye optical design is the field of view (FOV). In an optical instrument, the FOV is the angle, with the instrument's lens as the vertex, subtended by the two edges of the largest range through which the image of the measured object can pass. The FOV determines the instrument's field of view: a larger FOV provides a wider field of view but a smaller optical magnification. A large FOV therefore brings a wider view, displays more content, and gives a more immersive experience. For lightweight near-eye display devices such as AR glasses, however, most FOVs do not exceed 40 degrees; for example, the FOV of Google Glass is only around a dozen degrees, and the FOV of Microsoft's benchmark product HoloLens reaches nearly 30°.
In summary, when the FOV is smaller than 40°, AR glasses have little room to adjust optical magnification while preserving displayed image resolution, and the magnification used for image enlargement is generally no more than 2 times. The prior art therefore contains no method or corresponding device for large-magnification adjustment on AR glasses with an FOV below 40°. In addition, existing AR glasses are designed for users with normal vision or slight myopia (who demand high image resolution), whereas for users with low vision or legal blindness an enlarged image becomes even harder to identify clearly. There is thus a technical gap in the development and application of image zooming functions in this field.
Disclosure of Invention
In view of the above, the present invention provides an augmented-reality-based image dynamic contrast enhancement method and augmented reality glasses, so as to improve the recognizability, for users with low vision, of zoomed images displayed by an AR device.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
an image dynamic contrast enhancement method based on augmented reality comprises the following steps:
acquiring and obtaining a live-action image reflecting the view of a user;
determining a zoom ratio, and zooming the live-action image according to the zoom ratio;
carrying out dynamic contrast enhancement processing of corresponding degree on the zoomed live-action image according to the zoom magnification to obtain an enhanced image;
displaying the enhanced image in a near-eye display.
Further, the acquiring and obtaining a live-action image reflecting the view of the user comprises:
collecting the live-action image centered on the user's natural line of sight.
Further, the performing, according to the zoom magnification, dynamic contrast enhancement processing of a corresponding degree on the zoomed live-action image to obtain an enhanced image includes:
the larger the zoom magnification value is, the larger the degree of enhancement of the dynamic contrast of the zoomed live-action image is.
Further, the dynamic contrast enhancement processing includes:
s1: performing wavelet transformation on the zoomed live-action image;
s2: normalizing the wavelet coefficient obtained by the wavelet decomposition;
s3: performing dynamic contrast enhancement on the zoomed live-action image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization;
s4: and performing wavelet inverse transformation to obtain the enhanced image.
Further, the acquiring and obtaining a live-action image reflecting the view of the user comprises:
continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the dynamic contrast enhancement processing of the corresponding degree of the zoomed live-action image according to the zoom magnification comprises the following steps:
and performing dynamic contrast enhancement processing to the same degree on the plurality of continuous live-action images which are zoomed according to the same zoom magnification.
Further, after obtaining the enhanced image and before displaying the enhanced image in a near-eye display, the method further comprises:
acquiring a visual field image defect mode of a defect area reflecting the visual field of a user;
and performing deformation processing and/or movement on the enhanced image according to the visual field image defect mode, so as to obtain an enhanced image located in the visible region outside the user's visual field defect region.
Further, the acquiring a defective view image pattern reflecting a defective region of the user's view includes:
collecting and obtaining a detection image reflecting the visual field of a user;
displaying the detection image;
marking a defect area in a detection image seen by a user;
and saving the labeling result as the visual field image defect mode.
The invention also discloses augmented reality glasses, comprising:
the image acquisition unit is used for acquiring a live-action image reflecting the view of the user;
a control unit for determining a zoom magnification;
an image processing unit configured to:
zooming the live-action image according to the determined zooming magnification;
carrying out dynamic contrast enhancement processing of corresponding degree on the zoomed live-action image according to the zoom magnification to obtain an enhanced image;
an image display unit for displaying the enhanced image in a near-eye display manner.
Further, the acquiring and obtaining a live-action image reflecting the view of the user comprises: collecting the live-action image centered on the user's natural line of sight.
Further, the performing, according to the zoom magnification, dynamic contrast enhancement processing of a corresponding degree on the zoomed live-action image to obtain an enhanced image includes:
the larger the zoom magnification value is, the larger the degree of enhancement of the dynamic contrast of the zoomed live-action image is.
Further, the dynamic contrast enhancement processing includes:
s1: performing wavelet transformation on the zoomed live-action image;
s2: normalizing the wavelet coefficient obtained by the wavelet decomposition;
s3: performing dynamic contrast enhancement on the zoomed live-action image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization;
s4: and performing wavelet inverse transformation to obtain the enhanced image.
Further, comprising:
the acquiring and obtaining of the live-action image reflecting the view of the user comprises: continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the dynamic contrast enhancement processing of the corresponding degree of the zoomed live-action image according to the zoom magnification comprises the following steps:
and performing dynamic contrast enhancement processing of the same degree on the plurality of continuous live-action images which are zoomed according to the same zoom magnification.
Further, the image processing unit is further configured to:
after obtaining the enhanced image and before displaying the enhanced image in a near-eye display mode, acquiring a visual field image defect mode reflecting a defect area of a visual field of a user;
and performing deformation processing and/or movement on the enhanced image according to the visual field image defect mode, so as to obtain an enhanced image located in the visible region outside the user's visual field defect region.
Further, the image acquisition unit is also used for acquiring and obtaining a detection image reflecting the visual field of the user;
the image display unit is also used for displaying the detection image;
the control unit is also used for marking a defect area in the detection image seen by a user;
the augmented reality glasses further comprise a database unit used for storing the marked result as the visual field image defect mode.
Further, the control unit further includes:
and the contrast switch is used for controlling the image processing unit to turn on/off the dynamic contrast enhancement processing on the live-action image.
For the common series of AR products with an FOV below 40 degrees, the invention breaks with the conventional thinking, long pursued by those skilled in the art, of continually increasing image resolution, thereby overcoming a technical bias. For zoomed images, the invention strengthens the light-dark contrast of objects in the image through dynamic contrast enhancement rather than by increasing resolution, improving the overall recognizability of the image and solving the problem that magnified images are not clear. In particular, it improves the effective vision of users with low vision or legal blindness and can greatly improve their quality of life.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an augmented reality-based image dynamic contrast enhancement method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a dynamic contrast enhancement processing method according to an embodiment of the present invention;
fig. 3 is a three-level wavelet decomposition diagram used in the dynamic contrast enhancement processing method according to an embodiment of the present invention;
FIG. 4 is a region division diagram of enhanced image warping/shifting according to an embodiment of the present invention;
FIG. 5 is a region division diagram of a detected image labeled with a defective region according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a control unit according to an embodiment of the present invention.
Description of reference numerals:
1: cursor   2: touch pad   3: marking key
Detailed Description
The embodiments of the present invention and the features of the embodiments may be combined with each other provided that they do not conflict.
Fig. 1 shows an augmented reality-based image dynamic contrast enhancement method, which includes the following steps:
(1) acquiring a live-action image reflecting the user's field of view; the live-action image changes synchronously with the rotation of the user's head or eyes, so that the acquired image truly reflects the user's actual field of view; the live-action image is the basis of the dynamic contrast enhancement processing, and the user observes the surroundings through the live-action image output by the AR device rather than directly with the naked eye;
(2) determining a zoom magnification and zooming the live-action image accordingly to achieve accurate zooming; the zoom magnification can be determined in several ways: the user may directly input a specific value, or may zoom the live-action image in real time while the AR device (including AR glasses) automatically determines the magnification from the final zoom state; normally, to keep a magnified image recognizable, an image can only be magnified about 2 times without increasing its resolution, whereas with the invention the image can be magnified 4-8 times or even more, so its magnification capability far exceeds that of conventional AR devices;
(3) performing dynamic contrast enhancement on the zoomed live-action image to a degree corresponding to the zoom magnification to obtain an enhanced image; the lower the effective resolution of the zoomed live-action image, the stronger the enhancement applied, so that object edges in the enhanced image become more prominent and the recognizability of objects is maintained or even improved although the resolution is not increased;
(4) displaying the enhanced image on a near-eye display, the display method commonly used in AR glasses. A minimal sketch of this pipeline is given after this list.
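The following is a minimal sketch of this pipeline in Python with OpenCV. The `enhancement_degree` mapping, the linear contrast stretch used as a stand-in for the wavelet-based enhancement of steps S1-S4 described below, and the file name are illustrative assumptions, not the patent's reference implementation.

```python
import cv2
import numpy as np

def enhancement_degree(zoom: float) -> float:
    """Map the zoom magnification to an enhancement strength in [0, 1].
    Assumption: strength is 0 at zoom <= 1 and grows toward 1 near 8x."""
    return float(np.clip((zoom - 1.0) / 7.0, 0.0, 1.0))

def process_frame(frame: np.ndarray, zoom: float) -> np.ndarray:
    """Zoom the live-action frame, then enhance its contrast in proportion to the zoom."""
    h, w = frame.shape[:2]
    zoomed = cv2.resize(frame, (int(w * zoom), int(h * zoom)),
                        interpolation=cv2.INTER_LINEAR)
    strength = enhancement_degree(zoom)
    if strength == 0.0:
        return zoomed
    # Stand-in for the wavelet-based dynamic contrast enhancement of steps S1-S4:
    # a simple linear contrast stretch whose gain grows with the zoom magnification.
    return cv2.convertScaleAbs(zoomed, alpha=1.0 + strength, beta=0)

# Usage (hypothetical file name):
# frame = cv2.imread("live_view.png")
# enhanced = process_frame(frame, zoom=4.0)   # shown on the near-eye display
```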
By means of the dynamic contrast enhancement method, the invention improves the recognizability of the zoomed image: object edges in the image are enhanced and become more prominent in the enhanced image. The invention therefore serves not only ordinary users but also users with low vision, giving low-vision users a good visual experience and greatly improving their quality of life.
In some embodiments of the present invention, a single camera or multiple cameras may be used to acquire the live-action image reflecting the user's field of view. To ensure the accuracy of subsequent detection, the cameras capture the live-action image centered on the center line of the user's natural line of sight, ensuring that the image truly reflects the position and range of the user's field of view. Preferably, the captured live-action image may be larger than the user's field of view, with the user's field of view contained within it.
In some embodiments of the present invention, the value of the zoom magnification reflects whether the live-action image is enlarged or reduced. When the zoom magnification is 1, the live-action image remains unchanged, neither enlarged nor reduced. When the zoom magnification is greater than 1, the live-action image is enlarged; for example, a zoom magnification of 4 enlarges the live-action image 4 times. The larger the magnification, the more the image is enlarged and the lower its definition, and therefore the stronger the enhancement applied to the enhanced image. When the zoom magnification is smaller than 1, the live-action image is reduced; since reduction does not noticeably lower the resolution, the reduced live-action image may be given dynamic contrast enhancement in the same proportional manner, or the enhancement may be omitted and recognizability improved in other ways, so that users with low vision still have a good visual experience. The specific methods are described in detail below.
The invention further discloses a dynamic contrast enhancement processing method for performing dynamic contrast enhancement on the zoomed live-action image to obtain an enhanced image. As shown in figure 2, the method specifically comprises the following steps:
step S1: wavelet transformation is carried out on live-action image f (x, y)
The wavelet transform is carried out on the original image, and 4 frequency bands, 3 directional high-frequency sub-bands and a low-frequency sub-band which represent different frequency characteristics and directional characteristics of the image can be obtained. Fig. 3 is an exploded view of a three-level wavelet of an image. High-frequency subband HLj,LHj,HHj(j is 1,2,3, j is the wavelet decomposition level) as the high frequency component of the image, i.e. the high frequency detail of the original imageWherein HLjFor high-frequency information in the horizontal direction of the image, which reflects high-frequency edge details in the vertical direction of the image, LHjFor high-frequency information in the vertical direction of the image, which reflects high-frequency edge details in the horizontal direction of the image, HHjThe high-frequency information in the diagonal direction of the image reflects the comprehensive change information of the gray scale of the image in the horizontal direction and the vertical direction, and simultaneously contains a small amount of edge information. Low frequency sub-band LLjThe low frequency band retains information of the original image content for the low frequency components of the image, concentrating the energy of the image. For the low frequency sub-band LLjThe decomposition is continued to obtain multi-level decomposition levels, and different decomposition levels correspond to different resolutions.
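A short sketch of this three-level decomposition using the PyWavelets library follows; the choice of the 'db2' wavelet basis and the input file name are assumptions, since the patent does not name a specific wavelet.

```python
import cv2
import pywt

# Load the (zoomed) live-action image as a grayscale array f(x, y).
f = cv2.imread("zoomed_live_view.png", cv2.IMREAD_GRAYSCALE)

# Three-level 2D discrete wavelet transform.
# coeffs = [LL3, details at level 3, details at level 2, details at level 1], where each
# details entry is a tuple of (horizontal, vertical, diagonal) high-frequency sub-bands.
coeffs = pywt.wavedec2(f, wavelet="db2", level=3)

LL3 = coeffs[0]  # low-frequency sub-band: concentrates the image energy/content
for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    level = 4 - i  # coeffs[1] is the coarsest detail level (j = 3)
    print(f"level {level}: detail sub-band shapes {cH.shape}, {cV.shape}, {cD.shape}")

# Step S4 later inverts this decomposition:
reconstructed = pywt.waverec2(coeffs, wavelet="db2")
```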
Step S2: normalize the wavelet coefficients.
The wavelet coefficients of the image obtained in step S1 (where j is the number of wavelet decomposition levels and f(x, y) is the original image) are normalized so that their values fall in the range [0, 1]; the normalized wavelet coefficients are denoted z_j(x, y). (The coefficient notation and the normalization formula are given only as equation images, Figure BDA0002191350890000081 and Figure BDA0002191350890000083, in the original publication.)
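Since the normalization formula is available only as an image, the sketch below assumes a per-sub-band min-max normalization; it matches the stated [0, 1] range but is not necessarily the exact formula used in the patent.

```python
import numpy as np

def normalize_band(w: np.ndarray):
    """Min-max normalize one sub-band of wavelet coefficients to [0, 1].
    Returns the normalized band z_j together with the original range,
    so the coefficients can be mapped back after enhancement."""
    w = w.astype(np.float64)
    w_min, w_max = w.min(), w.max()
    if w_max == w_min:                      # constant band: avoid division by zero
        return np.zeros_like(w), (w_min, w_max)
    return (w - w_min) / (w_max - w_min), (w_min, w_max)

def denormalize_band(z: np.ndarray, w_range) -> np.ndarray:
    """Map normalized coefficients back to the original coefficient range."""
    w_min, w_max = w_range
    return z * (w_max - w_min) + w_min
```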
Step S3: enhance the image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization.
the wavelet transform has a good frequency characteristic, and can perfectly preserve the gradient information of an image. By utilizing the multi-resolution characteristic of wavelet transformation, ideal image gray distribution can be obtained, and because the image energy is concentrated in LL low-frequency sub-bands and the edge detail part is concentrated in high-frequency sub-bands, the ideal filters can be selected in a classified mode. The multi-scale wavelet gradient information of the image can be calculated through the wavelet transformation coefficients. The wavelet gradient vector modulus of each wavelet layer is as follows:
Figure BDA0002191350890000091
formula (2) represents the intensity of the wavelet vector modulo maximum gray level variation, the wavelet gradient phase argument
Figure BDA0002191350890000092
Indicating the direction of maximum gray scale change.
In theory, the wavelet gradient vector modulus caused by noise decreases rapidly as the scale increases: at large scales the modulus of the noise gradient vector is small, while at small scales the noise gradient modulus is larger. Based on how noise manifests at different scales, different filter coefficients can be chosen at different scales to modify the gradient vector moduli at those scales. This improves image contrast while effectively suppressing noise, better resolving the conflict between noise and image contrast and yielding a more satisfactory contrast-enhanced image.
Step S3.1: map the normalized wavelet coefficients z_j(x, y) to wavelet gradient information t_j(x, y) by means of a linear filter related to the resolution information.
The normalized wavelet coefficients z_j(x, y) are mapped to t_j(x, y), where t_j(x, y) is the normalized wavelet gradient information whose values still lie in the interval [0, 1]:
t_j(x, y) = x_j × H_j(z_j(x, y))    (3)
Since the normalized wavelet coefficients satisfy z_j(x, y) ≤ 1, a linear filter can be used for the filtering. Here H_j is a linear filter related to the wavelet multi-resolution information and x_j is the filter coefficient.
Step S3.2: traverse the wavelet coefficients of each band of the wavelet decomposition to obtain the maximum value of each band, and compute the coefficient x_j of the linear filter of step S3.1 according to formula (4). (Formula (4) is given only as an equation image, Figure BDA0002191350890000101, in the original publication.)
Step S3.3: compute the wavelet gradient information t_j(x, y) from the linear filter coefficient x_j according to formula (3).
Step S3.4: after the enhanced gradient information has been computed, convert the gradient information back into wavelet coefficients according to formula (3).
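Because formula (4) and the exact filter H_j are given only as equation images, the following sketch is speculative: it swaps in a simple nonlinear coefficient stretch (a common wavelet-domain detail boost) for H_j, derives x_j from the band maximum so that t_j stays within [0, 1], and is meant only to show where the per-band filtering of steps S3.1-S3.4 sits in the pipeline, not to reproduce the patent's arithmetic.

```python
import numpy as np

def band_filter_coefficient(h_band: np.ndarray) -> float:
    """Assumed reading of formula (4): x_j is derived from the band maximum
    so that x_j * H_j(z_j) remains within [0, 1]."""
    band_max = float(np.abs(h_band).max())
    return 1.0 / band_max if band_max > 0 else 1.0

def gradient_enhance_band(z_band: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Sketch of steps S3.1-S3.4 on one normalized sub-band z_j in [0, 1].
    H_j is replaced here by a power-law stretch that lifts weak detail (edges);
    the patent instead specifies a linear filter whose form is not legible."""
    h_j = np.power(z_band, 1.0 - strength)   # stand-in for the filter response H_j(z_j)
    x_j = band_filter_coefficient(h_j)       # formula (4), assumed form
    t_j = np.clip(x_j * h_j, 0.0, 1.0)       # formula (3): t_j = x_j * H_j(z_j)
    return t_j                               # handed on to the equalization of step S3.5
```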
Step S3.5: perform double histogram equalization on the wavelet coefficients obtained in step S3.4.
(1) Perform conventional histogram equalization on the wavelet coefficients restored in step S3.4.
(2) Apply a uniform-distribution remapping to the result of the conventional histogram equalization to obtain the double histogram equalization result for the wavelet coefficients.
During actual histogram processing, some gray levels receive no pixels while others receive a concentrated pixel distribution; the number of effective gray levels is therefore reduced, a gray-level "faulting" phenomenon appears, and image detail is lost as adjacent gray levels merge. To solve this problem, the gray levels that actually remain after equalization are adjusted so that they become uniformly distributed. The statistics of the actual gray levels are given by formulas (5), where n_i is a gray level and L_i is the gray level after conventional histogram equalization; these formulas are given only as equation images in the original publication. The second gray-level mapping then makes the distribution of the actual gray levels uniform using formula (6).
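Formulas (5) and (6) are only partly legible, so the sketch below shows a generic two-pass ("double") histogram equalization: a conventional CDF-based equalization followed by a second mapping that spreads the gray levels actually present uniformly over the range, which is the stated purpose of the second pass. Working on an 8-bit quantization of the restored coefficients is an assumption.

```python
import numpy as np

def equalize(levels8: np.ndarray) -> np.ndarray:
    """First pass: conventional histogram equalization of an 8-bit array."""
    hist = np.bincount(levels8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    scale = np.clip((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1.0), 0.0, 1.0)
    lut = np.round(scale * 255).astype(np.uint8)
    return lut[levels8]

def uniform_remap(levels8: np.ndarray) -> np.ndarray:
    """Second pass: remap the gray levels that actually occur so they are spread
    uniformly over [0, 255], countering the gray-level 'faulting' (empty and
    merged levels) left behind by the first equalization."""
    present = np.unique(levels8)
    targets = np.linspace(0, 255, num=len(present)).astype(np.uint8)
    lut = np.zeros(256, dtype=np.uint8)
    lut[present] = targets
    return lut[levels8]

def double_histogram_equalize(band: np.ndarray) -> np.ndarray:
    """Double histogram equalization of one restored coefficient band (step S3.5)."""
    lo, hi = float(band.min()), float(band.max())
    levels8 = np.round((band - lo) / max(hi - lo, 1e-12) * 255).astype(np.uint8)
    eq = uniform_remap(equalize(levels8))
    return eq.astype(np.float64) / 255.0 * (hi - lo) + lo   # back to coefficient range
```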
step S4: and performing wavelet inverse transformation to obtain an image with enhanced contrast.
In some embodiments of the present invention, dynamic contrast enhancement can be applied not only to a single static live-action image but also to a video composed of consecutive live-action images, which broadens the application scenarios of the invention. Specifically, this comprises:
continuously acquiring a plurality of live-action images reflecting the user's field of view;
determining a zoom magnification and zooming all of the live-action images to the same extent according to it.
Specifically, once the first live-action image has been processed by the dynamic contrast enhancement method, subsequent consecutive live-action images are enhanced to the same degree, so many of the calculation steps and intermediate results from the first pass can be reused for them. This greatly reduces the amount of computation required for dynamic contrast enhancement and improves its efficiency, making the method well suited to processing video composed of a large number of consecutive live-action images; a minimal sketch of this reuse follows.
A visual field defect means that the range of the visual field is impaired; a patient may have a tubular visual field, an irregular visual field defect area, or other conditions. For such users with low vision, the enhanced image can be processed further so that they obtain a better visual experience.
In some embodiments of the invention, before the enhanced image is displayed on the near-eye display, the following steps are performed:
first, a visual field image defect mode reflecting the defect region of the user's visual field is retrieved; this defect mode can be labeled and stored in advance according to the user's visual field defect and retrieved at any time;
then, the enhanced image is deformed and/or moved according to the visual field image defect mode, so as to obtain an enhanced image that lies in the visible region outside the user's visual field defect region and is therefore fully visible, i.e. the user can see all of the content contained in the enhanced image.
As shown in fig. 4, the dashed line in the figure represents the tubular visual field of a patient with a visual field defect, and the solid-line box outside the tubular visual field represents the un-zoomed live-action image reflecting the user's field of view. The user can obtain all of the information in the enhanced image only if the enhanced image lies inside the dashed line, so the enhanced image must be compressed into the visible region of the user's visual field. Preferably, the enhanced image is compressed directly with the visible region as the zoom center; the live-action image already lies within the visible region before the dynamic contrast enhancement processing is performed. A sketch of this fitting step is given below.
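The sketch assumes the stored field-of-view defect mode yields an axis-aligned bounding box (x, y, w, h) of the visible (e.g. tubular) region in display coordinates; that box representation and the OpenCV-based compositing are assumptions made for illustration.

```python
import cv2
import numpy as np

def fit_to_visible_region(enhanced: np.ndarray, display_shape, visible_box) -> np.ndarray:
    """Compress the enhanced image so that it lies entirely inside the user's
    visible region, using that region as the zoom center: the image is shrunk
    to the region's size and placed at its position on an otherwise empty frame."""
    x, y, w, h = visible_box
    canvas = np.zeros((display_shape[0], display_shape[1]) + enhanced.shape[2:],
                      dtype=enhanced.dtype)
    shrunk = cv2.resize(enhanced, (w, h), interpolation=cv2.INTER_AREA)
    canvas[y:y + h, x:x + w] = shrunk
    return canvas

# Usage (hypothetical values for a 1280x720 display with a central tubular field):
# framed = fit_to_visible_region(enhanced, (720, 1280), (480, 220, 320, 280))
```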
Further, the invention also discloses a method for acquiring a visual field image defect mode reflecting the defect region of the user's visual field, which specifically comprises:
(1) acquiring a detection image reflecting the user's visual field; since the detection image is essentially the same as the live-action image, the live-action image can be used as the detection image;
(2) displaying the detection image, preferably in the near-eye display mode commonly used in AR devices; the displayed field-of-view image contains one or more defect regions to be labeled;
(3) labeling the defect region in the detection image as seen by the user; the labeling result is highly individualized and accurately reflects the user's visual field defect;
(4) saving the labeling result as the visual field image defect mode.
Preferably, as shown in fig. 5, the elliptical area is a labeled defect region reflecting the defect in the user's visual field. The movable cursor 1 is used to trace the edge of the defect region; the solid-line part of the ellipse represents the portion already labeled, and the dashed-line part represents the portion not yet labeled. The cursor 1 may be controlled by the control unit shown in fig. 6, which includes a touch pad 2 for controlling the movement of the cursor 1 and a marking key 3 for controlling marking with the cursor 1.
Preferably, the gap region at the lower left of the ellipse can be magnified separately and then labeled, ensuring both the accuracy of the labeling result and the convenience of the labeling process.
It should be noted that the deformation processing step described above may be applied to the enhanced image produced by the dynamic contrast enhancement processing, or to the live-action image itself.
The invention also discloses augmented reality glasses, to which the augmented-reality-based image dynamic contrast enhancement method of the above embodiments can be applied in order to improve the recognizability of the zoomed image.
The augmented reality glasses disclosed by the invention specifically comprise: an image acquisition unit for acquiring a live-action image reflecting the user's field of view;
a control unit for determining a zoom magnification;
an image processing unit configured to:
zooming the live-action image according to the determined zooming magnification;
performing dynamic contrast enhancement processing of corresponding degree on the zoomed live-action image according to the zoom magnification to obtain an enhanced image;
an image display unit for displaying the enhanced image in a near-eye display manner.
In some embodiments of the invention, the image capturing unit is configured to capture the live-action image with a center line of a natural sight line of the user as a center.
In some embodiments of the present invention, the greater the zoom magnification determined by the control unit, the greater the degree of dynamic contrast enhancement applied to the image by the image processing unit.
In some embodiments of the present invention, as shown in fig. 2, the method for performing dynamic contrast enhancement processing on the live-action image by the image processing unit includes:
s1: performing wavelet transformation on the acquired and obtained live-action image;
s2: normalizing the wavelet coefficient obtained by the wavelet decomposition;
s3: performing dynamic contrast enhancement on the live-action image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization;
s4: and performing wavelet inverse transformation to obtain the enhanced image.
In some embodiments of the invention, comprising:
the image acquisition unit is used for continuously acquiring and obtaining a plurality of live-action images reflecting the view of the user;
the image processing unit is used for carrying out dynamic contrast enhancement processing of the same degree on a plurality of live-action images which are zoomed according to the same zoom magnification.
In some embodiments of the present invention, the image processing unit is further configured to:
acquiring a visual field image defect mode reflecting a defect area of a visual field of a user before displaying the enhanced image in a near-eye display mode;
and performing deformation processing and/or movement on the enhanced image according to the visual field image defect mode, so as to obtain an enhanced image located in the visible region outside the user's visual field defect region.
In some embodiments of the invention, comprising:
the image acquisition unit is also used for acquiring and obtaining a detection image reflecting the visual field of the user;
the image display unit is also used for displaying the detection image;
the control unit is also used for marking a defect area in the detection image seen by a user;
and the database unit is used for storing the marked result as the visual field image defect mode.
In some embodiments of the invention, the control unit further comprises:
and the contrast switch is used for controlling the image processing unit to turn on/off the dynamic contrast enhancement processing on the live-action image, so that a user can conveniently select and control the dynamic contrast enhancement processing.
In summary, the present invention discloses a usage flow of the augmented reality glasses, which specifically includes:
(1) the user first puts on the augmented reality glasses (AR glasses), on which an image acquisition unit (a single camera or multiple cameras), a control unit, an image processing unit, and an image display unit (a light-transmissive near-eye display) are arranged;
(2) the user turns the front of the head and the eyes toward the real environment that needs to be seen clearly;
(3) the image acquisition unit captures continuous live-action images centered on the user's natural line of sight;
(4) the image acquisition unit keeps capturing live-action images as the user's head and eyes move;
(5) the image processing unit first outputs the original continuous live-action images to the display unit (the light-transmissive near-eye display);
(6) the user adjusts the image magnification according to personal need (by finger touch, gesture control, voice command, or key control) to the state best suited to the user's visual ability;
(7) according to the chosen magnification, the image processing unit of the AR glasses automatically applies the dynamic contrast enhancement method to the zoomed live-action image, strengthening the light-dark contrast of objects in the image and improving visibility to help low-vision patients; other image processing methods (sharpening, brightness adjustment, and the like) can still be superimposed on the dynamic-contrast-enhanced image to improve its recognizability in other respects in parallel;
(8) as the user's head and eyes move (the AR glasses following the movement), the image processing unit applies dynamic contrast enhancement to the continuous live-action images captured by the image acquisition unit and outputs them to the display unit to form a video, thereby continuously improving the user's effective vision.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program that is stored in a storage medium and includes several instructions enabling a single-chip microcomputer, a chip, or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. An image dynamic contrast enhancement method based on augmented reality is characterized by comprising the following steps:
acquiring and obtaining a live-action image reflecting the view of a user;
determining a zoom ratio, and zooming the live-action image according to the zoom ratio;
carrying out dynamic contrast enhancement processing of corresponding degree on the zoomed live-action image according to the zoom magnification to obtain an enhanced image;
displaying the enhanced image in a near-eye display.
2. The method for enhancing image dynamic contrast based on augmented reality according to claim 1, wherein the acquiring and obtaining of the real-scene image reflecting the view of the user comprises:
collecting the live-action image centered on the user's natural line of sight.
3. The method for enhancing image dynamic contrast based on augmented reality according to claim 1, wherein the obtaining an enhanced image by performing dynamic contrast enhancement processing of a corresponding degree on the zoomed real image according to the zoom magnification comprises:
the larger the zoom magnification value is, the larger the degree of enhancement of the dynamic contrast of the zoomed live-action image is.
4. The augmented reality-based image dynamic contrast enhancement method according to claim 1, wherein the dynamic contrast enhancement process includes:
s1: performing wavelet transformation on the zoomed live-action image;
s2: normalizing the wavelet coefficient obtained by the wavelet decomposition;
s3: performing dynamic contrast enhancement on the zoomed live-action image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization;
s4: and performing wavelet inverse transformation to obtain the enhanced image.
5. The method for enhancing image dynamic contrast based on augmented reality according to claim 1, wherein the acquiring and obtaining of the real-scene image reflecting the view of the user comprises:
continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the dynamic contrast enhancement processing of the corresponding degree of the zoomed live-action image according to the zoom magnification comprises the following steps:
and performing dynamic contrast enhancement processing to the same degree on the plurality of continuous live-action images which are zoomed according to the same zoom magnification.
6. The augmented reality-based image dynamic contrast enhancement method of claim 1, wherein after obtaining the enhanced image and before displaying the enhanced image in a near-eye display, the method further comprises:
acquiring a visual field image defect mode of a defect area reflecting the visual field of a user;
and performing deformation processing and/or movement on the enhanced image according to the visual field image defect mode, so as to obtain an enhanced image located in the visible region outside the user's visual field defect region.
7. The augmented reality-based image dynamic contrast enhancement method of claim 6, wherein the acquiring a visual field image defect pattern reflecting a defect region of a visual field of the user comprises:
collecting and obtaining a detection image reflecting the visual field of a user;
displaying the detection image;
marking a defect area in a detection image seen by a user;
and saving the labeling result as the visual field image defect mode.
8. Augmented reality glasses, comprising:
the image acquisition unit is used for acquiring a live-action image reflecting the view of the user;
a control unit for determining a zoom magnification;
an image processing unit configured to:
zooming the live-action image according to the determined zooming magnification;
carrying out dynamic contrast enhancement processing of corresponding degree on the zoomed live-action image according to the zoom magnification to obtain an enhanced image;
an image display unit for displaying the enhanced image in a near-eye display manner.
9. Augmented reality glasses according to claim 8, wherein the capturing and obtaining of live-action images reflecting what the user's field of view sees comprises: collecting the live-action image centered on the user's natural line of sight.
10. The augmented reality glasses according to claim 8, wherein the obtaining an augmented image by performing dynamic contrast enhancement processing on the zoomed real image according to the zoom magnification to a corresponding degree comprises:
the larger the zoom magnification value is, the larger the degree of enhancement of the dynamic contrast of the zoomed live-action image is.
11. Augmented reality glasses according to claim 8, wherein the dynamic contrast enhancement process comprises:
s1: performing wavelet transformation on the zoomed live-action image;
s2: normalizing the wavelet coefficient obtained by the wavelet decomposition;
s3: performing dynamic contrast enhancement on the zoomed live-action image according to the multi-scale gradient vector modulus of the wavelet transform and histogram equalization;
s4: and performing wavelet inverse transformation to obtain the enhanced image.
12. Augmented reality glasses according to claim 8, comprising:
the acquiring and obtaining of the live-action image reflecting the view of the user comprises: continuously acquiring and obtaining a plurality of continuous live-action images reflecting the view of a user;
the dynamic contrast enhancement processing of the corresponding degree of the zoomed live-action image according to the zoom magnification comprises the following steps:
and performing dynamic contrast enhancement processing of the same degree on the plurality of continuous live-action images which are zoomed according to the same zoom magnification.
13. The augmented reality glasses of claim 8, wherein the image processing unit is further configured to:
after obtaining the enhanced image and before displaying the enhanced image in a near-eye display mode, acquiring a visual field image defect mode reflecting a defect area of a visual field of a user;
and performing deformation processing and/or movement on the enhanced image according to the visual field image defect mode, so as to obtain an enhanced image located in the visible region outside the user's visual field defect region.
14. Augmented reality glasses according to claim 13,
the image acquisition unit is also used for acquiring and obtaining a detection image reflecting the visual field of the user;
the image display unit is also used for displaying the detection image;
the control unit is also used for marking a defect area in the detection image seen by a user;
the augmented reality glasses further comprise a database unit used for storing the marked result as the visual field image defect mode.
15. The augmented reality glasses of claim 8 wherein the control unit further comprises:
and the contrast switch is used for controlling the image processing unit to turn on/off the dynamic contrast enhancement processing on the live-action image.
CN201910833070.8A 2019-09-03 2019-09-04 Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses Pending CN110728631A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019108295649 2019-09-03
CN201910829564 2019-09-03

Publications (1)

Publication Number Publication Date
CN110728631A true CN110728631A (en) 2020-01-24

Family

ID=69218884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910833070.8A Pending CN110728631A (en) 2019-09-03 2019-09-04 Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses

Country Status (1)

Country Link
CN (1) CN110728631A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102045507A (en) * 2009-10-21 2011-05-04 奥林巴斯映像株式会社 Image processing apparatus, imaging apparatus, and image processing method
US20120105636A1 (en) * 2010-11-01 2012-05-03 Elrob, Inc. Digital video projection display system
CN102868847A (en) * 2012-10-19 2013-01-09 北京奇虎科技有限公司 Image type based processing method and device
WO2019067779A1 (en) * 2017-09-27 2019-04-04 University Of Miami Digital therapeutic corrective spectacles

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
甄志明: "Research on Processing Methods for Foggy-Weather Digital Images", China Master's Theses Full-text Database, Information Science and Technology Series *
邹建成: "Mathematics and Its Applications in Image Processing", 31 July 2015 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325685A (en) * 2020-02-04 2020-06-23 北京锐影医疗技术有限公司 Image enhancement algorithm based on multi-scale relative gradient histogram equalization
CN111325685B (en) * 2020-02-04 2020-11-17 北京锐影医疗技术有限公司 Image enhancement algorithm based on multi-scale relative gradient histogram equalization

Similar Documents

Publication Publication Date Title
CN108376391B (en) Intelligent infrared image scene enhancement method
Cheng et al. Efficient histogram modification using bilateral Bezier curve for the contrast enhancement
US9720238B2 (en) Method and apparatus for a dynamic “region of interest” in a display system
CN110246108B (en) Image processing method, device and computer readable storage medium
US10672112B2 (en) Method and system for real-time noise removal and image enhancement of high-dynamic range images
US10410327B2 (en) Shallow depth of field rendering
US7796139B1 (en) Methods and apparatus for displaying a frame with contrasting text
JP4456819B2 (en) Digital image sharpening device
Zhang et al. Color image enhancement based on local spatial homomorphic filtering and gradient domain variance guided image filtering
CN110728631A (en) Image dynamic contrast enhancement method based on augmented reality and augmented reality glasses
CN110728630A (en) Internet image processing method based on augmented reality and augmented reality glasses
Zuo et al. Brightness preserving image contrast enhancement using spatially weighted histogram equalization.
TWI673997B (en) Dual channel image zooming system and method thereof
CN110597386A (en) Image brightness improving method based on augmented reality and augmented reality glasses
CN110717866B (en) Image sharpening method based on augmented reality and augmented reality glasses
KR102585573B1 (en) Content-based image processing
Bonetto et al. Image processing issues in a social assistive system for the blind
Garcia et al. Noise removal and real-time detail enhancement of high-dynamic-range infrared images with time consistency
WO2020084894A1 (en) Multi-camera system, control value calculation method and control device
US20200250457A1 (en) Systems and methods for modifying labeled content
JP6305942B2 (en) Image texture operation method, image texture operation device, and program
CN111080543A (en) Image processing method and device, electronic equipment and computer readable storage medium
Goel The implementation of image enhancement techniques using Matlab
Reddy et al. Underwater Image Enhancement Using Very Deep Super Resolution Technique
Maheshwary et al. Blind image sharpness metric based on edge and texture features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200124)