WO2019119372A1 - Display method and device, electronic device and computer program product

Info

Publication number
WO2019119372A1
Authority
WO
WIPO (PCT)
Prior art keywords
mapping relationship
environment
color
depth
image
Application number
PCT/CN2017/117837
Other languages
French (fr)
Chinese (zh)
Inventor
崔华坤
王恺
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2017/117837 (WO2019119372A1)
Priority to CN201780002902.0A (CN108140362B)
Publication of WO2019119372A1

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02 - Control arrangements or circuits for visual indicators characterised by the way in which colour is displayed
    • G09G5/026 - Control of mixing and/or overlay of colours in general

Definitions

  • The present application relates to the field of assistive technology for the blind and visually impaired, and in particular to a display method, device, electronic device and computer program product.
  • Visual disability is a serious public health, social and economic problem worldwide.
  • The main causes of blindness differ between economic regions: in developed regions, age-related macular degeneration, diabetic retinopathy and the like predominate, while in developing countries cataracts and infectious eye diseases are predominant.
  • Patients with visual disability are usually divided into blind patients and low-vision patients; blind people account for only about a quarter of all visually impaired patients, while low-vision patients account for nearly three quarters. According to statistics, there are between 40 and 45 million blind people and about 135 million low-vision patients worldwide, with about 7 million blind people and 21 million low-vision patients added each year. Patients with visual disability, especially the much larger group of low-vision patients, therefore face serious challenges. Although some low-vision patients can recover or improve vision through surgery and refractive correction, a large number still need low-vision equipment for assistance.
  • The visual aids developed in the prior art for low-vision patients are mainly low-vision aids.
  • Existing visual aids mainly use optical principles to magnify or project the image seen by the low-vision patient, adjusting the imaging distance or imaging angle to help the patient obtain a clear image.
  • Embodiments of the present application provide a display method, apparatus, electronic device, and computer program product, which are mainly used to assist a low-vision patient in effectively perceiving the distance of objects in the environment.
  • An embodiment of the present application provides a display method, including: collecting depth data of an environment; determining a mapping relationship between each depth value and a color combination; generating a pseudo color image of the environment according to the depth data and the mapping relationship; and superimposing and displaying the pseudo color image and an image of the environment.
  • An embodiment of the present application further provides a display device, including: a data acquisition module, configured to collect depth data of an environment; a color mapping module, configured to determine a mapping relationship between each depth value and a color combination; a pseudo color image generating module, configured to generate a pseudo color image of the environment according to the depth data and the mapping relationship; and a display module, configured to superimpose and display the pseudo color image and an image of the environment.
  • An embodiment of the present application further provides an electronic device, including: a memory; one or more processors; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the above methods.
  • Embodiments of the present application further provide a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer readable storage medium, the computer program comprising instructions for causing the electronic device to perform the steps of the above methods.
  • In the embodiments of the present application, the depth data of the environment is mapped to obtain a pseudo color image, and the pseudo color image is superimposed on the current environment image for enhanced display. Without substantially affecting the low-vision patient's field of vision, this makes full use of the patient's residual vision and assists them in effectively perceiving the distance of objects in the environment.
  • FIG. 1 is a schematic flow chart showing a display method in Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram showing a mapping relationship between depth values and color combinations in the present application.
  • FIG. 3 is a schematic flowchart diagram of a display method in Embodiment 2 of the present application.
  • FIG. 4 is a schematic diagram showing a mapping relationship between depth values and color combinations in the present application.
  • FIG. 5 is a schematic flowchart diagram of a display method in Embodiment 3 of the present application.
  • FIG. 6 is a schematic diagram showing the relationships between depth values, gray values, and color combinations in the present application.
  • FIGS. 7a-7d are schematic diagrams showing four implementation scenarios in Embodiment 4 of the present application.
  • FIG. 8 is a schematic structural diagram of a display device in Embodiment 5 of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device in Embodiment 6 of the present application.
  • The present application provides a display method that maps depth data of an environment to obtain a pseudo color image and superimposes the pseudo color image on the current environment image for enhanced display. Without substantially affecting the low-vision patient's field of view, this makes full use of the patient's residual vision and assists them in effectively perceiving the distance of objects in the environment.
  • The embodiments of the present application are generally implemented in a low-vision auxiliary device, such as AR (Augmented Reality) glasses, VR (Virtual Reality) glasses, or a helmet for the blind, and may also be implemented in a user's portable device such as a mobile phone or tablet computer.
  • Embodiment 1:
  • FIG. 1 is a schematic flowchart of a display method in Embodiment 1 of the present application. As shown in FIG. 1 , the display method includes:
  • Step 101: Collect depth data of an environment;
  • Step 102: Determine a mapping relationship between each depth value and a color combination;
  • Step 103: Generate a pseudo color image of the environment according to the depth data and the mapping relationship;
  • Step 104: Superimpose and display the pseudo color image and the image of the environment.
  • In step 101, the AR glasses worn by the low-vision patient collect the depth data of the environment.
  • The depth data here may be collected directly by a depth sensor or depth camera mounted on the AR glasses, or calculated from images of the environment ahead captured by a binocular camera mounted on the AR glasses.
  • The depth data may be collected separately and stored in the form of a table or matrix whose entries correspond to positions in the current environment image; the depth data may also be collected along with the environment image, in which case the depth data corresponds to each pixel of the environment image.
  • Limited by the device, the depth data usually has a limit recognition range; for areas beyond the range of the depth sensor, the depth data is usually recorded as 0 or a default value.
  • In step 102, a mapping relationship between each depth value and a color combination is determined.
  • For example, the effective working distance of a depth sensor is usually 0.5 m to 5 m; therefore, in millimeters, the range of the depth data of the environment ahead is [500, 5000], and a mapping relationship between the depth values in this range and color combinations is established.
  • FIG. 2 is a schematic diagram showing a mapping relationship between depth values and color combinations in the present application.
  • As shown in FIG. 2, the depth values have a mapping relationship with combinations of three colors, that is, combinations of red, green, and blue. When the depth value is between [500, 1500], it is mapped to a single red, and the gray value of the red color component varies with the depth value. When the depth value is between [1500, 2500], it is mapped to a combination of red and green, and the color components of red and green vary with the depth value, and so on.
  • It can be seen that over the full range of [500, 5000], the depth value corresponds to a "red-green-blue" gradient depending on its value.
  • Where the depth value has been recorded as 0 or a default value outside the range of [500, 5000], it can be directly mapped to 0, that is, all color components are zero and the gray value is 0, which corresponds to black.
  • In addition to the above example, the depth value may have a linear or non-linear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red, and green; they may also be orange, yellow, cyan, purple, or pink, etc., or a combination of these colors with black or white.
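By way of illustration, the "red-green-blue" gradient over [500, 5000] described above can be sketched as a small function. The linear ramps and the crossover points below are assumptions of this sketch, not the exact curve of FIG. 2; only the black fallback for out-of-range depths follows the text directly.

```python
def depth_to_color(depth_mm, d_min=500, d_max=5000):
    """Map one depth value (mm) to an (R, G, B) triple along a
    red-green-blue gradient; depths outside [d_min, d_max] map to
    black, as for depths recorded as 0 or a default value."""
    if depth_mm < d_min or depth_mm > d_max:
        return (0, 0, 0)                      # out of sensor range -> black
    t = (depth_mm - d_min) / (d_max - d_min)  # normalise to [0, 1]
    if t < 0.5:                               # near half: red fades into green
        g = round(2 * t * 255)
        return (255 - g, g, 0)
    b = round(2 * (t - 0.5) * 255)            # far half: green fades into blue
    return (0, 255 - b, b)
```

Under this sketch, the nearest valid depth is pure red and the farthest pure blue, with green in between, matching the gradient direction described for FIG. 2.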
  • The mapping relationship between depth values and color combinations may be preset or dynamically changed according to environmental conditions such as brightness and lighting.
  • Optionally, before step 102, the method further includes: acquiring current user information; and in step 102, determining the mapping relationship between each depth value and a color combination according to the user information.
  • Each low-vision patient can be matched in advance with a suitable combination of colors and its mapping to depth values, determining the color combinations each user can sensitively distinguish and the mapping manner the user is accustomed to.
  • For example, the mapping range of blue in the color combination can be appropriately expanded for a user who distinguishes blue well; for patients who cannot distinguish or do not adapt to gradient colors, a step-like mapping can be adopted, that is, each depth value range is mapped to a single color, with a clear boundary between the colors of adjacent depth ranges.
  • In use, the device can identify the current user, obtain the user information, and match the corresponding mapping relationship between depth values and color combinations for the subsequent pseudo color conversion processing.
  • The identification may be performed through a manual menu selection by the user, or through biometric recognition such as the user's voice or iris.
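Matching an identified user to their pre-configured mapping can be as simple as a profile lookup. The profile fields and user identifiers below are hypothetical, added only to illustrate the matching step:

```python
# Hypothetical per-user profiles prepared in advance: which colours the
# user distinguishes sensitively and whether a step-like mapping is
# needed instead of a gradient.
USER_PROFILES = {
    "user_a": {"colors": ("blue", "green"), "style": "gradient"},
    "user_b": {"colors": ("red", "green", "blue"), "style": "step"},
}

DEFAULT_PROFILE = {"colors": ("red", "green", "blue"), "style": "gradient"}

def mapping_for_user(user_id):
    """Return the mapping parameters matched to the identified user,
    falling back to a default gradient mapping for unknown users."""
    return USER_PROFILES.get(user_id, DEFAULT_PROFILE)
```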
  • The order of step 101 and step 102 is not limited; it suffices that the depth data of the environment is collected and the mapping relationship between each depth value and a color combination is determined before step 103.
  • In step 103, a pseudo color image of the environment is generated according to the collected environmental depth data and the mapping relationship between depth values and color combinations.
  • Each depth value in the collected depth data of the environment is converted into its corresponding color according to the mapping relationship; once the conversion has filled in the corresponding color at each position of the current environment image, the pseudo color image of the current environment is obtained.
  • The various colors in the pseudo color image correspond to the distances from objects in the environment to the user's depth sensor. If a place in the pseudo color image is black, it may indicate that the distance there has exceeded the range of the depth sensor.
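Applied per pixel, the conversion of a whole depth map into a pseudo colour image might look like the NumPy sketch below. It uses the same assumed red-green-blue ramp as before; only the convention that invalid depths become black comes from the text:

```python
import numpy as np

def pseudo_color_image(depth_map, d_min=500, d_max=5000):
    """Convert a 2-D depth map (mm) into an H x W x 3 uint8 pseudo colour
    image: a red-green-blue ramp over [d_min, d_max], black elsewhere."""
    depth = depth_map.astype(np.float64)
    valid = (depth >= d_min) & (depth <= d_max)
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    near = t < 0.5
    rgb = np.zeros(depth.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.where(near, 255 - 2 * t * 255, 0)                    # red
    rgb[..., 1] = np.where(near, 2 * t * 255, 255 - 2 * (t - 0.5) * 255)  # green
    rgb[..., 2] = np.where(near, 0, 2 * (t - 0.5) * 255)                  # blue
    rgb[~valid] = 0        # depths recorded as 0/default stay black
    return rgb
```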
  • In step 104, the pseudo color image of the environment is superimposed and displayed with the image of the environment.
  • For a device with transparent lenses, such as AR glasses, the user sees the environment image through the transparent lens, and the pseudo color image processed in real time is displayed on the transparent lens, superimposed on the current environment image according to their positional correspondence.
  • For a device without transparent lenses, the current environment image can be acquired synchronously and superimposed with the pseudo color image for display.
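For the case of a synchronously captured environment image, the superposition could be a simple alpha blend. The blending weight and the choice to leave black (out-of-range) pixels showing the plain environment image are assumptions of this sketch:

```python
import numpy as np

def overlay(env_image, pseudo_color, alpha=0.5):
    """Blend the pseudo colour image over the synchronously captured
    environment image; pixels that are black in the pseudo colour image
    (out-of-range depth) keep the plain environment image."""
    env = env_image.astype(np.float64)
    pc = pseudo_color.astype(np.float64)
    colored = pc.any(axis=-1, keepdims=True)   # mask of non-black pixels
    out = np.where(colored, (1 - alpha) * env + alpha * pc, env)
    return out.astype(np.uint8)
```

A smaller `alpha` keeps more of the environment image visible; an optical see-through device would instead render only the pseudo colour layer on the lens.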
  • In this embodiment, the depth data of the environment is mapped to obtain a pseudo color image, and the pseudo color image is superimposed on the current environment image for enhanced display; without substantially affecting the low-vision patient's vision, this makes full use of the patient's residual vision and assists them in effectively sensing the distance of objects in the environment. For different patient users, different mappings between depth values and color combinations can be tuned to generate pseudo color images better suited to each user's viewing.
  • Embodiment 2:
  • FIG. 3 is a schematic flowchart of a display method in Embodiment 2 of the present application. As shown in FIG. 3, the display method includes:
  • Step 201: Collect depth data of an environment;
  • Step 202: Determine a mapping relationship between each depth value and a color combination within a preset range;
  • Step 203: Generate a pseudo color image of the environment according to the depth data of the environment and the mapping relationship within the preset range;
  • Step 204: Superimpose and display the pseudo color image and the image of the environment.
  • For the implementation of step 201, reference may be made to the description of step 101 in Embodiment 1; in step 201, the depth data of the environment is collected.
  • In step 202, a mapping relationship between each depth value and a color combination within a preset range is determined, where the preset range falls within the limit collection range of the depth data and is smaller than that limit range.
  • The significance of the preset range of depth values is that the user can restrict the subsequent pseudo color processing to a distance range of interest.
  • For example, the depth values can be limited to the range corresponding to 1 m to 2 m.
  • If the limit range of the depth sensor is 0.5 m to 5 m, that is, the limit range of the depth data of the environment ahead is [500, 5000] in millimeters, the user can customize a preset range that lies within [500, 5000] and is smaller than it; for example, [1000, 2000] can be selected.
  • FIG. 4 is a schematic diagram showing a mapping relationship between depth values and color combinations in the present application.
  • As shown in FIG. 4, within the preset range, that is, [1000, 2000], the depth values have a mapping relationship with combinations of the three colors red, green, and blue.
  • It can be seen that within the preset range of [1000, 2000], the depth value corresponds to a "red-green-blue" gradient depending on its value.
  • Where the depth value has been recorded as 0 or a default value outside the range of [1000, 2000], it can be directly mapped to 0, that is, all color components are zero and the gray value is 0, corresponding to black.
  • Compared with the mapping manner of FIG. 2, this embodiment completes the "red-green-blue" gradient within the smaller depth value range of [1000, 2000]. Objects 1 m and 1.5 m from the user can thus be easily distinguished by the difference between red and green, whereas in Embodiment 1 objects 1 m and 1.5 m away are both displayed essentially in red, with little color difference. It can be seen that, once the mapping manner to the color combination is determined, the smaller the preset range of depth values, the more finely objects at different distances within the range can be distinguished by different colors. When the user performs close-range fine operations, the depth range can be made even smaller, so that low-vision users can accurately distinguish subtle distance differences between objects within a certain range.
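Stretching the gradient over a user-chosen preset range can be sketched by recomputing the same illustrative ramp over [d_lo, d_hi]; the ramp shape is again an assumption, while the black fallback for depths outside the range follows the text:

```python
def color_in_preset_range(depth_mm, d_lo=1000, d_hi=2000):
    """Map depths inside the preset range [d_lo, d_hi] onto a full
    red-green-blue ramp; everything outside the range becomes black."""
    if not d_lo <= depth_mm <= d_hi:
        return (0, 0, 0)                   # outside the range of interest
    t = (depth_mm - d_lo) / (d_hi - d_lo)  # normalise within the preset range
    if t < 0.5:
        g = round(2 * t * 255)
        return (255 - g, g, 0)
    b = round(2 * (t - 0.5) * 255)
    return (0, 255 - b, b)
```

With the preset range [1000, 2000], depths of 1000 mm and 1500 mm map to pure red and pure green respectively, whereas under the full range [500, 5000] both would land in the red portion of the ramp.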
  • In addition to the above example, each depth value within the preset range may have a linear or non-linear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red, and green; they may also be orange, yellow, cyan, purple, or pink, etc., or a combination of these colors with black or white.
  • The mapping relationship between depth values and color combinations within the preset range may be preset or dynamically changed according to environmental conditions such as brightness and lighting.
  • Optionally, before step 202, the method further includes: acquiring current user information; and in step 202, determining the mapping relationship between each depth value and a color combination according to the user information.
  • For this step, reference may be made to the related description of step 102 in Embodiment 1.
  • The order of step 201 and step 202 is not limited; it suffices that the depth data of the environment is collected and the mapping relationship between depth values and color combinations within the preset range is determined before step 203.
  • In step 203, a pseudo color image of the environment is generated according to the collected depth data of the environment and the mapping relationship between each depth value and a color combination within the preset range.
  • Each depth value in the collected depth data of the environment is converted into its corresponding color according to the mapping relationship; once the conversion has filled in the corresponding color at each position of the current environment image, the pseudo color image of the current environment is obtained. The various colors in the pseudo color image correspond to the distances from objects in the environment to the user's depth sensor, and the colored portion of the generated pseudo color image is the image portion of objects falling within the preset depth value range, that is, the objects within the distance range the user cares most about.
  • If a place in the pseudo color image is black, it may indicate that the distance there has exceeded the range of the depth sensor, or that the distance is not within the preset range specified by the user.
  • In step 204, the pseudo color image of the environment is superimposed and displayed with the image of the environment.
  • For a device with transparent lenses, the user sees the environment image through the transparent lens, and the pseudo color image processed in real time is displayed on the transparent lens, superimposed on the current environment image according to their positional correspondence.
  • For a device without transparent lenses, the current environment image can be acquired synchronously and superimposed with the pseudo color image for display.
  • In this embodiment, the depth data of the environment within the preset depth value range is mapped to obtain a pseudo color image, and the pseudo color image is superimposed on the current environment image for enhanced display, without substantially affecting the low-vision patient's vision.
  • This embodiment can focus on the distance range the user cares most about; once the mapping manner to the color combination is determined, the smaller the preset depth value range, the more finely objects at different distances within the range can be distinguished by different colors.
  • Embodiment 3:
  • FIG. 5 is a schematic flowchart of a display method in Embodiment 3 of the present application. As shown in FIG. 5, the display method includes:
  • Step 301: Collect depth data of the environment, and generate a grayscale image according to the depth data of the environment;
  • Step 302: Determine a mapping relationship between each depth value and a color combination, and determine a mapping relationship between each gray value and a color combination according to the mapping relationship between depth values and color combinations;
  • Step 303: Generate a pseudo color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between gray values and color combinations;
  • Step 304: Superimpose and display the pseudo color image and the image of the environment.
  • In step 301, the AR glasses worn by the low-vision patient collect the depth data of the environment, and a grayscale image of the current environment is generated according to the depth data.
  • The depth data here may be collected directly by a depth sensor or depth camera mounted on the AR glasses, or calculated from images of the environment ahead captured by a binocular camera mounted on the AR glasses.
  • The depth data may be collected separately and stored in the form of a table or matrix whose entries correspond to positions in the current environment image; the depth data may also be collected along with the environment image, in which case the depth data corresponds to each pixel of the environment image.
  • A grayscale image divides the range between white and black into several grades according to a logarithmic relationship; for example, the grayscale may be divided into 256 levels or 65536 levels.
  • When the grayscale image is generated, a mapping relationship between depth values and gray values is usually established. Limited by the device, the depth data usually has a limit recognition range, and depth data beyond the range the depth sensor can identify is usually recorded as 0 or a default value.
  • For example, the limit acquisition range of the depth values is scaled to gray values in [0, 255], and depth values outside the limit range correspond to a gray value of 0 or 255.
  • The depth values and gray values may have a linear or non-linear, continuous or discrete mapping relationship; short distances may be mapped to white and long distances to black, or short distances to black and long distances to white.
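The depth-to-grey scaling might be realized as below. Linear scaling and the near-maps-to-white choice are just one of the options the text allows, and forcing out-of-range depths to grey 0 is one of the two conventions mentioned:

```python
import numpy as np

def depth_to_gray(depth_map, d_min=500, d_max=5000, near_is_white=True):
    """Scale depths within the sensor's limit range [d_min, d_max]
    linearly onto grey values [0, 255]; out-of-range depths are forced
    to 0, as for depths recorded as 0 or a default value."""
    depth = depth_map.astype(np.float64)
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    gray = t * 255
    if near_is_white:
        gray = 255 - gray                  # short distance -> white
    gray = gray.astype(np.uint8)
    gray[(depth < d_min) | (depth > d_max)] = 0
    return gray
```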
  • In step 302, a mapping relationship between each depth value and a color combination is determined, and a mapping relationship between each gray value and a color combination is determined according to the mapping relationship between depth values and color combinations.
  • For example, the effective working distance of a depth sensor is usually 0.5 m to 5 m, so in millimeters the range of the depth data of the environment ahead is [500, 5000], and a mapping relationship between the depth values in this range and color combinations can be established. Since a mapping relationship between depth values and gray values usually already exists from step 301, the mapping relationship between gray values and color combinations can be determined from the depth-to-color mapping together with the depth-to-gray mapping.
  • FIG. 6 is a schematic diagram showing the relationships between depth values, gray values, and color combinations in the present application.
  • As shown in FIG. 6, the depth values have a mapping relationship with combinations of the three colors red, green, and blue. When the depth value is between [500, 1500], it is mapped to a single red, and the gray value of the red color component varies with the depth value. When the depth value is between [1500, 2500], it is mapped to a combination of red and green, and the color components of red and green vary with the depth value, and so on.
  • It can be seen that over the full range of [500, 5000], the depth value corresponds to a "red-green-blue" gradient depending on its value. Where the depth value has been recorded as 0 or a default value outside the range of [500, 5000], it can be directly mapped to 0, that is, all color components are zero and the gray value is 0, which corresponds to black.
  • Combining this with the mapping relationship between the depth values on the abscissa and the gray values, the mapping relationship between gray values and color combinations can be further determined: as the gray value varies over [0, 255], it corresponds to a "blue-green-red" gradient.
  • In addition to the above example, the depth value may have a linear or non-linear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red, and green; they may also be orange, yellow, cyan, purple, or pink, etc., or a combination of these colors with black or white. Accordingly, the gray values can also be combined with different colors to form different mapping modes.
  • Optionally, before step 302, the method further includes: acquiring current user information; and in step 302, determining the mapping relationship between each depth value and a color combination according to the user information, and determining the mapping relationship between each gray value and a color combination according to the mapping relationship between depth values and color combinations.
  • Each low-vision patient can be matched in advance with a suitable combination of colors and its mapping to depth values, determining the color combinations each user can sensitively distinguish and the mapping manner the user is accustomed to.
  • For example, the mapping range of blue in the color combination can be appropriately expanded for a user who distinguishes blue well; for patients who cannot distinguish or do not adapt to gradient colors, a step-like mapping can be adopted, that is, each depth value range is mapped to a single color, with a clear boundary between the colors of adjacent depth ranges.
  • In use, the device can identify the current user, obtain the user information, and match the corresponding user-specific mapping relationship between depth values and color combinations; the user-specific mapping relationship between gray values and color combinations is then further determined from it for the subsequent pseudo color conversion processing.
  • The order of steps 301 and 302 is not limited; it suffices that, before step 303, the depth data of the environment is collected, the grayscale image is generated, the mapping relationship between depth values and color combinations is determined, and the mapping relationship between each gray value and a color combination is determined.
  • In step 303, a pseudo color image of the environment is generated according to the grayscale image generated from the depth data and the mapping relationship between gray values and color combinations.
  • Each gray value in the grayscale image generated from the collected depth data of the environment is converted into its corresponding color according to the mapping relationship between gray values and color combinations; once the conversion of the grayscale image of the current environment is complete, the pseudo color image of the current environment is obtained. The various colors in the pseudo color image correspond to the distances from objects in the environment to the user's depth sensor; if a place in the pseudo color image is black, it may indicate that the distance there is beyond the range of the depth sensor.
  • In this embodiment, the mapping relationship between gray values and color combinations is derived from the mapping relationship between depth values and color combinations, so grayscale images generated in intermediate steps can be used directly for pseudo color processing.
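Because every grey value corresponds to a colour, the pseudo colour step reduces to a 256-entry lookup table, which is also how such conversions usually plug into existing image pipelines. The blue-green-red ramp and the convention that grey 0 stays black are illustrative assumptions consistent with the near-maps-to-white scaling discussed above:

```python
import numpy as np

# 256-entry grey -> RGB lookup table: grey 0 stays black (depths recorded
# as 0/default), grey 1..255 sweeps blue -> green -> red.
LUT = np.zeros((256, 3), dtype=np.uint8)
t = np.linspace(0.0, 1.0, 255)                           # grey values 1..255
LUT[1:, 2] = np.where(t < 0.5, 255 - 2 * t * 255, 0)                   # blue fades out
LUT[1:, 1] = np.where(t < 0.5, 2 * t * 255, 255 - 2 * (t - 0.5) * 255) # green peaks mid
LUT[1:, 0] = np.where(t < 0.5, 0, 2 * (t - 0.5) * 255)                 # red fades in

def pseudo_color_from_gray(gray_img):
    """Pseudo-colour a grey image in a single indexing operation."""
    return LUT[gray_img]
```

With near objects rendered as high (white) grey values, high grey indexes into red, so nearby objects again appear red, matching FIG. 6.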
  • In step 304, the pseudo color image of the environment is superimposed and displayed with the image of the environment; for the implementation, reference may be made to the descriptions of step 104 and step 204 in Embodiments 1 and 2.
  • Alternatively, steps 302 and 303 of this embodiment may be:
  • Step 302: Determine a mapping relationship between each depth value and a color combination within a preset range, and determine a mapping relationship between each gray value and a color combination in the corresponding range according to the mapping relationship between depth values and color combinations within the preset range;
  • Step 303: Generate a pseudo color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between each gray value and a color combination in the corresponding range.
  • In this variant of step 302, the mapping relationship between each depth value and a color combination within the preset range is first determined, where the preset range falls within the limit collection range of the depth data and is smaller than that limit range.
  • The significance of the preset range of depth values is that the user can restrict the subsequent pseudo color processing to a distance range of interest; for example, the depth values can be limited to the range corresponding to 1 m to 2 m.
  • If the limit range of the depth sensor is 0.5 m to 5 m, that is, the limit range of the depth data of the environment ahead is [500, 5000] in millimeters, the user can customize a preset range that lies within [500, 5000] and is smaller than it; for example, [1000, 2000] can be selected.
  • The mapping relationship between depth values and color combinations within the preset range is similar to the mapping relationship between depth values and color combinations over the entire range; refer to FIG. 4. It can be understood that, once the mapping manner to the color combination is determined, the smaller the preset range of depth values, the more finely objects at different distances within the range can be distinguished by different colors. When the user performs close-range fine operations, the depth range can be made even smaller, so that low-vision users can accurately distinguish subtle distance differences between objects within a certain range.
  • Since the mapping between depth values and gray values was determined when the grayscale image was generated, the gray values corresponding to the depth values in the preset range can be determined, and from them the mapping relationship between each gray value in the corresponding range and a color combination. For example, when the preset range of depth values is [1000, 2000], the corresponding gray values lie between 220 and 160.
  • Each depth value within the preset range may have a linear or nonlinear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red and green; they may also be orange, yellow, cyan, purple or pink, or combinations of these colors with black or white.
  • A pseudo color image of the environment is then generated from the grayscale map generated from the depth data and the mapping relationship between each gray value in the corresponding range and a color combination.
  • The gray values in the grayscale image generated from the collected depth data of the environment are converted one by one into their corresponding colors according to the mapping relationship; once the conversion is complete, the pseudo color image of the current environment is obtained. Since gray values outside the corresponding range have no mapped color combination, they are usually recorded as black.
  • In effect, only the portion of the grayscale image corresponding to the preset depth-value range is processed, and when that range is narrowed, for example for close-range fine operations, low-vision users can accurately distinguish subtle distance differences among objects within it.
  • In this embodiment, a grayscale image is generated from the depth data of the environment, a pseudo color image is obtained based on the mapping relationship between gray values and color combinations, and the pseudo color image is superimposed on the current environment image for enhanced display.
  • The grayscale image obtained as an intermediate step of this image processing scheme is easy to combine with existing image processing techniques, and the scheme makes full use of the residual vision of low-vision patients and helps them effectively perceive the distance of objects in the environment without affecting their field of vision. In addition, processing can focus on the grayscale portion corresponding to the distance range the user is most concerned about: once the manner of mapping to color combinations is determined, the smaller the preset depth-value range, the smaller the grayscale portion that needs to be processed, and the more finely the various objects in that range can be distinguished by different colors.
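The two-stage pipeline of this embodiment (depth to gray, then gray to color within the interval corresponding to the preset depth range) can be sketched as follows. The application does not give the exact formulas; the inverse-linear gray scaling and the red-to-blue blend below are assumptions, chosen so that the preset depth range [1000, 2000] mm lands near the 220-160 gray interval mentioned above.

```python
def depth_to_gray(d_mm, d_min=500, d_max=5000):
    """Inverse-linear depth-to-gray scaling over the sensor's limit range:
    nearer objects map to higher gray values; out-of-range depths map to 0.
    (Assumed formula; it gives 226 and 170 for 1000 mm and 2000 mm, close
    to the 220-160 interval cited in the text.)"""
    if not d_min <= d_mm <= d_max:
        return 0
    return int(255 * (d_max - d_mm) / (d_max - d_min))

def gray_to_color(g, g_lo=170, g_hi=226):
    """Map a gray value to (R, G, B) only when it lies in the interval
    corresponding to the preset depth range; gray values outside it have
    no mapped color combination and are recorded as black.  The linear
    red-to-blue blend inside the interval is an illustrative choice."""
    if not g_lo <= g <= g_hi:
        return (0, 0, 0)
    t = (g - g_lo) / (g_hi - g_lo)  # position inside the preset gray interval
    return (int(255 * t), 0, int(255 * (1 - t)))
```

Only gray values inside [g_lo, g_hi] produce a color, so exactly the image region belonging to the preset depth range is processed, as described above.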
  • Embodiment 4:
  • FIGS. 7a-7d are schematic diagrams of four implementation scenarios in the fourth embodiment of the present application, in which the user wears AR glasses equipped with a depth sensor.
  • The solid lines drawn from the depth sensor mark the upper and lower limits of the field of view of the user's AR glasses, and the dashed line marks the range of distances the depth sensor can effectively detect; depth values of objects beyond the maximum recognition distance are all 0 or a default value.
  • The right side of each drawing is a schematic diagram of the pseudo color image superimposed for the user in the AR glasses in that scene.
  • Figure 7a shows a schematic diagram of one implementation scenario of the present application, in which the ground in front of the user is flat and unobstructed.
  • At the lower edge A of the field of view, the patient sees a synthesized color of a small amount of green and a small amount of red superimposed on the actual environment image; near the middle of the field of view, below B, blue is superimposed on the actual environment image; above B, the superimposed pseudo color is black because the depth detection range is exceeded, that is, that area of the AR glasses is left unprocessed and transparent.
  • From the color change between A and B in the field of view, the user can recognize the change in distance of the flat ground ahead.
  • FIG. 7b shows a schematic diagram of another implementation scenario of the present application, in which there is a raised step in front of the user.
  • At the lower edge A of the field of view, the patient sees a synthesized color of a small amount of green and a small amount of red superimposed on the actual environment image; in the upper portion of the field of view, below B, blue is superimposed on the actual environment image.
  • Between A and B, the user can make out the dividing lines C and D, i.e. the outline of the step, because the color changes from green to blue between A and C and between D and B, and from blue to green between C and D.
  • Above B, the superimposed pseudo color is black because the depth detection range is exceeded, that is, that area of the AR glasses is left unprocessed and transparent.
  • From the relatively high color boundary line B in the field of view and the color change between A and B, the user can identify the position and distance of the obstacle ahead well.
  • FIG. 7c shows a schematic diagram of another implementation scenario of the present application, in which there is a descending step in front of the user.
  • At the lower edge A of the field of view, the patient sees a synthesized color of a small amount of green and a small amount of red superimposed on the actual environment image; in the lower portion of the field of view, below B, a small amount of green and a small amount of blue are superimposed on the actual environment image. Since the edge of the step does not reach the farthest detectable distance, the display at B has not yet reached dark blue; above B, the superimposed pseudo color is black because the depth detection range is exceeded, that is, that area of the AR glasses is left unprocessed and transparent.
  • The superimposed pseudo color boundary here sits low in the field of view, and at the boundary the color changes directly from blue-green to colorless. After training, the user can tell from the color mapping relationship that there is no object at the distance blue would correspond to at that location, so there may be a step down there.
  • FIG. 7d shows a schematic diagram of another implementation scenario of the present application, in which there is a descending step in front of the user.
  • The user has preset a range of interest of 2 m to 4 m, i.e. the distance range marked by the shading in the figure, and the depth values within the preset range in FIG. 7d are mapped to colors accordingly.
  • At the lower edge A of the field of view, the patient sees red superimposed on the actual environment image; in the lower portion of the field of view, below B, blue is superimposed on the actual environment image; between A and B, the field of view essentially presents the full gradient of the "red-green-blue" color combination. Above B, the superimposed pseudo color is black because the depth detection range is exceeded, that is, that area of the AR glasses is left unprocessed and transparent.
  • Under the full-range mapping, the pseudo color superimposed in the user's field of view here would be mainly green; it is easy to understand that the smaller the preset distance range of interest, the richer the colors that can be displayed within the same field of view, which makes it easier for low-vision patients to distinguish.
  • Embodiment 5:
  • As shown in FIG. 8, the display device 500 includes:
  • the data collection module 501 is configured to collect depth data of the environment
  • a color mapping module 502 configured to determine a mapping relationship between each depth value and a color combination
  • the pseudo color image generating module 503 is configured to generate a pseudo color image of the environment according to the depth data and the mapping relationship;
  • the display module 504 is configured to superimpose and display the pseudo color image and the image of the environment.
  • the color mapping module 502 is configured to determine a mapping relationship between each depth value and a color combination within a preset range
  • the pseudo color image generating module 503 is configured to generate a pseudo color image of the environment according to the depth data of the environment and the mapping relationship within a preset range.
  • the apparatus 500 further includes:
  • a grayscale map generating module 505 configured to generate a grayscale map according to the depth data of the environment
  • the color mapping module 502 is further configured to determine a mapping relationship between each gray value and a color combination according to a mapping relationship between the depth values and the color combination;
  • the pseudo color image generation module 503 is configured to generate a pseudo color image of the environment according to a grayscale map generated by the depth data and a mapping relationship between the grayscale values and color combinations.
  • the color mapping module 502 is configured to determine a mapping relationship between each depth value and a color combination within a preset range and, according to the mapping relationship between the depth values and the color combinations within the preset range, to determine a mapping relationship between each gray value and a color combination in the corresponding range;
  • the pseudo color image generation module 503 is configured to generate a pseudo color image of the environment according to a grayscale map generated by the depth data and a mapping relationship between each grayscale value and a color combination in the corresponding range.
  • the apparatus 500 further includes:
  • the user information obtaining module 506 is configured to acquire current user information.
  • the color mapping module 502 is configured to determine a mapping relationship between each depth value and a color combination according to the user information.
  • The electronic device 600 includes: a memory 601, one or more processors 602, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the various steps of any of the above methods.
  • An embodiment of the present application further provides a computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer readable storage medium, the computer program comprising instructions for causing the electronic device to perform each of the steps of any of the above methods.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • The present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • These computer program instructions can also be stored in a computer readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Eye Examination Apparatus (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display method and device, an electronic device, and a computer program product, the method comprising: collecting depth data of an environment; determining mapping between each depth value and a color combination; generating a pseudo color image of the environment according to the depth data and the mapping; and superimposing the pseudo color image with an image of the environment and displaying the same. According to the present application, the depth data of the environment is mapped to obtain a pseudo color image, and the pseudo color image is superimposed with an image of a current environment to enhance display, so that the residual vision of a patient with poor eyesight may be fully utilized without substantially affecting the field of vision of the patient with poor eyesight, and the patient is thus helped to effectively sense the distance of objects in the environment.

Description

Display method, device, electronic device and computer program product

Technical field

The present application relates to the field of blind-guidance technology, and in particular to a display method, device, electronic device and computer program product.

Background art
Visual disability is a serious public health, social and economic problem worldwide, and the main causes of blindness differ across economic regions: in economically developed regions they are age-related macular degeneration, diabetic retinopathy and the like, while in developing countries senile cataracts and infectious eye diseases predominate. Patients with visual disability are usually divided into blind and low-vision patients; the blind account for only about a quarter of all visually disabled patients, while low-vision patients account for nearly three quarters. According to statistics, there are 40-45 million blind people and about 135 million low-vision patients worldwide, with about 7 million blind people and 21 million low-vision patients added each year. The assistance and rehabilitation of patients with visual disability, especially the very large number of low-vision patients, thus face serious challenges. Although some low-vision patients can recover or improve their vision through surgery and refractive correction, a large number of low-vision patients still need low-vision devices for assistance.

Because low vision has not yet been widely recognized by society, patients with residual vision are often lost in the crowd and treated as blind. The visual aids developed in the prior art for low-vision patients are mainly low-vision aids. Existing aids mainly use optical principles to magnify or project the image seen by the low-vision patient, and to adjust the imaging distance or imaging angle, so as to help the patient obtain a clear image.

The shortcomings of the prior art are as follows:

Existing visual aids generally have a large impact on the user's field of view, cannot make good use of the user's residual vision, and cannot help low-vision patients effectively perceive the distance of objects in the environment through images.
Summary of the invention

Embodiments of the present application provide a display method, device, electronic device and computer program product, mainly used to help low-vision patients effectively perceive the distance of objects in the environment.

In one aspect, an embodiment of the present application provides a display method, the method comprising: collecting depth data of an environment; determining a mapping relationship between each depth value and a color combination; generating a pseudo color image of the environment according to the depth data and the mapping relationship; and superimposing and displaying the pseudo color image and an image of the environment.

In another aspect, an embodiment of the present application provides a display device, the device comprising: a data collection module, configured to collect depth data of an environment; a color mapping module, configured to determine a mapping relationship between each depth value and a color combination; a pseudo color image generating module, configured to generate a pseudo color image of the environment according to the depth data and the mapping relationship; and a display module, configured to superimpose and display the pseudo color image and an image of the environment.

In another aspect, an embodiment of the present application provides an electronic device, the electronic device comprising: a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the various steps of the above method.

In another aspect, an embodiment of the present application provides a computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer readable storage medium, the computer program comprising instructions for causing the electronic device to perform the various steps of the above method.

The beneficial effects of the embodiments of the present application are as follows:

In the present application, the depth data of the environment is mapped to a pseudo color image, and the pseudo color image is superimposed on the current environment image for enhanced display, which makes full use of the residual vision of low-vision patients and helps them effectively perceive the distance of objects in the environment while leaving their field of view essentially unaffected.
Brief description of the drawings

Specific embodiments of the present application will be described below with reference to the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of the display method in Embodiment 1 of the present application;

FIG. 2 is a schematic diagram of one mapping relationship between depth values and a color combination in the present application;

FIG. 3 is a schematic flowchart of the display method in Embodiment 2 of the present application;

FIG. 4 is a schematic diagram of another mapping relationship between depth values and a color combination in the present application;

FIG. 5 is a schematic flowchart of the display method in Embodiment 3 of the present application;

FIG. 6 is a schematic diagram of a mapping relationship among depth values, gray values and a color combination in the present application;

FIGS. 7a-7d are schematic diagrams of four implementation scenarios in Embodiment 4 of the present application;

FIG. 8 is a schematic structural diagram of the display device in Embodiment 5 of the present application;

FIG. 9 is a schematic structural diagram of the electronic device in Embodiment 6 of the present application.
Detailed description

In order to make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all embodiments, and where no conflict arises, the embodiments and the features of the embodiments in this description may be combined with each other.

During the course of the invention, the inventors noticed that existing visual aids generally have a large impact on the user's field of view, cannot make good use of the user's residual vision, and cannot help low-vision patients effectively perceive the distance of objects in the environment through images.

In view of the above deficiencies, the present application provides a display method that maps depth data of the environment to a pseudo color image and superimposes the pseudo color image on the current environment image for enhanced display, which makes full use of the residual vision of low-vision patients and helps them effectively perceive the distance of objects in the environment while leaving their field of view essentially unaffected.
The embodiments of the present application are typically implemented in a low-vision assistive device, such as AR (Augmented Reality) glasses, VR (Virtual Reality) glasses or a blind-guidance helmet; they may also be implemented in a device the user carries, such as a mobile phone or a tablet computer.

The essence of the technical solutions of the embodiments of the present invention is further clarified below through specific examples.
Embodiment 1:

FIG. 1 is a schematic flowchart of the display method in Embodiment 1 of the present application. As shown in FIG. 1, the display method includes:

Step 101, collecting depth data of an environment;

Step 102, determining a mapping relationship between each depth value and a color combination;

Step 103, generating a pseudo color image of the environment according to the depth data and the mapping relationship;

Step 104, superimposing and displaying the pseudo color image and an image of the environment.
Taking a low-vision patient wearing AR glasses as an example, in step 101 the AR glasses worn by the patient collect depth data of the environment. The depth data here may be acquired directly by a depth sensor or a depth-of-field camera mounted on the AR glasses, or computed from images of the environment ahead captured by, for example, a binocular camera on the AR glasses. The depth data may be collected separately and stored, in the form of a table or a matrix, in correspondence with positions in the current environment image; the depth data may also be collected along with the environment image, in which case each depth value corresponds to a pixel of the environment image.
Because of device limitations, depth data usually has a limit recognition range; for areas beyond that range, which the depth sensor cannot identify, the depth data is usually recorded as 0 or a default value.
In step 102, the mapping relationship between each depth value and a color combination is determined. Taking a depth sensor as an example, its effective working distance is typically 0.5 m to 5 m, so in millimeters the range of the depth data it collects of the environment ahead is [500, 5000], and a mapping relationship between each depth value in this range and a color combination is established.

FIG. 2 is a schematic diagram of one mapping relationship between depth values and a color combination in the present application. As shown in FIG. 2, the depth values map to a combination of three colors, red, green and blue: for depth values in [500, 1500] the mapping is a single red whose red component varies with the depth value; for depth values in [1500, 2500] the mapping is a combination of red and green whose components each vary with the depth value; and so on. Over the full range [500, 5000], the depth values thus correspond to a gradient from red through green to blue. Depth values outside [500, 5000], already recorded as 0 or a default value, can be mapped directly to 0, i.e. all color components are zero and the gray value there is 0, which corresponds to black.
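As a concrete sketch, a red-green-blue depth gradient of this kind could be realized as below. The exact segment boundaries of FIG. 2 (e.g. a pure-red band over [500, 1500]) are not reproduced; this simplified two-segment blend only illustrates the overall near-red to far-blue gradient, with out-of-range depths mapped to black.

```python
def depth_to_color(d, d_min=500, d_max=5000):
    """Map a depth value in millimeters to an (R, G, B) triple along a
    red -> green -> blue gradient; depths outside [d_min, d_max]
    (including the 0 / default value recorded for unmeasurable areas)
    map to black.  Breakpoints are illustrative assumptions."""
    if d < d_min or d > d_max:
        return (0, 0, 0)                      # out of range -> black
    t = (d - d_min) / (d_max - d_min)         # normalize to [0, 1]
    if t < 0.5:                               # near half: red fading into green
        u = t / 0.5
        return (int(255 * (1 - u)), int(255 * u), 0)
    u = (t - 0.5) / 0.5                       # far half: green fading into blue
    return (0, int(255 * (1 - u)), int(255 * u))
```

For example, `depth_to_color(500)` is pure red, the midpoint of the range is pure green, and `depth_to_color(5000)` is pure blue.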
In practice, the depth values may have a linear or nonlinear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red and green; they may also be orange, yellow, cyan, purple or pink, or combinations of these colors with black or white.

In addition, the mapping relationship between depth values and color combinations may be preset, or it may change dynamically according to environmental conditions such as brightness and lighting.
In some embodiments, before step 102 the method further includes acquiring current user information, and in step 102 the mapping relationship between each depth value and a color combination is determined according to that user information.

Low-vision patients often have impaired color discrimination as well. For example, some patients can only distinguish red from blue and perceive yellow and cyan as identical to gray; others can tell that red, yellow, cyan and blue differ without knowing which color is which; some users can clearly distinguish gradients of certain colors, while others can only distinguish color combinations with sharp boundaries. Each low-vision patient can therefore be matched in advance with a suitable combination of colors and a suitable way of mapping them to depth values, determining the color combinations each user can sensitively distinguish and the mapping style the user is accustomed to. For example, for a patient who cannot distinguish red, red is excluded from the color combination; for a user who is very sensitive to gradations of blue, the portion of the mapping range occupied by blue can be enlarged appropriately; and for a patient who cannot distinguish, or is uncomfortable with, gradient colors, a staircase mapping is used, i.e. each depth-value range maps to a single color with a clear boundary between the colors of adjacent ranges.

After the mapping relationship between depth values and color combinations has been tuned and set for each user in advance, once a user puts on the AR glasses, the glasses can identify the current user, obtain the user information, match the corresponding mapping relationship, and carry out the subsequent pseudo color conversion. The identification can be done through a manual menu selection by the user, or through biometric recognition such as the user's voice or iris.
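For users who cannot distinguish, or are uncomfortable with, gradient colors, the staircase mapping described above might look like the sketch below. The band edges and the two-color profile are hypothetical per-user settings, not values from the application.

```python
def stepped_depth_to_color(d, bands):
    """Staircase mapping: each depth band maps to one solid color with a
    sharp boundary between adjacent bands; depths outside all bands map
    to black (no color combination)."""
    for (lo, hi), color in bands:
        if lo <= d < hi:
            return color
    return (0, 0, 0)

# hypothetical profile for a user who distinguishes only red and blue
user_bands = [((500, 2000), (255, 0, 0)),   # near objects: solid red
              ((2000, 5000), (0, 0, 255))]  # far objects: solid blue
```

A per-user profile of this form could be stored alongside the user information and selected after the user is identified.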
It should be noted that steps 101 and 102 may be performed in either order; it is only necessary that the depth data of the environment has been collected and the mapping relationship between each depth value and a color combination has been determined before step 103.
In step 103, a pseudo color image of the environment is generated according to the collected environment depth data and the mapping relationship between depth values and color combinations.

The depth values in the collected depth data of the environment are converted one by one into the colors they map to; once the colors corresponding to every position of the current environment image have been produced, the pseudo color image of the current environment is obtained. Each color in the pseudo color image indicates the distance from the object at that position in the environment to the user's depth sensor; if some part of the pseudo color image is black, this may indicate that the distance there exceeds the range of the depth sensor.
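Step 103 can be sketched as a pixel-by-pixel application of the mapping over the depth map; the small depth map and the simplified red-to-blue mapping below are illustrative only, and any per-user depth-to-color function could be substituted.

```python
def demo_mapping(d):
    """Simplified red-to-blue gradient; out-of-range depths (including
    the 0 recorded for unmeasurable areas) become black."""
    if not 500 <= d <= 5000:
        return (0, 0, 0)
    t = (d - 500) * 255 // 4500
    return (255 - t, 0, t)

def make_pseudo_color(depth_map, mapping):
    """Step 103: convert a 2D depth map (mm) into a pseudo color image
    by applying the depth-to-color mapping pixel by pixel."""
    return [[mapping(d) for d in row] for row in depth_map]

# 0 marks positions the depth sensor could not measure
depth_map = [[600, 2750, 0],
             [4900, 1200, 5200]]
pseudo = make_pseudo_color(depth_map, demo_mapping)
```

In the result, near pixels are predominantly red, far pixels predominantly blue, and the unmeasurable positions are black.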
在步骤104中,将所述环境的伪彩色图像与所述环境的图像叠加显示。当用户佩戴的为AR眼镜时,用户能够通过透明镜片看到环境图像,此时根据所述环境的伪彩色图像与环境深度数据的对应关系,或者所述环境的伪彩色图像与环境图像的对应关系在透明镜片上叠加当前的环境图像实时显示处理得到的伪彩色图像。当用户佩戴的为VR眼镜、导盲头盔或者使用其他终端时,可同步采集当前环境图像,并与伪彩色图像叠加后进行显示。In step 104, a pseudo color image of the environment is superimposed with an image of the environment. When the user wears the AR glasses, the user can see the environment image through the transparent lens. At this time, according to the corresponding relationship between the pseudo color image of the environment and the environment depth data, or the corresponding color image of the environment and the environment image The relationship superimposes the current environmental image on the transparent lens to display the pseudo color image processed in real time. When the user wears VR glasses, a blind guide helmet or other terminals, the current environment image can be acquired synchronously and superimposed with the pseudo color image for display.
由于人眼对色彩的辨别能力相对较强,可分辨出上千种颜色,在原环境图像的基础上叠加对应的伪彩色图像能够使患者更有效的提取图形信息,更易辨认环境图像细节,能够更敏感、形象的感知和识别各种距离的物体。Because the human eye has a relatively strong ability to distinguish colors, thousands of colors can be distinguished. Superimposing corresponding pseudo-color images on the basis of the original environment image enables the patient to extract graphic information more effectively, and more easily recognize the details of the environmental image. Sensitive, visual perception and recognition of objects at various distances.
In this embodiment, the depth data of the environment is mapped to a pseudo-color image, and the pseudo-color image is superimposed on the current environment image for enhanced display. This makes full use of the residual vision of a low-vision patient, with essentially no impact on the patient's field of view, and helps the patient effectively perceive the distances of objects in the environment. For different patient users, different mapping relationships between depth values and color combinations can be tuned, so as to generate a distinctive pseudo-color image better suited to each user's viewing.
Embodiment 2:
FIG. 3 is a schematic flowchart of the display method in Embodiment 2 of the present application. As shown in FIG. 3, the display method includes:
Step 201: collect depth data of an environment;
Step 202: determine a mapping relationship between each depth value within a preset range and a color combination;
Step 203: generate a pseudo-color image of the environment according to the depth data of the environment within the preset range and the mapping relationship;
Step 204: superimpose the pseudo-color image on the image of the environment for display.
For the implementation of step 201, reference may be made to the description of step 101 in Embodiment 1 above; in step 201, depth data of the environment is collected.
In step 202, a mapping relationship between each depth value within a preset range and a color combination is determined. Here, the preset range falls within the limit acquisition range of the depth data and is smaller than that limit acquisition range.
The significance of the preset range of depth values is that the user can perform the subsequent pseudo-color processing according to the range he or she is interested in. Low-vision patients are usually more interested in obstacles within 2 m of them, so the depth values can be limited to the range corresponding to 1 m to 2 m. For example, if the limit acquisition range of the depth sensor is 0.5 m to 5 m, then in millimeters the limit range of the depth data it collects for the environment ahead is [500, 5000]. The user can then define a preset range that lies within [500, 5000] and is smaller than it, for example [1000, 2000].
The mapping relationship between each depth value within the preset range and the color combination is similar to the mapping relationship between each depth value over the full range and the color combination. FIG. 4 is a schematic diagram of one mapping relationship between depth values and a color combination in the present application. As shown in FIG. 4, each depth value within the preset range, i.e. [1000, 2000], has a mapping relationship with a combination of three colors, namely red, green and blue; it can be seen that within the preset range [1000, 2000], the depth value corresponds to a "red-green-blue" gradient that varies with the depth value. Depth values outside [1000, 2000] that have been recorded as 0 or as the default value can be mapped directly to 0, i.e. every color component is zero and the gray value there is 0, corresponding to black.
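A "red-green-blue" gradient over a user-chosen preset range, with everything outside the range mapped to black, might be sketched as below. This is a simple piecewise-linear interpretation; the exact curve of FIG. 4 is not reproduced here.

```python
def preset_range_to_rgb(depth_mm, lo=1000, hi=2000):
    """Piecewise-linear 'red-green-blue' gradient over a user-chosen preset
    depth range [lo, hi] in mm; depths outside the range map to black.
    (An illustrative sketch, not the exact curve of FIG. 4.)"""
    if not (lo <= depth_mm <= hi):
        return (0, 0, 0)                      # outside the preset range -> black
    t = (depth_mm - lo) / (hi - lo)           # normalise to [0, 1]
    if t <= 0.5:                              # first half of the range: red -> green
        s = t / 0.5
        return (round(255 * (1 - s)), round(255 * s), 0)
    s = (t - 0.5) / 0.5                       # second half: green -> blue
    return (0, round(255 * (1 - s)), round(255 * s))
```

With the default [1000, 2000] range, 1 m maps to pure red, 1.5 m to pure green and 2 m to pure blue, which matches the 1 m / 1.5 m distinction discussed next.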
Comparing with FIG. 2 in Embodiment 1 above, this embodiment completes the "red-green-blue" gradient correspondence within the smaller depth-value range [1000, 2000], so objects 1 m and 1.5 m from the user can easily be distinguished by the difference between red and green, whereas in Embodiment 1, objects 1 m and 1.5 m from the user are both displayed essentially in red, with little color difference. It can be seen that once the mapping to the color combination is determined, the smaller the preset range of depth values, the more finely objects at different distances within that range can be distinguished by different colors. When the user performs fine operations at close range, the depth range can be made even smaller, so that a low-vision user can resolve subtle distance differences between objects within a specific distance range with high precision.
Likewise, each depth value within the preset range may have a linear or nonlinear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red and green; they may also be orange, yellow, cyan, purple, pink and so on, or combinations of these colors with black or white.
In addition, the mapping relationship between each depth value within the preset range and the color combination may be preset, or may change dynamically according to environmental conditions such as brightness and lighting.
In some implementations, before step 202, the method further includes: acquiring information about the current user; in step 202, the mapping relationship between each depth value and the color combination is determined according to the user information. For the implementation of this step, reference may be made to the related description of step 102 in Embodiment 1 above.
It should be noted that the order in which step 201 and step 202 are performed is not limited; it suffices that, before step 203, the depth data of the environment has been collected and the mapping relationship between each depth value within the preset range and the color combination has been determined.
In step 203, a pseudo-color image of the environment is generated according to the collected depth data of the environment that falls within the preset range and the mapping relationship between each depth value within the preset range and the color combination.
Each depth value in the collected depth data of the environment that falls within the preset range is converted, according to the mapping relationship, into the color corresponding to that depth value. Once the colors corresponding to every position of the current environment image have been converted in this way, a pseudo-color image of the current environment is obtained. Each color in the pseudo-color image indicates the distance from the object at that position in the environment to the user's depth sensor, and the colored portion of the generated pseudo-color image corresponds to the image portions of objects within the preset depth-value range, i.e. the image portions of objects within the distance range the user cares about most. If a region of the pseudo-color image is black, this may indicate that the distance at that position exceeds the range of the depth sensor, or that it does not fall within the preset range specified by the user.
In step 204, the pseudo-color image of the environment is superimposed on the image of the environment for display. When the user wears AR glasses, the user can see the environment image through the transparent lenses; in this case, according to the correspondence between the pseudo-color image of the environment and the depth data of the environment, or the correspondence between the pseudo-color image of the environment and the environment image, the pseudo-color image obtained by the processing is displayed in real time on the transparent lenses, superimposed on the current environment image. When the user wears VR glasses or a blind-guide helmet, or uses another terminal, the current environment image can be captured synchronously and displayed after being superimposed with the pseudo-color image.
Because the human eye has a relatively strong ability to discriminate colors and can distinguish thousands of them, superimposing the corresponding pseudo-color image on the original environment image enables the patient to extract graphical information more effectively, recognize details of the environment image more easily, and perceive and identify objects at various distances more sensitively and vividly.
In this embodiment, the depth data of the environment within the preset depth-value range is mapped to a pseudo-color image, and the pseudo-color image is superimposed on the current environment image for enhanced display. This makes full use of the residual vision of a low-vision patient, with essentially no impact on the patient's field of view, and helps the patient effectively perceive the distances of objects in the environment; for different patient users, different mapping relationships between depth values and color combinations can be tuned, so as to generate a distinctive pseudo-color image better suited to each user's viewing. In addition, this embodiment can focus processing on the part of the distance range the user cares about most: once the mapping to the color combination is determined, the smaller the preset depth-value range, the more finely objects at different distances within that range can be distinguished by different colors.
Embodiment 3:
FIG. 5 is a schematic flowchart of the display method in Embodiment 3 of the present application. As shown in FIG. 5, the display method includes:
Step 301: collect depth data of an environment, and generate a grayscale image according to the depth data of the environment;
Step 302: determine a mapping relationship between each depth value and a color combination, and determine a mapping relationship between each gray value and the color combination according to the mapping relationship between each depth value and the color combination;
Step 303: generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between each gray value and the color combination;
Step 304: superimpose the pseudo-color image on the image of the environment for display.
Taking a low-vision patient wearing AR glasses as an example, in step 301 the AR glasses worn by the low-vision patient collect depth data of the environment and generate a grayscale image of the current environment according to the depth data.
The depth data here may be collected directly by a depth sensor or depth-of-field camera mounted on the AR glasses, or may be computed from images of the environment ahead captured by a binocular camera or the like mounted on the AR glasses. The depth data may be collected separately and stored in the form of a table or matrix corresponding to positions of the current environment image; the collection of the depth data may also accompany the collection of the environment image, in which case the depth data corresponds to each pixel of the environment image.
A grayscale image divides the range from white to black into a number of levels on a logarithmic scale; grayscale images may have 256 levels, 65536 levels and so on. Taking a 256-level grayscale image as an example, when generating the grayscale image, a mapping relationship between depth values and gray values is usually established. Owing to device limitations, the depth data usually has a limit recognition range; beyond that range, i.e. where the depth sensor cannot obtain a reading, the depth data there is usually recorded as 0 or as a default value. The limit acquisition range of the depth values is scaled onto the gray values [0, 255], and the gray values corresponding to depth values outside the limit range are all 0 or 255. Meanwhile, depth values and gray values may be in a linear or nonlinear, continuous or discrete mapping relationship; near distances may be mapped to white and far distances to black, or near distances to black and far distances to white.
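The scaling step can be sketched as follows, assuming the linear near-is-white convention (one of the two conventions mentioned above) and the [500, 5000] mm limit range used as an example throughout this application:

```python
SENSOR_MIN, SENSOR_MAX = 500, 5000   # example sensor limit range in mm

def depth_to_gray(depth_mm):
    """Linearly scale the sensor's limit range onto 256 gray levels, with
    near mapped to white (255) and far to black (0). Depths recorded as 0
    or lying outside the limit range become gray 0 here; the opposite
    convention would use 255 instead."""
    if depth_mm < SENSOR_MIN or depth_mm > SENSOR_MAX:
        return 0
    return round(255 * (SENSOR_MAX - depth_mm) / (SENSOR_MAX - SENSOR_MIN))
```

A nonlinear (e.g. logarithmic) curve could be substituted for the linear ramp without changing the rest of the pipeline.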
In step 302, a mapping relationship between each depth value and a color combination is determined, and a mapping relationship between each gray value and the color combination is determined according to the mapping relationship between each depth value and the color combination.
Taking a depth sensor as an example, its effective working distance is typically 0.5 m to 5 m, so in millimeters the range of the depth data it collects for the environment ahead is [500, 5000], and a mapping relationship between each depth value in this range and the color combination can be established. Moreover, since a mapping relationship between depth values and gray values usually already exists from step 301, the mapping relationship between each gray value and the color combination can be determined from the mapping relationship between depth values and the color combination together with the mapping relationship between depth values and gray values.
FIG. 6 is a schematic diagram of one mapping relationship between depth values, gray values and a color combination in the present application. As shown in FIG. 6, the depth value has a mapping relationship with a combination of three colors, namely red, green and blue: when the depth value is in [500, 1500], it maps to pure red, with the intensity of the red component varying with the depth value; when the depth value is in [1500, 2500], it maps to a combination of red and green, with the red and green components each varying with the depth value; and so on. It can be seen that over the full range [500, 5000], the depth value corresponds to a "red-green-blue" gradient that varies with the depth value. Depth values outside [500, 5000] that have been recorded as 0 or as the default value can be mapped directly to 0, i.e. every color component is zero and the gray value there is 0, corresponding to black. From the mapping relationship between the depth values on the horizontal axis and the gray values, the mapping relationship between gray values and the color combination can further be determined; it can be seen that as the gray value varies over [0, 255], it corresponds to a "blue-green-red" gradient.
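The composition described here can be sketched by combining a depth-to-color mapping with the inverse of a depth-to-gray mapping to obtain a gray-to-color table. The piecewise segments of FIG. 6 are simplified below into a symmetric two-segment "red-green-blue" ramp, and a linear near-is-white gray convention is assumed; both are illustrative choices, not the exact curves of the figure.

```python
def depth_to_gray(depth_mm, lo=500, hi=5000):
    # Linear gray mapping over the sensor limit range, near -> white (255).
    return round(255 * (hi - depth_mm) / (hi - lo))

def gray_to_depth(gray, lo=500, hi=5000):
    # Inverse of the gray mapping above (returns a depth in mm).
    return hi - gray * (hi - lo) / 255

def depth_to_rgb(depth_mm, lo=500, hi=5000):
    # Simplified 'red-green-blue' gradient over the full sensor range.
    t = (depth_mm - lo) / (hi - lo)
    if t <= 0.5:
        s = t / 0.5
        return (round(255 * (1 - s)), round(255 * s), 0)
    s = (t - 0.5) / 0.5
    return (0, round(255 * (1 - s)), round(255 * s))

# Compose the two mappings into a 256-entry gray -> color lookup table:
GRAY_TO_RGB = [depth_to_rgb(gray_to_depth(g)) for g in range(256)]
```

As the text states, the composed table runs "blue-green-red" as the gray value rises from 0 to 255, since gray 0 corresponds to the farthest depth and gray 255 to the nearest.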
In fact, the depth value may have a linear or nonlinear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red and green; they may also be orange, yellow, cyan, purple, pink and so on, or combinations of these colors with black or white. The gray values can therefore likewise be combined with different colors to form different mapping schemes.
In some implementations, before step 302, the method further includes: acquiring information about the current user; in step 302, the mapping relationship between each depth value and the color combination is determined according to the user information, and the mapping relationship between each gray value and the color combination is determined according to the mapping relationship between each depth value and the color combination.
In general, low-vision patients also have difficulty distinguishing colors. For example, some patients can only distinguish red from blue, and perceive yellow and cyan as the same as gray; other patients can sense that red, yellow, cyan and blue are four different colors, but do not know which color is which; some users can clearly distinguish gradients of certain colors, while others can only distinguish color combinations with sharp boundaries. Therefore, the combination of colors suitable for each low-vision patient, and the way it maps to depth values, can be matched in advance, determining the combination of colors each user can sensitively distinguish and the mapping scheme the user is accustomed to. For example, for a patient who cannot distinguish red, red is not included in the color combination; for a user who can very sensitively distinguish gradients of blue, the portion of the mapping range occupied by blue in the color combination can be enlarged appropriately; for a patient who cannot distinguish, or is uncomfortable with, gradient colors, a step-like mapping is used, i.e. a given range of depth values maps to a single color, with clear boundaries between the colors of the different depth-value ranges.
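The step-like mapping mentioned for users who cannot follow gradients might look like the sketch below; the band boundaries and the three flat colors are illustrative choices, not values from the application.

```python
def stepwise_rgb(depth_mm):
    """Step-like mapping for users who cannot distinguish gradients: each
    depth band maps to one flat color, with a sharp boundary between bands.
    The bands and colors chosen here are purely illustrative."""
    bands = [
        ((500, 1500), (255, 0, 0)),    # nearest band: solid red
        ((1500, 3000), (0, 255, 0)),   # middle band: solid green
        ((3000, 5000), (0, 0, 255)),   # farthest band: solid blue
    ]
    for (lo, hi), color in bands:
        if lo <= depth_mm < hi:
            return color
    return (0, 0, 0)                   # no reading / out of range -> black
```

Per-user tuning then reduces to editing the `bands` table: dropping colors the user cannot see, widening bands of colors the user distinguishes well, and so on.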
After the mapping relationship between depth values and the color combination corresponding to each user has been tuned and set in advance, when the user puts on the AR glasses, the glasses can identify and acquire the user information of the current user, match the user's corresponding mapping relationship between depth values and the color combination, then determine that user's specific mapping relationship between gray values and the color combination from the user-specific mapping relationship between depth values and the color combination, and perform the subsequent pseudo-color conversion accordingly.
It should be noted that the order in which step 301 and step 302 are performed is not limited; it suffices that, before step 303, the depth data of the environment has been collected, the grayscale image has been generated, the mapping relationship between each depth value and the color combination has been determined, and the mapping relationship between each gray value and the color combination has also been determined.
In step 303, a pseudo-color image of the environment is generated according to the grayscale image generated from the depth data and the mapping relationship between each gray value and the color combination.
Each gray value in the grayscale image generated from the collected depth data of the environment is converted, according to the mapping relationship between gray values and the color combination, into the color corresponding to that gray value. Once the colors corresponding to every position of the grayscale image of the current environment have been converted in this way, a pseudo-color image of the current environment is obtained from the grayscale image. Each color in the pseudo-color image indicates the distance from the object at that position in the environment to the user's depth sensor; if a region of the pseudo-color image is black, this may indicate that the distance at that position exceeds the range of the depth sensor.
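Once a gray-to-color table exists, the conversion above is a per-pixel table lookup; a minimal sketch, using a toy 256-entry table (the real table would come from the chosen depth/gray/color mappings):

```python
def colorize(gray_image, lut):
    """Apply a 256-entry gray-value -> (r, g, b) lookup table to every pixel
    of a 2-D grayscale image (row-major lists of ints in [0, 255])."""
    return [[lut[g] for g in row] for row in gray_image]

# Toy table: gray 0 (no reading) -> black, other levels -> a blue ramp.
lut = [(0, 0, 0)] + [(0, 0, g) for g in range(1, 256)]
gray_image = [[0, 128],
              [255, 64]]
rgb_image = colorize(gray_image, lut)
```

Because the table is precomputed, this step is cheap enough to run per frame on a wearable device.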
In existing image processing techniques, especially as applied in user wearable devices with binocular cameras, certain intermediate steps of the environment-image processing may already yield a grayscale image generated from depth information. This embodiment can obtain the mapping relationship between gray values and the color combination from the mapping relationship between depth values and the color combination, and then perform pseudo-color processing directly on the grayscale images produced by those intermediate steps.
For the implementation of step 304, reference may be made to the descriptions of step 104 and step 204 in Embodiments 1 and 2 above; in step 304, the pseudo-color image of the environment is superimposed on the image of the environment for display.
Because the human eye has a relatively strong ability to discriminate colors and can distinguish thousands of them, superimposing the corresponding pseudo-color image on the original environment image enables the patient to extract graphical information more effectively, recognize details of the environment image more easily, and perceive and identify objects at various distances more sensitively and vividly.
In some implementations, step 302 and step 303 of this embodiment may be:
Step 302: determine a mapping relationship between each depth value within a preset range and a color combination; determine a mapping relationship between each gray value in the corresponding range and the color combination according to the mapping relationship between each depth value within the preset range and the color combination;
Step 303: generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between each gray value in the corresponding range and the color combination.
That is, in step 302, the mapping relationship between each depth value within a preset range and the color combination is determined first. Here, the preset range falls within the limit acquisition range of the depth data and is smaller than that limit acquisition range.
The significance of the preset range of depth values is that the user can perform the subsequent pseudo-color processing according to the range he or she is interested in. Low-vision patients are usually more interested in obstacles within 2 m of them, so the depth values can be limited to the range corresponding to 1 m to 2 m. For example, if the limit acquisition range of the depth sensor is 0.5 m to 5 m, then in millimeters the limit range of the depth data it collects for the environment ahead is [500, 5000]. The user can then define a preset range that lies within [500, 5000] and is smaller than it, for example [1000, 2000].
The mapping relationship between each depth value within the preset range and the color combination is similar to the mapping relationship between each depth value over the full range and the color combination; reference may be made to FIG. 4. It can be understood that once the mapping to the color combination is determined, the smaller the preset range of depth values, the more finely objects at different distances within that range can be distinguished by different colors. When the user performs fine operations at close range, the depth range can be made even smaller, so that a low-vision user can resolve subtle distance differences between objects within a specific distance range with high precision.
After the mapping relationship between each depth value within the preset range and the color combination is determined, the gray values in the corresponding range, i.e. those to which the depth values within the preset range map, can be determined according to the mapping relationship between depth values and gray values used when generating the grayscale image, and the mapping relationship between each gray value in the corresponding range and the color combination can then be determined. For example, when the preset range of depth values is [1000, 2000], the gray values in the corresponding range are roughly between 220 and 160.
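The corresponding gray range can be computed directly from the depth-to-gray mapping. Assuming the linear near-is-white convention over [500, 5000] mm used earlier (an assumption, since the application does not fix the curve), the bounds of the preset range [1000, 2000] come out near 227 and 170, consistent with the "roughly between 220 and 160" stated above:

```python
def depth_to_gray(depth_mm, lo=500, hi=5000):
    # Linear mapping of the sensor limit range onto 256 levels, near -> white.
    return round(255 * (hi - depth_mm) / (hi - lo))

# Gray values bounding the preset depth range [1000, 2000] mm:
g_near = depth_to_gray(1000)   # -> 227
g_far = depth_to_gray(2000)    # -> 170
```

A different gray convention (near-is-black, or a nonlinear curve) would yield a different corresponding range, computed the same way.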
Likewise, each depth value within the preset range, i.e. each gray value in the corresponding range, may have a linear or nonlinear, continuous or discrete mapping relationship with two or more colors, and the colors are not limited to blue, red and green; they may also be orange, yellow, cyan, purple, pink and so on, or combinations of these colors with black or white.
Correspondingly, in step 303, a pseudo-color image of the environment is generated according to the grayscale image generated from the depth data and the mapping relationship between each gray value in the corresponding range and the color combination. That is, each gray value in the grayscale image generated from the collected depth data of the environment that falls within the corresponding range is converted, according to the mapping relationship, into the color corresponding to that gray value; once the colors corresponding to every position of the grayscale image of the current environment have been converted in this way, a pseudo-color image of the current environment is obtained from the grayscale image. Since the gray values outside the corresponding range have no correspondence with the color combination, they are usually rendered as black.
It can be seen that in this implementation, only the image content within certain gray-value ranges of the grayscale image is actually processed, as determined by the preset depth-value range. Once the mapping to the color combination is determined, the smaller the preset range of depth values, the fewer gray values fall in the corresponding range, and the more finely objects at different distances within that range can be distinguished by different colors when the grayscale image is processed. When the user performs fine operations at close range, the depth range can be made even smaller, so that a low-vision user can resolve subtle distance differences between objects within a specific distance range with high precision.
In this embodiment, a grayscale image is generated from the depth data of the environment, a pseudo-color image is obtained based on the mapping relationship between gray values and the color combination, and the pseudo-color image is superimposed on the current environment image for enhanced display. This can make use of grayscale images obtained in certain intermediate steps of existing image processing schemes and is easy to combine with existing image processing techniques; it makes full use of the residual vision of a low-vision patient, with essentially no impact on the patient's field of view, and helps the patient effectively perceive the distances of objects in the environment. Moreover, processing can focus on the portion of the grayscale image corresponding to the distance range the user cares about most: once the mapping to the color combination is determined, the smaller the preset depth-value range, the smaller the portion of the grayscale image that needs to be processed, and the more finely objects at different distances within that range can be distinguished by different colors.
Embodiment 4:
FIGS. 7a-7d are schematic diagrams of four implementation scenarios in Embodiment 4 of the present application, in which the user wears AR glasses equipped with a depth sensor. When the user looks down toward the area ahead, the solid lines drawn from the depth sensor indicate the upper and lower bounds of the field of view of the user's AR glasses, and the dashed line indicates the distance range the depth sensor can effectively detect. The depth values of objects beyond the maximum recognition distance are all 0 or the default value. The right side of each figure is a schematic diagram of the pseudo-color image superimposed for the user in the AR glasses in that scenario.
FIG. 7a is a schematic diagram of one implementation scenario of the present application. As shown in FIG. 7a, the ground ahead of the user is flat and free of obstacles. Taking the mapping relationship between depth values and the color combination in FIG. 2 as an example, at the lower edge A of the field of view the patient sees a color composed of a little green and a little red superimposed on the actual environment image; below B, near the middle of the field of view, blue is superimposed on the actual environment image; above B, near the middle of the field of view, the superimposed pseudo-color is black because the distance exceeds the depth detection range, i.e. that region of the AR glasses is left unprocessed and remains transparent.
用户可通过视野中A与B间的颜色变化很好的识别前方平地的距离变化。The user can recognize the change in the distance of the front flat ground by the color change between A and B in the field of view.
Figure 7b is a schematic diagram of another implementation scenario of the present application. As shown in Figure 7b, a raised step lies ahead of the user. Taking the mapping relationship between depth values and color combinations in Figure 2 as an example, at the lower edge A the patient sees a color synthesized from a small amount of green and a small amount of red superimposed on the actual environment image; below point B near the upper part of the field of view, blue is superimposed on the actual environment image. Between A and B, the user can distinguish the boundary lines C and D, that is, the outline of the step, because the color grades from green to blue between A-C and between D-B, while between C and D the color grades from blue to green. Above B, the superimposed pseudo color is black because the depth exceeds the detection range, that is, that region of the AR glasses is left unprocessed and remains transparent.
The user can readily identify the position and distance of the obstacle ahead from the relatively high colored boundary line B in the field of view and the color change between A and B.
Figure 7c is a schematic diagram of another implementation scenario of the present application. As shown in Figure 7c, a descending step lies ahead of the user. Taking the mapping relationship between depth values and color combinations in Figure 2 as an example, at the lower edge A the patient sees a color synthesized from a small amount of green and a small amount of red superimposed on the actual environment image; below point B near the lower part of the field of view, a color synthesized from a small amount of green and a small amount of blue is superimposed on the actual environment image. Because the edge of the step does not reach the farthest distance, the display at B has not yet reached dark blue; above B, the superimposed pseudo color is black because the depth exceeds the detection range, that is, that region of the AR glasses is left unprocessed and remains transparent.
It can be seen that in this case the superimposed pseudo-color boundary is lower, and at the boundary the color changes directly from blue-green to colorless. A trained user will know from the color mapping relationship that objects at the distance corresponding to blue are missing, and therefore that there may be a descending step at that location.
Figure 7d is a schematic diagram of another implementation scenario of the present application. As shown in Figure 7d, a descending step lies ahead of the user. In this scenario, the user has preset the distance range of interest to 2 m-4 m, that is, the range marked by shading in the figure. Taking the mapping relationship between depth values within the preset range and color combinations in Figure 4 as an example, at the lower edge A the patient sees red superimposed on the actual environment image; below point B near the lower part of the field of view, blue is superimposed on the actual environment image; between A and B, the field of view essentially presents the full "red-green-blue" color gradient. Above B, the superimposed pseudo color is black because the depth exceeds the detection range, that is, that region of the AR glasses is left unprocessed and remains transparent.
It can be seen that in this case more colors can be displayed within a smaller field of view for the user to distinguish. As the user advances, the blue display will disappear because the edge of the step ahead does not reach the farthest value of the user's distance range of interest, so the user can more clearly perceive the absence of one of the three primary colors and pay attention to the road ahead.
In the scenario shown in Figure 7c, the pseudo color superimposed in the user's field of view is mainly green. By comparison, it is easy to understand that when a smaller distance range of interest is preset, richer colors can be displayed within the same field of view, making it easier for low-vision patients to distinguish them.
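The range-of-interest behavior of Figure 7d can be sketched by restricting the depth-to-grayscale step to a preset window, so the entire color gradient is spent on the distances the user cares about. The 2 m-4 m window mirrors the example above; the function name and the linear scaling are illustrative assumptions, not taken from the source.

```python
import numpy as np

def depth_to_gray_windowed(depth_m, near_m=2.0, far_m=4.0):
    """Map only depths inside a preset range of interest to grayscale;
    everything else is left at 0 (rendered as no overlay color)."""
    gray = np.zeros(depth_m.shape, dtype=np.uint8)
    in_window = (depth_m >= near_m) & (depth_m <= far_m)
    # Spread the whole 0..255 gray range across the narrow window, so a
    # full color gradient covers just (far_m - near_m) metres of depth.
    gray[in_window] = ((depth_m[in_window] - near_m) / (far_m - near_m) * 255).astype(np.uint8)
    return gray
```

An object whose depth leaves the window maps to gray value 0 and therefore to no overlay color, producing the "missing color" cue described above.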
Embodiment 5:
Based on the same inventive concept, an embodiment of the present application further provides a display device. Since the principle by which the device solves the problem is similar to that of the display method, its implementation may refer to the implementation of the method, and repeated details are not described again. As shown in Figure 8, the display device 500 includes:
a data collection module 501, configured to collect depth data of an environment;
a color mapping module 502, configured to determine a mapping relationship between depth values and color combinations;
a pseudo-color image generation module 503, configured to generate a pseudo-color image of the environment according to the depth data and the mapping relationship; and
a display module 504, configured to superimpose the pseudo-color image on an image of the environment for display.
In some implementations, the color mapping module 502 is configured to determine a mapping relationship between depth values within a preset range and color combinations; and
the pseudo-color image generation module 503 is configured to generate a pseudo-color image of the environment according to the depth data of the environment within the preset range and the mapping relationship.
In some implementations, the device 500 further includes:
a grayscale image generation module 505, configured to generate a grayscale image according to the depth data of the environment;
the color mapping module 502 is further configured to determine a mapping relationship between grayscale values and color combinations according to the mapping relationship between the depth values and the color combinations; and
the pseudo-color image generation module 503 is configured to generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values and the color combinations.
In some implementations, the color mapping module 502 is configured to determine a mapping relationship between depth values within a preset range and color combinations, and to determine, according to that mapping relationship, a mapping relationship between grayscale values within the corresponding range and color combinations; and
the pseudo-color image generation module 503 is configured to generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values within the corresponding range and the color combinations.
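The cooperation between modules 502, 503, and 505 amounts to deriving a gray-value lookup table from the depth-to-color mapping: evaluate that mapping at the depth each gray level represents. The sketch below assumes a linear depth-to-gray scaling over a hypothetical 5 m sensing range; the function name is illustrative.

```python
import numpy as np

def gray_lut_from_depth_mapping(depth_to_color, max_depth_m=5.0):
    """Derive a 256-entry gray -> RGB lookup table from a depth -> color
    mapping, by evaluating that mapping at the depth each gray level
    encodes (the inverse of a linear depth-to-gray scaling)."""
    lut = np.zeros((256, 3), dtype=np.uint8)
    for g in range(256):
        depth = g / 255.0 * max_depth_m   # depth represented by gray level g
        lut[g] = depth_to_color(depth)    # expected to return an (r, g, b) triple
    return lut
```

For example, `gray_lut_from_depth_mapping(lambda d: (int(255 * (1 - d / 5.0)), 0, int(255 * d / 5.0)))` yields a table whose row 0 is pure red and whose row 255 is pure blue; the lambda is a stand-in for whatever depth-to-color mapping module 502 has determined.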
In some implementations, the device 500 further includes:
a user information acquisition module 506, configured to acquire current user information; and
the color mapping module 502 is configured to determine the mapping relationship between depth values and color combinations according to the user information.
Embodiment 6:
Based on the same inventive concept, an embodiment of the present application further provides an electronic device. Since its principle is similar to that of the display method, its implementation may refer to the implementation of the method, and repeated details are not described again. As shown in Figure 9, the electronic device 600 includes: a memory 601, one or more processors 602, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of any of the above methods.
Embodiment 7:
Based on the same inventive concept, an embodiment of the present application further provides a computer program product for use in combination with an electronic device. The computer program product includes a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform each step of any of the above methods.
For convenience of description, the parts of the above device are described separately as various modules divided by function. Of course, when implementing the present application, the functions of the modules or units may be implemented in one or more pieces of software or hardware.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present application.

Claims (12)

  1. A display method, characterized in that the method comprises:
    collecting depth data of an environment;
    determining a mapping relationship between depth values and color combinations;
    generating a pseudo-color image of the environment according to the depth data and the mapping relationship; and
    superimposing the pseudo-color image on an image of the environment for display.
  2. The method according to claim 1, characterized in that
    the determining a mapping relationship between depth values and color combinations comprises:
    determining a mapping relationship between depth values within a preset range and color combinations; and
    the generating a pseudo-color image of the environment according to the depth data and the mapping relationship comprises:
    generating a pseudo-color image of the environment according to the depth data of the environment within the preset range and the mapping relationship.
  3. The method according to claim 1, characterized in that,
    before the generating a pseudo-color image of the environment according to the depth data and the mapping relationship, the method further comprises:
    generating a grayscale image according to the depth data of the environment; and
    determining a mapping relationship between grayscale values and color combinations according to the mapping relationship between the depth values and the color combinations; and
    the generating a pseudo-color image of the environment according to the depth data and the mapping relationship comprises:
    generating a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values and the color combinations.
  4. The method according to claim 3, characterized in that
    the determining a mapping relationship between depth values and color combinations comprises:
    determining a mapping relationship between depth values within a preset range and color combinations;
    the determining a mapping relationship between grayscale values and color combinations according to the mapping relationship between the depth values and the color combinations comprises:
    determining, according to the mapping relationship between the depth values within the preset range and the color combinations, a mapping relationship between grayscale values within the corresponding range and color combinations; and
    the generating a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values and the color combinations comprises:
    generating a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values within the corresponding range and the color combinations.
  5. The method according to any one of claims 1 to 4, characterized in that, before the determining a mapping relationship between depth values and color combinations, the method further comprises:
    acquiring current user information; and
    the determining a mapping relationship between depth values and color combinations comprises:
    determining the mapping relationship between depth values and color combinations according to the user information.
  6. A display device, characterized in that the device comprises:
    a data collection module, configured to collect depth data of an environment;
    a color mapping module, configured to determine a mapping relationship between depth values and color combinations;
    a pseudo-color image generation module, configured to generate a pseudo-color image of the environment according to the depth data and the mapping relationship; and
    a display module, configured to superimpose the pseudo-color image on an image of the environment for display.
  7. The device according to claim 6, characterized in that
    the color mapping module is configured to determine a mapping relationship between depth values within a preset range and color combinations; and
    the pseudo-color image generation module is configured to generate a pseudo-color image of the environment according to the depth data of the environment within the preset range and the mapping relationship.
  8. The device according to claim 6, characterized in that the device further comprises:
    a grayscale image generation module, configured to generate a grayscale image according to the depth data of the environment;
    the color mapping module is further configured to determine a mapping relationship between grayscale values and color combinations according to the mapping relationship between the depth values and the color combinations; and
    the pseudo-color image generation module is configured to generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values and the color combinations.
  9. The device according to claim 8, characterized in that
    the color mapping module is configured to determine a mapping relationship between depth values within a preset range and color combinations, and
    to determine, according to the mapping relationship between the depth values within the preset range and the color combinations, a mapping relationship between grayscale values within the corresponding range and color combinations; and
    the pseudo-color image generation module is configured to generate a pseudo-color image of the environment according to the grayscale image generated from the depth data and the mapping relationship between the grayscale values within the corresponding range and the color combinations.
  10. The device according to any one of claims 6 to 9, characterized in that the device further comprises:
    a user information acquisition module, configured to acquire current user information; and
    the color mapping module is configured to determine the mapping relationship between depth values and color combinations according to the user information.
  11. An electronic device, characterized in that the electronic device comprises:
    a memory; one or more processors; and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules comprising instructions for performing each step of the method according to any one of claims 1 to 5.
  12. A computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to perform each step of the method according to any one of claims 1 to 5.
PCT/CN2017/117837 2017-12-21 2017-12-21 Display method and device, electronic device and computer program product WO2019119372A1 (en)

Publications (1)

Publication Number Publication Date
WO2019119372A1 2019-06-27


Also Published As

Publication number Publication date
CN108140362A (en) 2018-06-08
CN108140362B (en) 2021-09-17
