WO2022166819A1 - Image processing method and apparatus, and electronic device - Google Patents

Image processing method and apparatus, and electronic device

Info

Publication number
WO2022166819A1
Authority
WO
WIPO (PCT)
Prior art keywords
value
image
pixel
grayscale
pixel point
Prior art date
Application number
PCT/CN2022/074651
Other languages
English (en)
French (fr)
Inventor
雷钊
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority to EP22749098.4A (EP4280601A1)
Publication of WO2022166819A1
Priority to US18/229,660 (US20230377503A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/2007 Display of intermediate tones
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G 3/02 - G09G 3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/133 Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/15 Processing image signals for colour aspects of image signals
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance
    • G09G 2320/0233 Improving the luminance or brightness uniformity across the screen
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance
    • G09G 2320/0242 Compensation of deficiencies in the appearance of colours
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/02 Improving the quality of display appearance
    • G09G 2320/0271 Adjustment of the gradation levels within the range of the gradation scale, e.g. by redistribution or clipping
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/16 Determination of a pixel data signal depending on the signal applied in the previous frame
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/10 Intensity circuits

Definitions

  • the present application belongs to the field of display technology, and in particular relates to an image processing method, apparatus and electronic device.
  • Augmented Reality (AR) technology is a technology that calculates the position and angle of camera imagery in real time and adds corresponding images. In recent years, AR technology has developed rapidly and has been widely applied in fields such as consumer products, medical care and logistics.
  • AR glasses are a portable device, which is recognized by the industry as one of the most suitable product carriers for AR technology.
  • AR glasses are equipped with an optical engine and waveguide sheets, and the light waves emitted by the optical engine must be diffracted in multiple waveguide sheets to project an image.
  • however, light waves diffracting in different waveguide sheets attenuate to different degrees, which degrades the display of the output image of the waveguide sheets and thus affects the imaging effect.
  • the purpose of the embodiments of the present application is to provide an image processing method, device and electronic device, which can solve the technical problem that the attenuation of light waves in different waveguide sheets is inconsistent, thereby affecting the imaging effect.
  • an embodiment of the present application provides an image processing method, including:
  • acquiring a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image, where the first image is an image generated by augmented reality glasses and the second image is an output image of the augmented reality glasses; and determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel, where the compensation parameter is used to compensate the image generated by the augmented reality glasses.
  • an image processing apparatus including:
  • the acquisition module is used to acquire the first gray value corresponding to each pixel in the first image, and the second gray value corresponding to each pixel in the second image, where the first image is an image generated by the augmented reality glasses, and the second image is an output image of the augmented reality glasses;
  • a determination module configured to determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel point, where the compensation parameter is used to compensate the image generated by the augmented reality glasses.
  • embodiments of the present application provide an electronic device, where the electronic device includes a processor, a memory, and a program or instruction stored on the memory and executable on the processor, and the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
  • an embodiment of the present application provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method according to the first aspect are implemented.
  • an embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the method described in the first aspect.
  • an embodiment of the present application provides a computer program product, where the computer program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the steps of the method according to the first aspect.
  • an embodiment of the present application provides an electronic device configured to execute the steps of the method described in the first aspect.
  • the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Then, based on the first grayscale value and the second grayscale value, a compensation parameter is determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
  • FIG. 1 is a schematic diagram of an application scenario of an image processing method provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 3 is a structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 4 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an application scenario of the image processing method provided by the embodiment of the present application.
  • the image processing apparatus shown in FIG. 1 may be AR glasses, and it should be understood that, in some embodiments, the image processing apparatus may also be a device such as a projector.
  • the AR glasses include an optical engine and a waveguide sheet set.
  • the waveguide sheet set includes at least three waveguide sheets.
  • the optical engine outputs an image to the waveguide sheet set. It should be noted that the optical engine outputs the image in the form of light waves, the light waves corresponding to each color have different wavelengths, and the light waves corresponding to different colors propagate in different waveguide sheets.
  • for example, light waves corresponding to red propagate in the first waveguide sheet, light waves corresponding to green propagate in the second waveguide sheet, and light waves corresponding to blue propagate in the third waveguide sheet. Since the production process of each waveguide sheet is not uniform, light waves transmitted in different waveguide sheets attenuate differently, which makes the color depth of the output image of the waveguide sheets inconsistent and thus affects the imaging effect.
  • the gray value of the output image of the AR glasses is compared with the gray value of the image generated by the AR glasses so as to adjust the light-emission parameters of the AR glasses, thereby eliminating the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets and improving the imaging effect of the output image of the waveguide sheets.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present application.
  • the image processing method provided by the embodiment of the present application includes:
  • S101 Obtain a first grayscale value corresponding to each pixel in the first image and a second grayscale value corresponding to each pixel in the second image.
  • the first image is the image generated by the AR glasses, that is, the image generated and output by the optical engine in FIG. 1, which can also be understood as the input image of the waveguide sheets;
  • the second image is the output image of the AR glasses.
  • the second image can also be understood as the output image of the waveguide sheets. It should be understood that the first image and the second image represent the same content.
  • it should be understood that, for one pixel in the image, the light waves emitted at the pixel include at least the three basic colors red, green and blue. Since light waves of different colors propagate in different waveguide sheets, the light waves emitted by the optical engine at one pixel may attenuate to different degrees in different waveguide sheets.
  • the grayscale values of all pixels in the first image, referred to as first grayscale values, can be obtained through an image sensor or by reading the output-image data stored in the optical engine;
  • the output image is captured by an image sensor or a camera to obtain the grayscale values of all pixels in the second image, referred to as second grayscale values.
  • a first table and a second table may be created, where the first table includes the first grayscale value of each pixel and the second table includes the second grayscale value of each pixel; in this way, the change of each pixel's grayscale value can be displayed intuitively.
  • the grayscale value reflects the depth of the color, and the grayscale value ranges from 0 to 255.
  • when the grayscale value of an image is 0, the image is a black image; when the grayscale value of an image is 255, the image is a white image.
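  • As a rough illustration of step S101 (not part of the original embodiments), the sketch below builds the two per-pixel grayscale tables from a source frame and a capture of the waveguide output. The synthetic frames, the NumPy arrays, and the BT.601 conversion weights are all assumptions for illustration; the patent's own weighted-average grayscale rule is shown in a later example.

```python
import numpy as np

def to_gray(frame_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel grayscale map (0-255) from an H x W x 3 RGB frame.
    The ITU-R BT.601 weights here are placeholders, not the patent's rule."""
    weights = np.array([0.299, 0.587, 0.114])
    return frame_rgb.astype(np.float32) @ weights

# first image: frame generated by the optical engine (waveguide input);
# second image: camera capture of the waveguide output (synthetic here).
rng = np.random.default_rng(0)
first_image = rng.integers(0, 256, size=(480, 640, 3))
second_image = (first_image * 0.85).astype(np.uint8)  # toy 15% attenuation

first_gray = to_gray(first_image)    # the "first table" of S101
second_gray = to_gray(second_image)  # the "second table" of S101
```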
  • S102 Determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel.
  • the compensation parameter corresponding to the AR glasses can be determined.
  • the image generated by the AR glasses is compensated by compensation parameters.
  • the specific compensation method may be to adjust the grayscale or brightness of the image generated by the AR glasses based on the compensation parameters to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheet.
  • the specific compensation method may be based on the compensation parameters to adjust the grayscale of the light wave emitted at each pixel of the AR glasses, so as to achieve the purpose of eliminating the attenuation of the light signal.
  • the above compensation method may also be based on compensation parameters to adjust the brightness of the light wave emitted at each pixel of the AR glasses, so as to achieve the purpose of eliminating the attenuation of the light signal.
  • An application scenario is as follows: the first grayscale value corresponding to each pixel of the image generated by the AR glasses is obtained, and a camera captures the output image of the AR glasses to obtain the second grayscale value corresponding to each pixel in the output image. Based on the first and second grayscale values corresponding to each pixel, the compensation parameter corresponding to the pixel is determined and input to the optical engine; when the optical engine generates an image again, it adjusts the grayscale or brightness of the generated image according to the compensation parameter.
  • the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Then, based on the first grayscale value and the second grayscale value, a compensation parameter is determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
  • the determining the compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel includes:
  • determining, according to the first and second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel, where M is a positive integer; and compensating the M target pixels according to the M compensation values.
  • if the light wave emitted at a pixel suffers optical signal attenuation beyond a preset range while propagating in the waveguide sheets, the pixel can be defined as a target pixel, and the compensation value corresponding to the target pixel is determined based on the attenuation value corresponding to it.
  • the attenuation value can be understood as the amount by which the target pixel's grayscale value is attenuated, and the compensation parameter includes the compensation values corresponding to all target pixels.
  • based on the first and second grayscale values corresponding to all pixels in the input image of the waveguide sheets, the M target pixels in the input image and the compensation value of each target pixel can be determined; the compensation value is the grayscale value lost by the target pixel during propagation. For the specific technical solutions for determining the target pixels and the compensation value corresponding to each target pixel, please refer to the following embodiments.
  • one possible case is that the target pixels are all the pixels of the input image of the waveguide sheets, that is, the optical signal attenuation of every pixel of the input image exceeds the preset range.
  • the value of M is the number of all pixels in the input image of the waveguide sheet.
  • the target pixel is a partial pixel of the input image of the waveguide sheet. In this case, the value of M is less than the number of all pixels in the input image of the waveguide sheet.
  • M is a positive integer, and for specific technical solutions, please refer to the subsequent embodiments.
  • the gray value of each target pixel point may be adjusted based on the compensation value.
  • the grayscale value of the target pixel can be adjusted to the target grayscale value corresponding to the target pixel to compensate for the attenuation of the optical signal during the propagation of the waveguide sheet,
  • the target gray value is the sum of the compensation value of the target pixel and the first gray value of the target pixel.
  • for example, if the first grayscale value of a target pixel is 200 and its compensation value is 40, the grayscale value of the target pixel can be adjusted to 240 to compensate for the optical signal attenuation.
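  • A minimal sketch of this adjustment, assuming the compensation values are stored per target pixel and gray values stay in the 0-255 range described above (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def compensate(gray: np.ndarray, compensation: dict) -> np.ndarray:
    """Raise each of the M target pixels by its compensation value:
    target gray value = first gray value + compensation value."""
    out = gray.astype(np.float32)  # astype returns a fresh array
    for (row, col), value in compensation.items():
        out[row, col] += value
    return np.clip(out, 0, 255)  # keep results inside the grayscale range

# Example from the text: first grayscale value 200, compensation value 40.
gray = np.full((2, 2), 200.0)
print(compensate(gray, {(0, 0): 40.0})[0, 0])  # 240.0
```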
  • the determining, according to the first and second grayscale values corresponding to all pixels in the first image, the M target pixels and the compensation value corresponding to each target pixel includes:
  • for any pixel, determining a third grayscale value corresponding to the pixel, where the third grayscale value is determined by the N light waves corresponding to the pixel and the attenuation value corresponding to each light wave, the wavelength corresponding to each light wave is different, and N is a positive integer; determining the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as the grayscale difference; and, in the case that the grayscale difference is greater than or equal to the preset threshold, determining the pixel as the target pixel and determining the grayscale difference as the compensation value.
  • the optical engine can emit at least one light wave at any pixel of the image generated by the AR glasses.
  • it should be noted that, based on the three-primary-color principle of image generation, the light waves corresponding to a pixel include at least three light waves: red, green and blue. Each light wave has a different wavelength, and different light waves diffract in different waveguide sheets, so light waves of different wavelengths suffer different degrees of optical signal attenuation.
  • the third grayscale value corresponding to the pixel point can be determined based on the N light waves corresponding to the pixel point and the attenuation value corresponding to each light wave.
  • the attenuation value is used to characterize the attenuation degree of the corresponding light wave based on the transmission efficiency.
  • the third grayscale value of a pixel is used to characterize the grayscale value corresponding to the pixel in the case that the optical signal is not attenuated by the influence of the waveguide sheets.
  • the absolute value of the difference between the second grayscale value corresponding to the pixel and the corresponding third grayscale value can be determined as the grayscale difference value.
  • the grayscale difference value of a pixel point can represent the light wave emitted at the pixel point and the attenuation of the light signal generated during the diffraction process of the waveguide sheet.
  • if the grayscale difference is greater than or equal to the preset threshold, it means that the optical signal attenuated substantially while the light wave emitted at the pixel diffracted in the waveguide sheets; the pixel is determined as the target pixel, and the grayscale difference is determined as the compensation value corresponding to the target pixel, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets.
  • the grayscale difference is less than the preset threshold, it means that when the light wave emitted at the pixel is diffracted in the waveguide sheet, the attenuation of the light signal is within a reasonable range, and the pixel does not need to be determined as the target pixel.
  • for example, if the third grayscale value of a pixel is 240 and the second grayscale value is 200,
  • the absolute value of the difference between the third grayscale value and the second grayscale value is 40. If the preset threshold is 10, the pixel is determined to be a target pixel; if the preset threshold is 50, the pixel is not a target pixel.
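  • The selection rule above can be sketched as follows; the threshold values and the 240/200 example are taken from the text, while the array-based formulation is an assumption:

```python
import numpy as np

def find_target_pixels(second_gray: np.ndarray, third_gray: np.ndarray,
                       threshold: float) -> dict:
    """Return {(row, col): compensation value} for the M target pixels,
    i.e. pixels whose grayscale difference reaches the preset threshold."""
    diff = np.abs(second_gray - third_gray)     # grayscale difference
    rows, cols = np.nonzero(diff >= threshold)  # pixels attenuated too much
    return {(int(r), int(c)): float(diff[r, c]) for r, c in zip(rows, cols)}

second = np.array([[200.0]])  # second grayscale value (measured output)
third = np.array([[240.0]])   # third grayscale value (expected value)
print(find_target_pixels(second, third, threshold=10.0))  # {(0, 0): 40.0}
print(find_target_pixels(second, third, threshold=50.0))  # {} -> no target
```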
  • the N light waves include a first light wave, a second light wave and a third light wave
  • the determining the third grayscale value corresponding to the pixel point includes:
  • the product of the grayscale value of the first light wave and the corresponding first attenuation value is determined as the first sub-grayscale value; the product of the grayscale value of the second light wave and the corresponding second attenuation value is determined as the second sub-grayscale value; the product of the grayscale value of the third light wave and the corresponding third attenuation value is determined as the third sub-grayscale value; and the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value is determined as the third grayscale value.
  • the light waves corresponding to the pixels may be red light waves, green light waves and blue light waves.
  • the red light wave can be called the first light wave
  • the green light wave is called the second light wave
  • the blue light wave is called the third light wave.
  • the grayscale values of the first light wave, the second light wave and the third light wave are acquired, and the grayscale values of the three light waves are respectively multiplied by the attenuation value corresponding to each light wave to obtain the first sub-grayscale value corresponding to the first light wave, the second sub-grayscale value corresponding to the second light wave, and the third sub-grayscale value corresponding to the third light wave.
  • the attenuation value corresponding to each light wave may be a preset empirical value
  • the first sub-gray value represents the gray value of the first light wave after attenuation
  • the second sub-gray value represents the gray value of the second light wave after attenuation
  • the third sub-grayscale value represents the attenuated grayscale value of the third light wave.
  • the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value is determined as the third grayscale value corresponding to the pixel.
  • for example, if the grayscale values of the first light wave, the second light wave and the third light wave are all 100, the attenuation value corresponding to the first and second light waves is 0.8, and the attenuation value corresponding to the third light wave is 0.7, then the first and second sub-grayscale values are both 80, the third sub-grayscale value is 70, and the third grayscale value is 230.
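  • In code form the computation reads as below (a sketch; the attenuation factors are the preset empirical values mentioned above):

```python
def third_grayscale(gray_rgb: tuple, attenuation: tuple) -> float:
    """Sum of the three attenuated sub-grayscale values for one pixel."""
    # first / second / third sub-grayscale values for the three light waves
    sub_values = [g * a for g, a in zip(gray_rgb, attenuation)]
    return sum(sub_values)

# Example from the text: gray values all 100, attenuation 0.8 / 0.8 / 0.7.
print(third_grayscale((100, 100, 100), (0.8, 0.8, 0.7)))  # 80 + 80 + 70 = 230
```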
  • it should be understood that, in some embodiments, a pixel may correspond to one or two types of light waves; for example, the light wave emitted at a pixel may be a red light wave. In that case, the above technical solution can also be applied to obtain the third grayscale value corresponding to the pixel, which is not elaborated here in this embodiment.
  • optionally, after the target pixels are compensated, the method further includes: in the case that the augmented reality glasses generate a third image, increasing the grayscale value corresponding to the third image.
  • in this embodiment, after the compensation parameter is determined, if the augmented reality glasses generate a third image, where the third image is an image different from the first image, the grayscale value corresponding to the third image is increased.
  • An optional implementation manner may be to increase the grayscale value at the target pixel point in the third image, where the increased grayscale value is a compensation value.
  • the acquiring the first grayscale value corresponding to each pixel in the first image includes:
  • for any pixel of the first image, obtaining the first pixel value, the second pixel value and the third pixel value of the pixel; calculating the first product of the first pixel value and the first weight value, the second product of the second pixel value and the second weight value, and the third product of the third pixel value and the third weight value; and determining the average of the first product, the second product and the third product as the first grayscale value of the pixel.
  • an image sensor may be used to directly obtain the grayscale value of each pixel point.
  • the grayscale value of the pixel point may be determined based on the first pixel value, the second pixel value and the third pixel value of the pixel point.
  • the pixel value of the pixel is the RGB value of the pixel, the first pixel value can be understood as the R value; the second pixel value can be understood as the G value; the third pixel value can be understood as the B value.
  • if the first pixel value, the second pixel value and the third pixel value of an image are all 255, the image is a white image; if the first, second and third pixel values are all 0, the image is a black image; if the first, second and third pixel values are equal but not 255 or 0, the image is a grayscale image.
  • An optional implementation is to obtain the first product of the first pixel value and the first weight value, the second product of the second pixel value and the second weight value, and the third product of the third pixel value and the third weight value, and to determine the average of the first product, the second product and the third product as the first grayscale value of the pixel.
  • the above-mentioned first weight value, second weight value and third weight value are all empirical values that can be set as desired, which is not specifically limited in this embodiment.
  • the first pixel value of a pixel is 230, the first weight value is 0.8; the second pixel value is 240, the second weight value is 0.7; the third pixel value is 200, and the third weight value is 0.6.
  • then the first product is 184, the second product is 168, the third product is 120, and the first grayscale value of the pixel is approximately 157.
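  • A direct transcription of this rule follows (a sketch; with the example inputs, the average of the three products comes to about 157):

```python
def first_grayscale(r: float, g: float, b: float,
                    w1: float, w2: float, w3: float) -> float:
    """First grayscale value = average of the three weighted products."""
    products = (r * w1, g * w2, b * w3)  # first, second and third products
    return sum(products) / 3

# Example from the text: pixel values 230/240/200, weights 0.8/0.7/0.6.
print(first_grayscale(230, 240, 200, 0.8, 0.7, 0.6))  # (184+168+120)/3 = 157.3
```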
  • as shown in FIG. 3, the image processing apparatus 200 includes:
  • an acquisition module 201 configured to acquire a first grayscale value corresponding to each pixel in the first image, and a second grayscale value corresponding to each pixel in the second image;
  • the determining module 202 is configured to determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel point.
  • the determining module 202 further includes:
  • a determining unit, configured to determine, according to the first grayscale value and the second grayscale value corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel, where M is a positive integer;
  • a compensation unit configured to compensate the M target pixel points according to the M compensation values.
  • the determining unit is also used for:
  • the third grayscale value corresponding to the pixel point is determined by the N light waves corresponding to the pixel point and the attenuation value corresponding to each of the light waves, each The wavelengths corresponding to the light waves are different, and N is a positive integer;
  • the pixel point is determined as a target pixel point, and the grayscale difference value is determined as a compensation value.
  • the determining unit is also used for:
  • the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value is determined as the third grayscale value.
  • the image processing apparatus 200 further includes:
  • An adding module is configured to increase the gray value corresponding to the third image when the augmented reality glasses generate the third image.
  • the obtaining module 201 is further configured to:
  • the average value of the first product, the second product and the third product is determined as the first grayscale value of the pixel point.
  • the image processing apparatus in this embodiment of the present application may be AR glasses or other projection devices, or may be a mobile terminal, or may be a component, an integrated circuit, or a chip in the terminal.
  • the apparatus may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • non-mobile electronic devices may be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the image processing apparatus provided in this embodiment of the present application can implement each process implemented by the image processing method in the method embodiment of FIG. 2 , and to avoid repetition, details are not described here.
  • the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Then, based on the first grayscale value and the second grayscale value, a compensation parameter is determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
  • optionally, an embodiment of the present application further provides an electronic device, including a processor 310, a memory 309, and a program or instruction stored in the memory 309 and executable on the processor 310, where the program or instruction, when executed by the processor 310, implements each process of the above image processing method embodiments and achieves the same technical effect; to avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the aforementioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 300 includes but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, and other components.
  • the electronic device 300 may also include a power source (such as a battery) for supplying power to the various components, and the power source may be logically connected to the processor 310 through a power management system, so that charging, discharging, power consumption management and other functions are managed through the power management system.
  • the structure of the electronic device shown in FIG. 4 does not constitute a limitation to the electronic device.
  • the electronic device may include more or fewer components than shown, or combine some components, or use a different arrangement of components, which will not be repeated here.
  • the processor 310 is configured to obtain the first gray value corresponding to each pixel in the first image, and the second gray value corresponding to each pixel in the second image;
  • a compensation parameter is determined according to the first grayscale value and the second grayscale value corresponding to each pixel point.
  • the processor 310 is further configured to determine the M target pixel points in the first image according to the first grayscale value and the second grayscale value corresponding to all the pixels in the first image. a compensation value corresponding to each of the target pixels;
  • the M target pixel points are compensated according to the M compensation values.
  • the processor 310 is further configured to, for any pixel point, determine the third grayscale value corresponding to the pixel point;
  • the pixel point is determined as a target pixel point, and the grayscale difference value is determined as a compensation value.
  • the processor 310 is further configured to determine the product of the grayscale value of the first light wave and the corresponding first attenuation value as the first sub-grayscale value;
  • the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value is determined as the third grayscale value.
  • the processor 310 is further configured to increase the grayscale value corresponding to the third image when the augmented reality glasses generate a third image.
  • the processor 310 is further configured to obtain, for any pixel of the first image, the first pixel value, the second pixel value and the third pixel value of the pixel;
  • the average value of the first product, the second product and the third product is determined as the first grayscale value of the pixel point.
  • the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Then, based on the first grayscale value and the second grayscale value, a compensation parameter is determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
  • Embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, each process of the above image processing method embodiments is implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above image processing method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
  • An embodiment of the present application provides a computer program product, where the computer program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the various processes of the foregoing method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
  • An embodiment of the present application provides an electronic device, which is configured to perform each process of each embodiment of the above method, and can achieve the same technical effect. To avoid repetition, details are not repeated here.
  • the terms "comprising", "including" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in a process, method, article or apparatus that includes the element.
  • the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to some examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

This application provides an image processing method and apparatus, and an electronic device. The image processing method includes: obtaining a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image (S101); and determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel (S102), where the compensation parameter is used to compensate an image generated by augmented reality glasses. The first image is an image generated by the AR glasses, and the second image is an output image of the AR glasses.

Description

Image processing method, apparatus and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202110156335.2, filed in China on February 4, 2021, the entire contents of which are incorporated herein by reference.
Technical field
This application belongs to the field of display technology, and in particular relates to an image processing method and apparatus, and an electronic device.
Background
Augmented Reality (AR) technology is a technology that calculates the position and angle of camera imagery in real time and adds corresponding images. In recent years, AR technology has developed rapidly and has been widely applied in fields such as consumer products, medical care and logistics. Among them, AR glasses are a portable device widely recognized in the industry as one of the most suitable product carriers for AR technology.
At present, AR glasses are equipped with an optical engine and waveguide sheets, and the light waves emitted by the optical engine must be diffracted in multiple waveguide sheets to project an image. However, light waves diffracting in different waveguide sheets attenuate to different degrees, which degrades the display of the output image of the waveguide sheets and thus affects the imaging effect.
Summary
The purpose of the embodiments of this application is to provide an image processing method and apparatus, and an electronic device, which can solve the technical problem that light waves attenuate inconsistently in different waveguide sheets and thereby affect the imaging effect.
To solve the above technical problem, this application is implemented as follows:
In a first aspect, an embodiment of this application provides an image processing method, including:
obtaining a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image, where the first image is an image generated by the augmented reality glasses, and the second image is an output image of the augmented reality glasses;
determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel, where the compensation parameter is used to compensate the image generated by the augmented reality glasses.
In a second aspect, an embodiment of this application provides an image processing apparatus, including:
an acquisition module, configured to obtain a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image, where the first image is an image generated by augmented reality glasses, and the second image is an output image of the augmented reality glasses;
a determination module, configured to determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel, where the compensation parameter is used to compensate the image generated by the augmented reality glasses.
In a third aspect, an embodiment of this application provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium, storing a program or instruction, where the program or instruction, when executed by a processor, implements the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip, including a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method according to the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product, stored in a non-volatile storage medium, where the computer program product is executed by at least one processor to implement the steps of the method according to the first aspect.
In a seventh aspect, an embodiment of this application provides an electronic device, configured to execute the steps of the method according to the first aspect.
In the embodiments of this application, the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Based on the first grayscale value and the second grayscale value, a compensation parameter is then determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
Brief description of the drawings
FIG. 1 is a schematic diagram of an application scenario of the image processing method provided by an embodiment of this application;
FIG. 2 is a flowchart of the image processing method provided by an embodiment of this application;
FIG. 3 is a structural diagram of the image processing apparatus provided by an embodiment of this application;
FIG. 4 is a structural diagram of the electronic device provided by an embodiment of this application.
Detailed description
The technical solutions in the embodiments of this application will be described below clearly and completely with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second" and the like in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiments of this application is described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Please refer to FIG. 1, which is a schematic diagram of an application scenario of the image processing method provided by an embodiment of this application.
The image processing apparatus shown in FIG. 1 may be AR glasses; it should be understood that in some embodiments the image processing apparatus may also be a device such as a projector.
The AR glasses include an optical engine and a waveguide sheet set, the waveguide sheet set includes at least three waveguide sheets, and the optical engine outputs an image to the waveguide sheet set. It should be noted that the optical engine outputs the image in the form of light waves, the light waves corresponding to each color have different wavelengths, and the light waves corresponding to different colors propagate in different waveguide sheets.
For example, light waves corresponding to red propagate in the first waveguide sheet, light waves corresponding to green propagate in the second waveguide sheet, and light waves corresponding to blue propagate in the third waveguide sheet. Since the production process of each waveguide sheet is not uniform, light waves transmitted in different waveguide sheets attenuate differently, making the color depth of the output image of the waveguide sheets inconsistent and thus affecting the imaging effect.
Based on the above potential technical problem, this application proposes the following technical concept:
The grayscale values of the output image of the AR glasses are compared with the grayscale values of the image generated by the AR glasses so as to adjust the light-emission parameters of the AR glasses, thereby eliminating the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets and improving the imaging effect of the output image of the waveguide sheets.
Please refer to FIG. 2, which is a flowchart of the image processing method provided by an embodiment of this application. The image processing method provided by the embodiment of this application includes:
S101: Obtain a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image.
In this step, the first image is the image generated by the AR glasses, i.e., the image generated and output by the optical engine in FIG. 1, which can also be understood as the input image of the waveguide sheets; the second image is the output image of the AR glasses, which can also be understood as the output image of the waveguide sheets. It should be understood that the first image and the second image represent the same content.
It should be understood that, for one pixel in the image, the light waves emitted at the pixel include at least the three basic colors red, green and blue. Since light waves of different colors propagate in different waveguide sheets, the light waves emitted by the optical engine at one pixel may attenuate to different degrees in different waveguide sheets.
In this step, the grayscale values of all pixels in the first image, referred to as first grayscale values, can be obtained through an image sensor or by reading the output-image data stored in the optical engine; the output image can be captured by an image sensor or a camera to obtain the grayscale values of all pixels in the second image, referred to as second grayscale values.
In this step, a first table and a second table may be created, where the first table includes the first grayscale value of each pixel and the second table includes the second grayscale value of each pixel; in this way, the change of each pixel's grayscale value can be displayed intuitively.
It should be understood that the grayscale value reflects the depth of the color and ranges from 0 to 255. When the grayscale value of an image is 0, the image is a black image; when the grayscale value of an image is 255, the image is a white image.
S102: Determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel.
In this step, after the first and second grayscale values corresponding to each pixel are determined, the compensation parameter corresponding to the AR glasses can be determined; for the specific technical solution, please refer to the subsequent embodiments. The image generated by the AR glasses is compensated by the compensation parameter; specifically, the grayscale or brightness of the image generated by the AR glasses may be adjusted based on the compensation parameter to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets.
A specific compensation method may be to adjust, based on the compensation parameter, the grayscale of the light wave emitted at each pixel of the AR glasses, so as to eliminate the optical signal attenuation.
The above compensation method may also be to adjust, based on the compensation parameter, the brightness of the light wave emitted at each pixel of the AR glasses, so as to eliminate the optical signal attenuation.
One application scenario is as follows: the first grayscale value corresponding to each pixel of the image generated by the AR glasses is obtained, and a camera captures the output image of the AR glasses to obtain the second grayscale value corresponding to each pixel in the output image. Based on the first and second grayscale values corresponding to each pixel, the compensation parameter corresponding to the pixel is determined and input to the optical engine; when the optical engine generates an image again, it adjusts the grayscale or brightness of the generated image according to the compensation parameter.
In the embodiments of this application, the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Based on the first grayscale value and the second grayscale value, a compensation parameter is then determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
Optionally, the determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel includes:
determining, according to the first and second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel, where M is a positive integer; and compensating the M target pixels according to the M compensation values.
In this embodiment, if the light wave emitted at a pixel suffers optical signal attenuation beyond a preset range while propagating in the waveguide sheets, the pixel can be defined as a target pixel, and the compensation value corresponding to the target pixel is determined based on the attenuation value corresponding to it. The attenuation value can be understood as the amount by which the target pixel's grayscale value attenuates, and the compensation parameter includes the compensation values corresponding to all target pixels.
Based on the first and second grayscale values corresponding to all pixels in the input image of the waveguide sheets, the M target pixels in the input image and the compensation value of each target pixel can be determined; the compensation value is the grayscale value lost by the target pixel during propagation. For the specific technical solutions for determining the target pixels and their compensation values, please refer to the subsequent embodiments.
It should be understood that one possible case is that the target pixels are all pixels of the input image of the waveguide sheets, i.e., the optical signal attenuation of every pixel exceeds the preset range; in this case, M equals the total number of pixels in the input image. Another possible case is that the target pixels are only some of the pixels of the input image; in this case, M is less than the total number of pixels in the input image.
M is a positive integer; for the specific technical solution, please refer to the subsequent embodiments.
In this embodiment, after the compensation value of each target pixel is obtained, the grayscale value of each target pixel can be adjusted based on the compensation value.
In an optional implementation, for any target pixel, the grayscale value of the target pixel can be adjusted to the target grayscale value corresponding to it, so as to compensate for the attenuation of the optical signal during propagation in the waveguide sheets, where the target grayscale value is the sum of the target pixel's compensation value and its first grayscale value.
For example, if the first grayscale value of a target pixel is 200 and the compensation value corresponding to the target pixel is 40, the grayscale value of the target pixel can be adjusted to 240 to compensate for the optical signal attenuation.
Optionally, the determining, according to the first and second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel includes:
for any pixel, determining a third grayscale value corresponding to the pixel, where the third grayscale value is determined by the N light waves corresponding to the pixel and the attenuation value corresponding to each light wave, the wavelength corresponding to each light wave is different, and N is a positive integer; determining the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as the grayscale difference; and, in a case that the grayscale difference is greater than or equal to a preset threshold, determining the pixel as a target pixel and determining the grayscale difference as the compensation value.
In this embodiment, the optical engine can emit at least one light wave at any pixel of the image generated by the AR glasses. It should be noted that, based on the three-primary-color principle of image generation, the light waves corresponding to a pixel include at least three light waves, red, green and blue; each light wave has a different wavelength, and different light waves diffract in different waveguide sheets, so light waves of different wavelengths suffer different degrees of optical signal attenuation.
The third grayscale value corresponding to a pixel can be determined based on the N light waves corresponding to the pixel and the attenuation value corresponding to each light wave. The attenuation value is used to characterize the degree of attenuation of the corresponding light wave based on the transmission efficiency, and the third grayscale value of a pixel is used to characterize the grayscale value corresponding to the pixel in the case that the optical signal is not attenuated by the influence of the waveguide sheets.
For the specific technical solution for determining the third grayscale value, please refer to the subsequent embodiments.
For any pixel of the input image of the waveguide sheets, the absolute value of the difference between the second grayscale value corresponding to the pixel and the corresponding third grayscale value can be determined as the grayscale difference. It should be understood that the grayscale difference of a pixel can characterize the optical signal attenuation of the light wave emitted at that pixel during diffraction in the waveguide sheets.
If the grayscale difference is greater than or equal to the preset threshold, it means that the optical signal attenuated substantially while the light wave emitted at the pixel diffracted in the waveguide sheets; the pixel is determined as a target pixel, and the grayscale difference is determined as the compensation value corresponding to the target pixel, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets.
If the grayscale difference is less than the preset threshold, it means that the attenuation of the optical signal is within a reasonable range while the light wave emitted at the pixel diffracts in the waveguide sheets, and the pixel need not be determined as a target pixel.
For example, if the third grayscale value of a pixel is 240 and its second grayscale value is 200, the absolute value of the difference between them is 40. If the preset threshold is 10, the pixel is determined to be a target pixel; if the preset threshold is 50, the pixel is not a target pixel.
Optionally, the N light waves include a first light wave, a second light wave and a third light wave, and the determining a third grayscale value corresponding to the pixel includes:
determining the product of the grayscale value of the first light wave and the corresponding first attenuation value as a first sub-grayscale value; determining the product of the grayscale value of the second light wave and the corresponding second attenuation value as a second sub-grayscale value; determining the product of the grayscale value of the third light wave and the corresponding third attenuation value as a third sub-grayscale value; and determining the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value as the third grayscale value.
As described above, the light waves corresponding to a pixel may be the three light waves red, green and blue; the red light wave may be called the first light wave, the green light wave the second light wave, and the blue light wave the third light wave. Based on the above, each light wave propagates in a different waveguide sheet, causing the optical signal to attenuate to a different degree.
In this embodiment, in the case that a pixel corresponds to three light waves, the grayscale values of the first, second and third light waves are obtained and respectively multiplied by the attenuation value corresponding to each light wave, yielding the first sub-grayscale value corresponding to the first light wave, the second sub-grayscale value corresponding to the second light wave, and the third sub-grayscale value corresponding to the third light wave.
The attenuation value corresponding to each light wave may be a preset empirical value; the first sub-grayscale value represents the grayscale value of the first light wave after attenuation, the second sub-grayscale value represents the grayscale value of the second light wave after attenuation, and the third sub-grayscale value represents the grayscale value of the third light wave after attenuation.
The sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value is determined as the third grayscale value corresponding to the pixel.
For example, if the grayscale values of the first, second and third light waves are all 100, the attenuation value corresponding to the first and second light waves is 0.8, and the attenuation value corresponding to the third light wave is 0.7, then the first and second sub-grayscale values are both 80, the third sub-grayscale value is 70, and the third grayscale value is 230.
It should be understood that, in some embodiments, a pixel may correspond to one or two light waves; for example, the light wave emitted at a pixel may be a red light wave. In that case, the above technical solution is also applicable to obtain the third grayscale value corresponding to the pixel, which is not elaborated here in this embodiment.
Optionally, after the target pixels are compensated, the method further includes:
in a case that the augmented reality glasses generate a third image, increasing the grayscale value corresponding to the third image.
In this embodiment, after the compensation parameter is determined, if the augmented reality glasses generate a third image, where the third image is an image different from the first image, the grayscale value corresponding to the third image is increased. An optional implementation may be to raise the grayscale value at the target pixels in the third image, where the added grayscale value is the compensation value.
How to obtain the first grayscale value corresponding to each pixel is described below.
Optionally, the obtaining a first grayscale value corresponding to each pixel in the first image includes:
for any pixel of the first image, obtaining a first pixel value, a second pixel value and a third pixel value of the pixel; calculating a first product of the first pixel value and a first weight value, a second product of the second pixel value and a second weight value, and a third product of the third pixel value and a third weight value; and determining the average of the first product, the second product and the third product as the first grayscale value of the pixel.
It should be understood that, in some embodiments, an image sensor may be used to directly obtain the grayscale value of each pixel.
In this embodiment, the grayscale value of a pixel can be determined based on the first, second and third pixel values of the pixel.
The pixel values of a pixel are its RGB values: the first pixel value can be understood as the R value, the second pixel value as the G value, and the third pixel value as the B value.
It should be noted that if the first, second and third pixel values of an image are all 255, the image is a white image; if the first, second and third pixel values are all 0, the image is a black image; if the first, second and third pixel values are equal but not 255 or 0, the image is a grayscale image.
An optional implementation is to obtain the first product of the first pixel value and the first weight value, the second product of the second pixel value and the second weight value, and the third product of the third pixel value and the third weight value, and to determine the average of the first product, the second product and the third product as the first grayscale value of the pixel. The first, second and third weight values are all empirical values that can be set as desired, which is not specifically limited in this embodiment.
For example, if the first pixel value of a pixel is 230 with a first weight value of 0.8, the second pixel value is 240 with a second weight value of 0.7, and the third pixel value is 200 with a third weight value of 0.6, then the first product is 184, the second product is 168, the third product is 120, and the first grayscale value of the pixel is approximately 157.
As shown in FIG. 3, the image processing apparatus 200 includes:
an acquisition module 201, configured to obtain a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image;
a determination module 202, configured to determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel.
Optionally, the determination module 202 further includes:
a determining unit, configured to determine, according to the first and second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel, where M is a positive integer;
a compensation unit, configured to compensate the M target pixels according to the M compensation values.
Optionally, the determining unit is further configured to:
for any pixel, determine a third grayscale value corresponding to the pixel, where the third grayscale value is determined by the N light waves corresponding to the pixel and the attenuation value corresponding to each light wave, the wavelength corresponding to each light wave is different, and N is a positive integer;
determine the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as a grayscale difference;
in a case that the grayscale difference is greater than or equal to a preset threshold, determine the pixel as a target pixel, and determine the grayscale difference as a compensation value.
Optionally, the determining unit is further configured to:
determine the product of the grayscale value of the first light wave and the corresponding first attenuation value as a first sub-grayscale value;
determine the product of the grayscale value of the second light wave and the corresponding second attenuation value as a second sub-grayscale value;
determine the product of the grayscale value of the third light wave and the corresponding third attenuation value as a third sub-grayscale value;
determine the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value as the third grayscale value.
Optionally, the image processing apparatus 200 further includes:
an adding module, configured to increase the grayscale value corresponding to a third image in a case that the augmented reality glasses generate the third image.
Optionally, the acquisition module 201 is further configured to:
for any pixel of the first image, obtain a first pixel value, a second pixel value and a third pixel value of the pixel;
calculate a first product of the first pixel value and the corresponding first weight value, a second product of the second pixel value and the corresponding second weight value, and a third product of the third pixel value and the corresponding third weight value;
determine the average of the first product, the second product and the third product as the first grayscale value of the pixel.
The image processing apparatus in the embodiments of this application may be AR glasses or another projection device, a mobile terminal, or a component, integrated circuit or chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, which is not specifically limited in the embodiments of this application.
The image processing apparatus in the embodiments of this application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The image processing apparatus provided by the embodiments of this application can implement each process implemented by the image processing method in the method embodiment of FIG. 2; to avoid repetition, details are not repeated here.
In the embodiments of this application, the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Based on the first grayscale value and the second grayscale value, a compensation parameter is then determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
Optionally, an embodiment of this application further provides an electronic device, including a processor 310, a memory 309, and a program or instruction stored in the memory 309 and executable on the processor 310, where the program or instruction, when executed by the processor 310, implements each process of the above image processing method embodiments and achieves the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of this application include the mobile electronic devices and non-mobile electronic devices described above.
FIG. 4 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 300 includes but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, and other components.
Those skilled in the art will understand that the electronic device 300 may further include a power source (such as a battery) supplying power to the various components; the power source may be logically connected to the processor 310 through a power management system, so that charging, discharging, power consumption management and other functions are managed through the power management system. The structure of the electronic device shown in FIG. 4 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different arrangement of components, which will not be repeated here.
The processor 310 is configured to obtain a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image;
and determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel.
Optionally, the processor 310 is further configured to determine, according to the first and second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each target pixel;
and compensate the M target pixels according to the M compensation values.
Optionally, the processor 310 is further configured to determine, for any pixel, a third grayscale value corresponding to the pixel;
determine the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as a grayscale difference;
in a case that the grayscale difference is greater than or equal to a preset threshold, determine the pixel as a target pixel, and determine the grayscale difference as a compensation value.
Optionally, the processor 310 is further configured to determine the product of the grayscale value of the first light wave and the corresponding first attenuation value as a first sub-grayscale value;
determine the product of the grayscale value of the second light wave and the corresponding second attenuation value as a second sub-grayscale value;
determine the product of the grayscale value of the third light wave and the corresponding third attenuation value as a third sub-grayscale value;
determine the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value as the third grayscale value.
Optionally, the processor 310 is further configured to increase the grayscale value corresponding to a third image in a case that the augmented reality glasses generate the third image.
Optionally, the processor 310 is further configured to obtain, for any pixel of the first image, a first pixel value, a second pixel value and a third pixel value of the pixel;
calculate a first product of the first pixel value and the corresponding first weight value, a second product of the second pixel value and the corresponding second weight value, and a third product of the third pixel value and the corresponding third weight value;
determine the average of the first product, the second product and the third product as the first grayscale value of the pixel.
In the embodiments of this application, the first grayscale value of each pixel in the first image is compared with the second grayscale value of each pixel in the second image, where the first image is an image generated by the AR glasses and the second image is the output image of the AR glasses. Based on the first grayscale value and the second grayscale value, a compensation parameter is then determined to compensate the image generated by the AR glasses, so as to eliminate the optical signal attenuation caused by the diffraction of light waves in the waveguide sheets, thereby improving the imaging effect of the output image of the waveguide sheets.
An embodiment of this application further provides a readable storage medium, storing a program or instruction, where the program or instruction, when executed by a processor, implements each process of the above image processing method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of this application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement each process of the above image processing method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be referred to as a system-on-chip, a system chip, a chip system, or a system-on-a-chip.
An embodiment of this application provides a computer program product, where the computer program product is stored in a non-volatile storage medium and is executed by at least one processor to implement each process of the above method embodiments with the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of this application provides an electronic device, configured to perform each process of each embodiment of the above method with the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that includes the element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of this application is not limited to performing the functions in the order shown or discussed; it may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the specific implementations described above. The specific implementations described above are merely illustrative rather than restrictive. Inspired by this application, a person of ordinary skill in the art can devise many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (17)

  1. An image processing method, applied to augmented reality glasses, the method comprising:
    obtaining a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image, wherein the first image is an image generated by the augmented reality glasses, and the second image is an output image of the augmented reality glasses;
    determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel, wherein the compensation parameter is used to compensate an image generated by the augmented reality glasses.
  2. The method according to claim 1, wherein the determining a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel comprises:
    determining, according to the first grayscale values and the second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each of the target pixels, M being a positive integer;
    compensating the M target pixels according to the M compensation values.
  3. The method according to claim 2, wherein the determining, according to the first grayscale values and the second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each of the target pixels comprises:
    for any pixel, determining a third grayscale value corresponding to the pixel, wherein the third grayscale value is determined by N light waves corresponding to the pixel and an attenuation value corresponding to each of the light waves, the wavelength corresponding to each light wave is different, and N is a positive integer;
    determining the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as a grayscale difference;
    in a case that the grayscale difference is greater than or equal to a preset threshold, determining the pixel as a target pixel, and determining the grayscale difference as a compensation value.
  4. The method according to claim 3, wherein the N light waves comprise a first light wave, a second light wave and a third light wave, and the determining a third grayscale value corresponding to the pixel comprises:
    determining the product of the grayscale value of the first light wave and a corresponding first attenuation value as a first sub-grayscale value;
    determining the product of the grayscale value of the second light wave and a corresponding second attenuation value as a second sub-grayscale value;
    determining the product of the grayscale value of the third light wave and a corresponding third attenuation value as a third sub-grayscale value;
    determining the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value as the third grayscale value.
  5. The method according to claim 2, wherein after the compensating the target pixels, the method further comprises:
    in a case that the augmented reality glasses generate a third image, increasing the grayscale value corresponding to the third image, wherein the grayscale value added to the third image is the compensation value.
  6. The method according to claim 1, wherein the obtaining a first grayscale value corresponding to each pixel in a first image comprises:
    for any pixel of the first image, obtaining a first pixel value, a second pixel value and a third pixel value of the pixel;
    calculating a first product of the first pixel value and a corresponding first weight value, a second product of the second pixel value and a corresponding second weight value, and a third product of the third pixel value and a corresponding third weight value;
    determining the average of the first product, the second product and the third product as the first grayscale value of the pixel.
  7. An image processing apparatus, the apparatus comprising:
    an acquisition module, configured to obtain a first grayscale value corresponding to each pixel in a first image, and a second grayscale value corresponding to each pixel in a second image, wherein the first image is an image generated by augmented reality glasses, and the second image is an output image of the augmented reality glasses;
    a determination module, configured to determine a compensation parameter according to the first grayscale value and the second grayscale value corresponding to each pixel, wherein the compensation parameter is used to compensate an image generated by the augmented reality glasses.
  8. The apparatus according to claim 7, wherein the determination module further comprises:
    a determining unit, configured to determine, according to the first grayscale values and the second grayscale values corresponding to all pixels in the first image, M target pixels in the first image and a compensation value corresponding to each of the target pixels, M being a positive integer;
    a compensation unit, configured to compensate the M target pixels according to the M compensation values.
  9. The apparatus according to claim 8, wherein the determining unit is further configured to:
    for any pixel, determine a third grayscale value corresponding to the pixel, wherein the third grayscale value is determined by N light waves corresponding to the pixel and an attenuation value corresponding to each of the light waves, the wavelength corresponding to each light wave is different, and N is a positive integer;
    determine the absolute value of the difference between the second grayscale value corresponding to the pixel and the third grayscale value as a grayscale difference;
    in a case that the grayscale difference is greater than or equal to a preset threshold, determine the pixel as a target pixel, and determine the grayscale difference as a compensation value.
  10. The apparatus according to claim 9, wherein the determining unit is further configured to:
    determine the product of the grayscale value of the first light wave and a corresponding first attenuation value as a first sub-grayscale value;
    determine the product of the grayscale value of the second light wave and a corresponding second attenuation value as a second sub-grayscale value;
    determine the product of the grayscale value of the third light wave and a corresponding third attenuation value as a third sub-grayscale value;
    determine the sum of the first sub-grayscale value, the second sub-grayscale value and the third sub-grayscale value as the third grayscale value.
  11. The apparatus according to claim 8, wherein the apparatus further comprises:
    an adding module, configured to increase, in a case that the augmented reality glasses generate a third image, the grayscale value corresponding to the third image, wherein the grayscale value added to the third image is the compensation value.
  12. The apparatus according to claim 7, wherein the acquisition module is further configured to:
    for any pixel of the first image, obtain a first pixel value, a second pixel value and a third pixel value of the pixel;
    calculate a first product of the first pixel value and a corresponding first weight value, a second product of the second pixel value and a corresponding second weight value, and a third product of the third pixel value and a corresponding third weight value;
    determine the average of the first product, the second product and the third product as the first grayscale value of the pixel.
  13. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
  14. A readable storage medium, storing a program or instruction, wherein the program or instruction, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
  15. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the steps of the method according to any one of claims 1 to 6.
  16. A computer program product, stored in a non-volatile storage medium, wherein the computer program product is executed by at least one processor to implement the steps of the method according to any one of claims 1 to 6.
  17. An electronic device, configured to execute the steps of the method according to any one of claims 1 to 6.
PCT/CN2022/074651 2021-02-04 2022-01-28 Image processing method and apparatus, and electronic device WO2022166819A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22749098.4A EP4280601A1 (en) 2021-02-04 2022-01-28 Image processing method and apparatus, and electronic device
US18/229,660 US20230377503A1 (en) 2021-02-04 2023-08-02 Image processing method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110156335.2A 2021-02-04 2021-02-04 Image processing method and apparatus, and electronic device
CN202110156335.2 2021-02-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/229,660 Continuation US20230377503A1 (en) 2021-02-04 2023-08-02 Image processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022166819A1 (zh)

Family

ID=76347152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074651 2021-02-04 2022-01-28 Image processing method and apparatus, and electronic device WO2022166819A1 (zh)

Country Status (4)

Country Link
US (1) US20230377503A1 (zh)
EP (1) EP4280601A1 (zh)
CN (1) CN112995645B (zh)
WO (1) WO2022166819A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112995645B (zh) 2021-02-04 2022-12-27 维沃移动通信有限公司 Image processing method and apparatus, and electronic device
CN114245004B (zh) 2021-11-29 2023-06-23 浙江大华技术股份有限公司 Image compensation method and system, hard disk video recorder, and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130342585A1 (en) * 2012-06-20 2013-12-26 Samsung Display Co., Ltd. Image processing apparatus and method
WO2017219433A1 (zh) * 2016-06-20 2017-12-28 东莞市长资实业有限公司 Optical device for a waveguide-type head-mounted display
US20190011735A1 (en) * 2016-10-28 2019-01-10 Boe Technology Group Co., Ltd. Display Panel and Display Device
US20200051483A1 (en) * 2018-08-07 2020-02-13 Facebook Technologies, Llc Error correction for display device
WO2020136306A1 (en) * 2018-12-27 2020-07-02 Nokia Technologies Oy Apparatus, method, and system for use in a display
CN111487774A (zh) * 2020-05-15 2020-08-04 北京至格科技有限公司 Augmented reality display device
CN112995645A (zh) * 2021-02-04 2021-06-18 维沃移动通信有限公司 Image processing method and apparatus, and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105609061B (zh) * 2014-11-20 2019-03-01 深圳光峰科技股份有限公司 Modulation method, apparatus and system for an LCOS liquid crystal device
CN105991980B (zh) * 2015-02-06 2019-05-03 深圳光峰科技股份有限公司 Projection system and control method thereof
CN104575439B (zh) * 2015-02-15 2017-01-18 北京京东方多媒体科技有限公司 Display compensation method and apparatus, and display device
CN108281110B (zh) * 2018-01-12 2020-03-10 深圳市华星光电半导体显示技术有限公司 Brightness compensation method and related products
US11127356B2 (en) * 2019-01-04 2021-09-21 Chengdu Boe Optoelectronics Technology Co., Ltd. Method for compensating brightness unevenness of a display device and related display device
CN109831658A (zh) * 2019-04-03 2019-05-31 贵安新区新特电动汽车工业有限公司 Projection light color adjustment method and projection light color adjustment apparatus
CN110618528A (zh) * 2019-08-09 2019-12-27 成都理想境界科技有限公司 Near-eye display device and color feedback method

Also Published As

Publication number Publication date
US20230377503A1 (en) 2023-11-23
EP4280601A1 (en) 2023-11-22
CN112995645B (zh) 2022-12-27
CN112995645A (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2022166819A1 (zh) Image processing method and apparatus, and electronic device
US10388004B2 (en) Image processing method and apparatus
US11750785B2 (en) Video signal processing method and apparatus
EP3771204A1 (en) Projection system and image uniformity compensation method thereof
US9532023B2 (en) Color reproduction of display camera system
US20220092803A1 (en) Picture rendering method and apparatus, terminal and corresponding storage medium
JP6360965B2 (ja) Image display method and display system
JP2017527848A (ja) Method for setting grayscale values of a liquid crystal panel, and liquid crystal display
WO2016045146A1 (zh) Image color enhancement method and apparatus for a display
WO2023016320A1 (zh) Image processing method, apparatus, device and medium
US11010879B2 (en) Video image processing method and apparatus thereof, display device, computer readable storage medium and computer program product
US8837829B2 (en) Image processing apparatus, storage medium storing image processing program, and image processing method
WO2022042754A1 (zh) Image processing method, apparatus and device
CN110618852B (zh) View processing method, view processing apparatus and terminal device
CN113079362B (zh) Video signal processing method and apparatus, and electronic device
JP5234849B2 (ja) Display device, image correction system, and image correction method
WO2022042753A1 (zh) Photographing method and apparatus, and electronic device
WO2019061655A1 (zh) Liquid crystal display driving method and system, and computer-readable medium
JP5321089B2 (ja) Image processing device, image display device, and image processing method
WO2023125064A1 (zh) Fur rendering method, apparatus, device and medium
CN113727088B (zh) 3D glasses control system, method, apparatus, storage medium and terminal
WO2016169205A1 (zh) Display method and display device
TWI566232B (zh) Image brightness adjustment circuit
CN115802174A (zh) 显示设备的白画面校正方法、装置、存储介质与电子设备
US20080122984A1 (en) Image processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22749098

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022749098

Country of ref document: EP

Effective date: 20230818

NENP Non-entry into the national phase

Ref country code: DE