CN111510636A - Method and device for acquiring brightness evaluation value and computer storage medium - Google Patents


Info

Publication number
CN111510636A
CN111510636A (application number CN201910098291.5A; granted as CN111510636B)
Authority
CN
China
Prior art keywords
image
rgb
sum
evaluation value
ratio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910098291.5A
Other languages
Chinese (zh)
Other versions
CN111510636B (en)
Inventor
孙超伟
林一育
唐道龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910098291.5A
Publication of CN111510636A
Application granted
Publication of CN111510636B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application disclose a method and an apparatus for acquiring a brightness evaluation value, and a computer storage medium. The method acquires RGB statistical information of a first image, where the first image is captured by an image sensor and the RGB statistical information comprises the R value, G value and B value of each pixel contained in the first image; converts the RGB statistical information into color information in a three-dimensional space; and obtains the brightness evaluation value according to the color information in the three-dimensional space and the shooting parameters of the first image, which can effectively improve the accuracy of the brightness evaluation value.

Description

Method and device for acquiring brightness evaluation value and computer storage medium
Technical Field
The present application relates to the field of computer application technologies, and in particular, to a method and an apparatus for obtaining a luminance evaluation value, and a computer storage medium.
Background
A camera must work in a variety of scenes. In a high-brightness scene, an infrared cut filter is usually placed at the camera lens to remove the near-infrared component of natural light, so that the color information of the actual scene is fully restored. In a low-brightness scene, a visible-light fill lamp can be turned on to raise the scene brightness; if turning on the visible-light fill lamp still cannot meet the requirements of color rendering, the working mode of the camera can be switched to the black-and-white mode and the infrared cut filter removed, so that infrared fill light can be applied and the imaging quality of the camera in low light improved.
Switching between the color mode and the black-and-white mode, and switching the fill light on and off, both depend on the image processing device's measurement of the visible-light brightness evaluation value of the current scene. At present, the brightness evaluation value is commonly measured with a photoresistor, a resistive device based on the photoconductive effect: within its photosensitive range, the higher the brightness, the smaller its resistance, and the lower the brightness, the larger its resistance. However, the photoresistor has some inherent disadvantages that make its measurement of light brightness inaccurate. For example, the photoresistor has a relatively long photoelectric relaxation process, i.e., when the light intensity changes, its resistance takes some time to stabilize, so it cannot accurately track brightness that changes from high to low or vice versa. For another example, the photoresistor is easily disturbed by light sources such as car headlights and street lamps, so the brightness of natural light cannot be obtained accurately. For yet another example, photoresistors mounted at different positions may produce widely differing brightness evaluation values.
Disclosure of Invention
The embodiment of the invention provides a method and a device for acquiring a brightness evaluation value and a computer storage medium, which can effectively improve the accuracy of the brightness evaluation value.
In a first aspect, an embodiment of the present application provides a method for acquiring a luminance evaluation value, where an image processing device acquires RGB statistical information of a first image, converts the RGB statistical information into color information of a three-dimensional space, and obtains a luminance evaluation value according to the color information of the three-dimensional space and shooting parameters of the first image.
The first image is acquired through an image sensor, and the RGB statistical information comprises R values, G values and B values of all pixels contained in the first image.
In this technical solution, the image processing device converts the RGB statistical information of the first image into color information in a three-dimensional space and obtains the brightness evaluation value according to that color information and the shooting parameters of the first image. Compared with measuring the brightness evaluation value with a hardware device such as a photoresistor, evaluating brightness directly from the first image captured by the image sensor improves the accuracy of the brightness evaluation value.
In one possible implementation, the first image is collected by an image sensor in a black-and-white mode and mixed light scene, and the mixed light includes visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image processing device obtains the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, acquiring a first RGB sum of the first image, and dividing the first RGB sum by the reference factor to obtain a visible light brightness evaluation value under a mixed light scene. Wherein the first RGB sum is the RGB sum of the first image for visible light.
In this technical solution, in a scene where the infrared fill lamp is turned on, the image processing device can eliminate the interference of the infrared fill lamp with the image brightness, obtain the visible light brightness evaluation value in the mixed-light scene, and thereby improve the accuracy of the visible light brightness evaluation value.
In a possible implementation manner, before the image processing apparatus divides the first RGB sum by the reference factor to obtain the evaluation value of the brightness of the visible light in the mixed light scene, the image processing apparatus may obtain a second RGB sum of the first image, obtain a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and obtain a second spatial distance between the infrared light and the visible light in the three-dimensional space, divide the first spatial distance by the second spatial distance to obtain a first ratio, and multiply the first ratio by the second RGB sum to obtain the first RGB sum. Wherein the second RGB sum is the first image RGB sum for the mixed light.
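The visible-light computation described above can be sketched as follows. This is an illustrative Python reconstruction of the patent's description, not part of the patent text; the function and parameter names (`visible_light_evaluation`, `d1`, `d2`, etc.) are hypothetical, and the channel means and the two spatial distances are assumed to be given.

```python
def visible_light_evaluation(r_mean, g_mean, b_mean,
                             d1, d2,
                             aperture_gain, shutter_time, gain):
    """Visible light brightness evaluation value in a mixed-light scene.

    d1: first spatial distance (mixed light vs. infrared light)
    d2: second spatial distance (infrared light vs. visible light)
    """
    # second RGB sum: RGB sum of the first image for the mixed light
    second_rgb_sum = r_mean + g_mean + b_mean
    # first ratio: share of the mixed light attributed to visible light
    first_ratio = d1 / d2
    # first RGB sum: RGB sum of the first image for visible light only
    first_rgb_sum = first_ratio * second_rgb_sum
    # reference factor from the shooting parameters
    reference_factor = aperture_gain * shutter_time * gain
    return first_rgb_sum / reference_factor
```

For example, with channel means of 100 each, d1 = 0.5, d2 = 1.0, aperture gain 2, shutter time 5 and gain 1, the evaluation value is (0.5 × 300) / 10 = 15.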
In one possible implementation, the first image is collected by an image sensor in a black-and-white mode and mixed light scene, and the mixed light includes visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image sensor obtains a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, obtaining a third RGB sum of the first image, and dividing the third RGB sum by the reference factor to obtain an infrared light brightness evaluation value under a mixed light scene. Wherein the third RGB sum is the sum of RGB of the first image for infrared light.
In this technical solution, in a scene where the infrared fill lamp is turned on, the image processing device can eliminate the interference of visible light with the image brightness, obtain the infrared light brightness evaluation value in the mixed-light scene, and thereby improve the accuracy of the infrared light brightness evaluation value.
In a possible implementation manner, before the image processing device divides the third RGB sum by the reference factor to obtain the infrared light brightness evaluation value in the mixed light scene, the image processing device may obtain the second RGB sum of the first image, obtain a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and obtain a second spatial distance between the infrared light and the visible light in the three-dimensional space, divide a difference value between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio, and multiply the second ratio by the second RGB sum to obtain the third RGB sum. Wherein the second RGB sum is the first image RGB sum for the mixed light.
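The infrared counterpart uses the complementary ratio (d2 − d1) / d2. The sketch below is again an illustrative reconstruction with hypothetical names; note that, because the two ratios sum to one, the visible and infrared RGB sums together recover the mixed-light (second) RGB sum.

```python
def infrared_light_evaluation(r_mean, g_mean, b_mean,
                              d1, d2,
                              aperture_gain, shutter_time, gain):
    """Infrared light brightness evaluation value in a mixed-light scene."""
    # second RGB sum: RGB sum of the first image for the mixed light
    second_rgb_sum = r_mean + g_mean + b_mean
    # second ratio: share of the mixed light attributed to infrared light
    second_ratio = (d2 - d1) / d2
    # third RGB sum: RGB sum of the first image for infrared light only
    third_rgb_sum = second_ratio * second_rgb_sum
    reference_factor = aperture_gain * shutter_time * gain
    return third_rgb_sum / reference_factor
```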
In a possible implementation manner, the acquiring, by the image processing device, a first spatial distance between the mixed light and the infrared light in the three-dimensional space may specifically be: the method comprises the steps of acquiring a second image acquired by an image sensor in a black-and-white mode in the presence of only infrared light, acquiring a third ratio between a first R mean value and a second RGB sum of the first image, a fourth ratio between a first G mean value and the second RGB sum, and a fifth ratio between a first B mean value and the second RGB sum, acquiring a sixth ratio between a second R mean value and a fourth RGB sum of the second image, a seventh ratio between the second G mean value and the fourth RGB sum, and an eighth ratio between the second B mean value and the fourth RGB sum of the second image, and performing distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio and the eighth ratio by using an Euclidean distance algorithm to obtain a first spatial distance. Wherein the fourth RGB sum is the sum of RGB of the second image for infrared light.
In a possible implementation manner, the acquiring, by the image processing device, of the second spatial distance between the infrared light and the visible light in the three-dimensional space may specifically be: acquiring a second image collected by the image sensor in the black-and-white mode in a scene with only infrared light, and a third image collected by the image sensor in the black-and-white mode in a scene with only visible light; acquiring a sixth ratio between the second R mean value of the second image and the fourth RGB sum of the second image, a seventh ratio between the second G mean value and the fourth RGB sum, and an eighth ratio between the second B mean value and the fourth RGB sum; acquiring a ninth ratio between the third R mean value of the third image and the fifth RGB sum of the third image, a tenth ratio between the third G mean value and the fifth RGB sum, and an eleventh ratio between the third B mean value and the fifth RGB sum; and performing a distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio and the eleventh ratio by using the Euclidean distance algorithm to obtain the second spatial distance. Wherein the fourth RGB sum is the RGB sum of the second image for the infrared light, and the fifth RGB sum is the RGB sum of the third image for the visible light.
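The two spatial distances reduce to Euclidean distances between normalized RGB points, one point per image. A minimal sketch, assuming the per-image channel means are already available (function names are hypothetical):

```python
import math

def chromaticity(r_mean, g_mean, b_mean):
    """Normalize the channel means by their RGB sum,
    giving a point in the three-dimensional color space."""
    s = r_mean + g_mean + b_mean
    return (r_mean / s, g_mean / s, b_mean / s)

def spatial_distance(p, q):
    """Euclidean distance between two such points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# first spatial distance: mixed-light first image vs. infrared-only second image
# second spatial distance: infrared-only second image vs. visible-only third image
```

The gray image (equal means) maps to (1/3, 1/3, 1/3), so images with the same color cast but different exposure map to the same point, which is what makes the ratio-based split above exposure-independent.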
In a possible implementation manner, the image processing apparatus may further add the first R mean, the first G mean, and the first B mean of the first image to obtain a second RGB sum.
In a possible implementation manner, when the image processing device adds the first R mean value, the first G mean value and the first B mean value of the first image to obtain the second RGB sum, it may divide the first image into regions to obtain a plurality of pixel blocks; perform an average operation on the R values, G values and B values of all pixels included in each pixel block to obtain the R mean value, G mean value and B mean value of each pixel block; obtain the pixel blocks whose R mean value, G mean value and B mean value are all smaller than a preset threshold; and perform an average operation on the R mean values of all the obtained pixel blocks to obtain the first R mean value, on the G mean values to obtain the first G mean value, and on the B mean values to obtain the first B mean value. The R value of any pixel is the color value of the pixel in the red channel, the G value is the color value in the green channel, and the B value is the color value in the blue channel.
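The block-based averaging can be sketched as follows, assuming an H×W×3 array of RGB values. This is an illustrative reconstruction (the patent does not fix a block size or threshold value, and the fallback when every block exceeds the threshold is an assumption):

```python
import numpy as np

def block_channel_means(image, block=16, threshold=200):
    """Split the image into block x block pixel blocks, average each
    channel per block, keep only blocks whose R, G and B means are all
    below the threshold (discarding e.g. over-exposed regions), and
    average the kept blocks to get the first R/G/B mean values."""
    h, w, _ = image.shape
    means = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            means.append(image[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0))
    means = np.array(means)
    kept = means[(means < threshold).all(axis=1)]
    if kept.size == 0:
        # assumption: if no block passes the threshold, fall back to all blocks
        kept = means
    r_mean, g_mean, b_mean = kept.mean(axis=0)
    return r_mean, g_mean, b_mean
```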
In one possible implementation, the first image is acquired by the image sensor in a color mode; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image processing device obtains the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, and dividing the RGB sum of the first image by the reference factor to obtain a visible light brightness evaluation value.
In a possible implementation manner, the image processing device may further acquire a plurality of frames of images acquired by the image sensor, acquire a luminance evaluation value of each frame of image, and set an operating mode of the image sensor according to the luminance evaluation value of each frame of image.
In a possible implementation manner, the image processing device sets an operation mode of the image sensor according to the luminance evaluation value of each frame of image, and specifically may be: when the number of the images with the luminance evaluation values smaller than the first preset brightness threshold value is larger than the preset number threshold value, the image sensor is switched from the color mode to the black-and-white mode, and the images with the luminance evaluation values smaller than the first preset brightness threshold value are continuously distributed in the multi-frame images.
In a possible implementation manner, the image processing device sets an operation mode of the image sensor according to the luminance evaluation value of each frame of image, and specifically may be: determining an image with a brightness evaluation value smaller than a first preset brightness threshold, acquiring the acquisition time length between the earliest acquired image and the latest acquired image in the determined image, wherein the determined images are continuously distributed in the multi-frame image, and when the acquisition time length is larger than a preset time threshold, switching the image sensor from a color mode to a black-and-white mode.
In a possible implementation manner, the image processing device sets an operation mode of the image sensor according to the luminance evaluation value of each frame of image, and specifically may be: when the number of the images with the luminance evaluation values larger than the second preset brightness threshold value is larger than the preset number threshold value, the image sensor is switched from the black-and-white mode to the color mode, and the images with the luminance evaluation values larger than the second preset brightness threshold value are continuously distributed in the multi-frame images.
In a possible implementation manner, the image processing device sets an operation mode of the image sensor according to the luminance evaluation value of each frame of image, and specifically may be: determining an image with a brightness evaluation value larger than a second preset brightness threshold, acquiring the acquisition time length between the earliest acquired image and the latest acquired image in the determined image, wherein the determined images are continuously distributed in the multi-frame image, and when the acquisition time length is larger than the preset time threshold, switching the image sensor from a black-and-white mode to a color mode.
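The count-based switching rule in the implementations above can be sketched as a check for a long enough run of consecutive frames past a threshold. This is an illustrative sketch with hypothetical names; the duration-based variant would compare capture timestamps of the first and last frame in the run against a time threshold instead of counting frames.

```python
def should_switch(evaluations, threshold, count_threshold, below=True):
    """Return True when the number of *consecutive* frames whose
    brightness evaluation value is below (or, with below=False,
    above) the threshold exceeds count_threshold."""
    run = best = 0
    for value in evaluations:
        hit = value < threshold if below else value > threshold
        run = run + 1 if hit else 0
        best = max(best, run)
    return best > count_threshold

# color -> black-and-white: enough consecutive frames below the first threshold
# black-and-white -> color: enough consecutive frames above the second threshold
```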
In one possible implementation, the light brightness evaluation value includes a visible light brightness evaluation value;
after the image processing device obtains the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the image processing device can also control the visible light supplementary lighting lamp according to the visible light brightness evaluation value.
In a possible implementation manner, the image processing device controls the visible light fill-in lamp according to the visible light brightness evaluation value, and specifically may be: and according to the visible light brightness evaluation value, turning on or turning off the visible light supplementary lamp.
In a possible implementation manner, the image processing device controls the visible light fill-in lamp according to the visible light brightness evaluation value, and specifically may be: and adjusting the brightness of the visible light supplementary lighting lamp according to the visible light brightness evaluation value.
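The two fill-light controls (on/off and brightness adjustment) could be combined as below. This sketch goes beyond the patent text: the hysteresis thresholds, the linear dimming curve and the `lamp` dictionary are all assumptions for illustration only.

```python
def control_visible_fill_light(evaluation, low, high, lamp):
    """Hypothetical control: turn the visible-light fill lamp fully on
    below `low`, off above `high`, and linearly dim it in between."""
    if evaluation < low:
        lamp['on'], lamp['level'] = True, 100
    elif evaluation > high:
        lamp['on'], lamp['level'] = False, 0
    else:
        lamp['on'] = True
        lamp['level'] = round(100 * (high - evaluation) / (high - low))
    return lamp
```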
In one possible implementation, the light brightness evaluation value includes an infrared light brightness evaluation value;
after the image processing device obtains the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the image processing device can also control the infrared light supplement lamp according to the infrared light brightness evaluation value.
In a second aspect, an embodiment of the present application provides a method for switching an operating mode, where an image processing device acquires RGB statistical information of a first image, converts the RGB statistical information into color information of a three-dimensional space, obtains a luminance evaluation value according to the color information of the three-dimensional space and shooting parameters of the first image, and sets an operating mode of an image sensor according to the luminance evaluation value.
The first image is acquired through an image sensor, and the RGB statistical information comprises R values, G values and B values of all pixels contained in the first image.
In this technical solution, the image processing device converts the RGB statistical information of the first image into color information in a three-dimensional space and obtains the brightness evaluation value according to that color information and the shooting parameters of the first image. Compared with measuring the brightness evaluation value with a hardware device such as a photoresistor, evaluating brightness directly from the first image captured by the image sensor improves the accuracy of the brightness evaluation value. Further, the image processing device sets the working mode according to the brightness evaluation value, so that the working mode can be switched effectively.
In one possible implementation, the first image is collected by an image sensor in a black-and-white mode and mixed light scene, and the mixed light includes visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image processing device obtains the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, acquiring a first RGB sum of the first image, and dividing the first RGB sum by the reference factor to obtain a visible light brightness evaluation value under a mixed light scene. Wherein the first RGB sum is the RGB sum of the first image for visible light.
In this technical solution, in a scene where the infrared fill lamp is turned on, the image processing device can eliminate the interference of the infrared fill lamp with the image brightness, obtain the visible light brightness evaluation value in the mixed-light scene, and thereby improve the accuracy of the visible light brightness evaluation value.
In a possible implementation manner, before the image processing apparatus divides the first RGB sum by the reference factor to obtain the evaluation value of the brightness of the visible light in the mixed light scene, the image processing apparatus may obtain a second RGB sum of the first image, obtain a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and obtain a second spatial distance between the infrared light and the visible light in the three-dimensional space, divide the first spatial distance by the second spatial distance to obtain a first ratio, and multiply the first ratio by the second RGB sum to obtain the first RGB sum. Wherein the second RGB sum is the first image RGB sum for the mixed light.
In one possible implementation, the first image is collected by an image sensor in a black-and-white mode and mixed light scene, and the mixed light includes visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image sensor obtains a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, obtaining a third RGB sum of the first image, and dividing the third RGB sum by the reference factor to obtain an infrared light brightness evaluation value under a mixed light scene. Wherein the third RGB sum is the sum of RGB of the first image for infrared light.
In this technical solution, in a scene where the infrared fill lamp is turned on, the image processing device can eliminate the interference of visible light with the image brightness, obtain the infrared light brightness evaluation value in the mixed-light scene, and thereby improve the accuracy of the infrared light brightness evaluation value.
In a possible implementation manner, before the image processing device divides the third RGB sum by the reference factor to obtain the infrared light brightness evaluation value in the mixed light scene, the image processing device may obtain the second RGB sum of the first image, obtain a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and obtain a second spatial distance between the infrared light and the visible light in the three-dimensional space, divide a difference value between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio, and multiply the second ratio by the second RGB sum to obtain the third RGB sum. Wherein the second RGB sum is the first image RGB sum for the mixed light.
In a possible implementation manner, the acquiring, by the image processing device, a first spatial distance between the mixed light and the infrared light in the three-dimensional space may specifically be: the method comprises the steps of acquiring a second image acquired by an image sensor in a black-and-white mode in the presence of only infrared light, acquiring a third ratio between a first R mean value and a second RGB sum of the first image, a fourth ratio between a first G mean value and the second RGB sum, and a fifth ratio between a first B mean value and the second RGB sum, acquiring a sixth ratio between a second R mean value and a fourth RGB sum of the second image, a seventh ratio between the second G mean value and the fourth RGB sum, and an eighth ratio between the second B mean value and the fourth RGB sum of the second image, and performing distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio and the eighth ratio by using an Euclidean distance algorithm to obtain a first spatial distance. Wherein the fourth RGB sum is the sum of RGB of the second image for infrared light.
In a possible implementation manner, the acquiring, by the image processing device, of the second spatial distance between the infrared light and the visible light in the three-dimensional space may specifically be: acquiring a second image collected by the image sensor in the black-and-white mode in a scene with only infrared light, and a third image collected by the image sensor in the black-and-white mode in a scene with only visible light; acquiring a sixth ratio between the second R mean value of the second image and the fourth RGB sum of the second image, a seventh ratio between the second G mean value and the fourth RGB sum, and an eighth ratio between the second B mean value and the fourth RGB sum; acquiring a ninth ratio between the third R mean value of the third image and the fifth RGB sum of the third image, a tenth ratio between the third G mean value and the fifth RGB sum, and an eleventh ratio between the third B mean value and the fifth RGB sum; and performing a distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio and the eleventh ratio by using the Euclidean distance algorithm to obtain the second spatial distance. Wherein the fourth RGB sum is the RGB sum of the second image for the infrared light, and the fifth RGB sum is the RGB sum of the third image for the visible light.
In a possible implementation manner, the image processing apparatus may further add the first R mean, the first G mean, and the first B mean of the first image to obtain a second RGB sum.
In a possible implementation manner, when the image processing device adds the first R mean value, the first G mean value and the first B mean value of the first image to obtain the second RGB sum, it may divide the first image into regions to obtain a plurality of pixel blocks; perform an average operation on the R values, G values and B values of all pixels included in each pixel block to obtain the R mean value, G mean value and B mean value of each pixel block; obtain the pixel blocks whose R mean value, G mean value and B mean value are all smaller than a preset threshold; and perform an average operation on the R mean values of all the obtained pixel blocks to obtain the first R mean value, on the G mean values to obtain the first G mean value, and on the B mean values to obtain the first B mean value. The R value of any pixel is the color value of the pixel in the red channel, the G value is the color value in the green channel, and the B value is the color value in the blue channel.
In one possible implementation, the first image is acquired by the image sensor in a color mode; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the image processing device obtains the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and specifically may be: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, and dividing the RGB sum of the first image by the reference factor to obtain a visible light brightness evaluation value.
In a possible implementation manner, the image processing apparatus sets the operating mode of the image sensor according to the light brightness evaluation value, and specifically may: switch the image sensor from the color mode to the black-and-white mode when the number of images whose brightness evaluation values are smaller than a first preset brightness threshold is larger than a preset number threshold, where the images whose brightness evaluation values are smaller than the first preset brightness threshold are distributed consecutively within multiple frames of images acquired by the image sensor.
In a possible implementation manner, the image processing apparatus sets the operating mode of the image sensor according to the light brightness evaluation value, and specifically may: determine the images whose brightness evaluation values are smaller than the first preset brightness threshold, and obtain the acquisition duration between the earliest acquired image and the latest acquired image among the determined images, where the determined images are distributed consecutively within multiple frames of images acquired by the image sensor; when the acquisition duration is larger than a preset time threshold, the image sensor is switched from the color mode to the black-and-white mode.
In a possible implementation manner, the image processing apparatus sets the operating mode of the image sensor according to the light brightness evaluation value, and specifically may: switch the image sensor from the black-and-white mode to the color mode when the number of images whose brightness evaluation values are larger than a second preset brightness threshold is larger than the preset number threshold, where the images whose brightness evaluation values are larger than the second preset brightness threshold are distributed consecutively within multiple frames of images acquired by the image sensor.
In a possible implementation manner, the image processing apparatus sets the operating mode of the image sensor according to the light brightness evaluation value, and specifically may: determine the images whose brightness evaluation values are larger than the second preset brightness threshold, and obtain the acquisition duration between the earliest acquired image and the latest acquired image among the determined images, where the determined images are distributed consecutively within multiple frames of images acquired by the image sensor; when the acquisition duration is larger than the preset time threshold, the image sensor is switched from the black-and-white mode to the color mode.
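The four switching policies above share one pattern: count a run of consecutive frames that cross a brightness threshold, and flip the operating mode when the run is long enough. The sketch below illustrates the count-based variants in both directions; all names and threshold values are illustrative assumptions, not part of the embodiment.

```python
def longest_run(flags):
    # Length of the longest run of consecutive True values.
    best = cur = 0
    for f in flags:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def next_mode(mode, evaluations, low_threshold, high_threshold, count_threshold):
    # Color -> black-and-white: enough consecutive dark frames.
    if mode == "color":
        dark = [v < low_threshold for v in evaluations]
        if longest_run(dark) > count_threshold:
            return "black_and_white"
    # Black-and-white -> color: enough consecutive bright frames.
    elif mode == "black_and_white":
        bright = [v > high_threshold for v in evaluations]
        if longest_run(bright) > count_threshold:
            return "color"
    return mode
```

The duration-based variants differ only in that the timestamps of the first and last frame of the run are compared against a time threshold instead of counting frames.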
In one possible implementation, the light brightness evaluation value includes a visible light brightness evaluation value;
after the image processing device obtains the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the image processing device may further control the visible light fill-in lamp according to the visible light brightness evaluation value.
In one implementation manner, the image processing device controls the visible light fill-in lamp according to the visible light brightness evaluation value, and specifically may: turn the visible light fill-in lamp on or off according to the visible light brightness evaluation value.
In one implementation manner, the image processing device controls the visible light fill-in lamp according to the visible light brightness evaluation value, and specifically may: adjust the brightness of the visible light fill-in lamp according to the visible light brightness evaluation value.
In one possible implementation, the light brightness evaluation value includes an infrared light brightness evaluation value;
after the image processing device obtains the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the image processing device can also control the infrared light supplement lamp according to the infrared light brightness evaluation value.
In a third aspect, an embodiment of the present application provides an image processing apparatus including means for implementing the method for acquiring a luminance evaluation value according to the first aspect or the method for switching an operation mode according to the second aspect.
In a fourth aspect, the present application provides a computer storage medium, wherein the computer storage medium stores a computer program or instructions, and when the program or instructions are executed by a processor, the processor is caused to execute the method according to the first aspect or the second aspect.
In a fifth aspect, an embodiment of the present application provides an image processing apparatus, including a processor, coupled with a memory,
the memory to store instructions;
the processor is configured to execute the instructions in the memory to cause the image processing apparatus to perform the method according to the first aspect or the second aspect.
In a sixth aspect, an embodiment of the present application provides a chip system, where the chip system includes a processor and an interface circuit, where the interface circuit is coupled to the processor,
the processor is configured to execute a computer program or instructions to implement the method according to the first aspect or the second aspect;
the interface circuit is used for communicating with other modules outside the chip system.
Drawings
Fig. 1 is a schematic flowchart of a method for switching operating modes according to an embodiment of the present application;
FIG. 2 is a schematic interface diagram illustrating the evaluation of luminance according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another method for switching operating modes, disclosed in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for obtaining a luminance evaluation value according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus disclosed in an embodiment of the present application.
Detailed Description
The embodiments of the present application relate to an image processing apparatus. The image processing apparatus may be a terminal, a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), a camera, or the like. The terminal device may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiving function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medicine (remote medical), a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on. The embodiments of the present application do not limit the specific technology and the specific device form adopted by the terminal device.
Taking a camera as an example of the image sensor, different operations need to be performed in different scenes. For example, in a scene where the visible light brightness is relatively high, an infrared cut filter may be added at the lens of the camera to filter out the near-infrared portion of natural light, so that the color information of the actual scene is fully restored. In a scene with low visible light brightness, such as a night scene with very weak external light, the fill-in lamp needs to be turned on to supplement the light and raise the scene brightness. If the conditions for color imaging still cannot be met even with the fill-in lamp turned on, the infrared cut filter added at the lens can be removed and the working mode switched from the color mode to the black-and-white mode, thereby improving the imaging effect of the camera in a low-illumination scene. However, both the control of the fill-in lamp (for example, turning it on, or adjusting its intensity) and the switching of the working mode depend on the image processing device's evaluation of the brightness of the current scene. The more accurate the brightness evaluation value of the current scene is, the more effectively mode switching based on that value can improve the imaging effect of the image sensor in a low-illumination scene; likewise, the more accurate the brightness evaluation value is, the more effectively fill-in lamp control based on that value can improve the imaging effect. Therefore, how to improve the accuracy of the brightness evaluation value is a technical problem that urgently needs to be solved.
To better understand the method, apparatus, and computer storage medium for switching the operating mode disclosed in the embodiments of the present application, a method for switching the operating mode is described first. Fig. 1 is a schematic flowchart of a method for switching an operating mode, which describes a specific scheme for switching the operating mode of an image sensor from the black-and-white mode to the color mode. The execution subject of the method may be an image processing apparatus or a chip applied to an image processing apparatus; the following description takes the image processing apparatus as the execution subject. The method includes, but is not limited to, the following steps:
step S101: the image processing device acquires RGB statistical information of the first image, the second image and the third image.
Specifically, the image sensor may acquire the first image in the black-and-white mode in a mixed light scene; the image sensor sends the first image to the image processing device, and the image processing device obtains the RGB statistical information of the first image. For example, the operating mode of the image sensor is controlled to be the black-and-white mode, and the image sensor is controlled to image the object in the mixed light scene to obtain the first image. For example, before the image sensor is controlled to image the shot object, the infrared cut (IR-CUT) dual filter may be removed, and the camera sensor is then controlled to image the shot object in the mixed light scene to obtain the first image. The IR-CUT dual filter is a set of filters built into the lens of the camera sensor; when an infrared sensing point outside the lens detects a change in light intensity, the built-in IR-CUT filter switches automatically according to the intensity of the external light. In the embodiment of the present application, removing the IR-CUT dual filter avoids its influence on the brightness evaluation value, so that the collected RGB statistical information can directly reflect the environmental information, such as the brightness evaluation value of the real environment.
The RGB statistical information of the first image may include the R value, the G value, and the B value of each pixel included in the first image. The R value of any pixel is the color value of the pixel in the red channel, the G value of any pixel is the color value of the pixel in the green channel, and the B value of any pixel is the color value of the pixel in the blue channel.
The image sensor may be a video camera or a still camera. The image sensor may be built into the image processing apparatus, or may establish a communication connection with the image processing apparatus through wireless communication, wired communication, or the like.
The black-and-white mode and the color mode are both working modes of the image sensor. The image acquired by the image sensor in the black and white mode can be a binary image or a gray scale image. The image captured by the image sensor in the color mode may be a color image.
The mixed light may include visible light and infrared light, among others.
In one implementation, the image processing device may also obtain the RGB statistical information of the second image. For example, the second image may be pre-stored in a memory of the image processing apparatus, and the image processing apparatus may read the second image from the memory and then obtain its RGB statistical information. For another example, both the second image and its RGB statistical information may be pre-stored in the memory, and the image processing apparatus may read the RGB statistical information directly from the memory. For example, the image sensor may acquire the second image in the black-and-white mode in a scene where only infrared light exists, and send the second image to the image processing device, which stores it in the memory. For example, the acquisition process of the second image may be: before the image sensor is controlled to image the shot object, the IR-CUT dual filter is removed; in a scene where only infrared light exists, the image sensor is aimed at a preset baffle plate, the infrared fill-in lamp of the image sensor is turned on and its brightness is controlled to reach the maximum brightness value, and the image sensor is then controlled to image the shot object to obtain the second image. Illustratively, the preset baffle plate may be a gray uniform baffle plate.
The obtaining mode of the RGB statistical information of the second image is the same as the obtaining mode of the RGB statistical information of the first image, and the mode of the image processing device obtaining the RGB statistical information of the second image may refer to the specific description of the image processing device obtaining the RGB statistical information of the first image, which is not described in detail in this embodiment of the present application.
In one implementation, the image processing device may further obtain the RGB statistical information of the third image. For example, the third image may be pre-stored in a memory of the image processing apparatus, and the image processing apparatus may read the third image from the memory and then obtain its RGB statistical information. For another example, both the third image and its RGB statistical information may be pre-stored in the memory, and the image processing apparatus may read the RGB statistical information directly from the memory. For example, the image sensor may capture the third image in the black-and-white mode in a scene where only visible light exists, and send the third image to the image processing device, which stores it in the memory. For example, the acquisition process of the third image may be: before the image sensor is controlled to image the shot object, the IR-CUT dual filter is removed; in a scene where only visible light exists, the image sensor is aimed at a preset baffle plate, and the image sensor is controlled to image the shot object under a fluorescent lamp or under natural light to obtain the third image. Illustratively, the preset baffle plate may be a gray uniform baffle plate.
The obtaining mode of the RGB statistical information of the third image is the same as the obtaining mode of the RGB statistical information of the first image, and the mode of the image processing device obtaining the RGB statistical information of the third image may refer to the specific description of the image processing device obtaining the RGB statistical information of the first image, which is not described in detail in this embodiment of the present application.
In one implementation, the RGB statistical information of an image (e.g., the first image, the second image, or the third image) may be acquired before the image is color-corrected. Because the influence of the color-correction processing chain (white balance, color correction, gamma correction, and so on) on the RGB statistical information is difficult to evaluate, the brightness evaluation is performed on the RGB statistical information collected before color correction, rather than on the image displayed by an output device such as a display screen (which has already been color-corrected). This avoids the influence of color correction on the brightness evaluation, so that the collected RGB statistical information can directly reflect the environmental information, such as the brightness evaluation value of the real environment.
Step S102: the image processing apparatus converts the RGB statistical information of each image into color information of a three-dimensional space.
Specifically, the image processing device may obtain a first R mean value, a first G mean value, and a first B mean value of the first image, and add the first R mean value, the first G mean value, and the first B mean value of the first image to obtain a second RGB sum of the first image. The image processing device divides the first R mean value by the second RGB sum to obtain a third ratio between the first R mean value and the second RGB sum. The image processing apparatus may further divide the first G mean by the second RGB sum to obtain a fourth ratio between the first G mean and the second RGB sum. The image processing apparatus may further divide the first B mean by the second RGB sum to obtain a fifth ratio between the first B mean and the second RGB sum. And the third ratio, the fourth ratio and the fifth ratio form color information of the three-dimensional space of the first image.
The manner in which the image processing device obtains the first R mean value of the first image may be: and obtaining the R value of each pixel point of the first image, and carrying out average operation on the R values of all the pixel points to obtain a first R mean value. The manner in which the image processing apparatus obtains the first G mean value of the first image may be: and obtaining the G value of each pixel point of the first image, and carrying out average operation on the G values of all the pixel points to obtain a first G mean value. The manner in which the image processing apparatus obtains the first B-mean of the first image may be: and obtaining the B value of each pixel point of the first image, and carrying out average operation on the B values of all the pixel points to obtain a first B mean value.
In one implementation, after the image processing device acquires the RGB statistical information of the second image, the RGB statistical information of the second image may be converted into color information of a three-dimensional space. For example, the image processing apparatus may obtain a second R mean, a second G mean, and a second B mean of the second image, and add the second R mean, the second G mean, and the second B mean of the second image to obtain a fourth RGB sum of the second image. The image processing device divides the second R mean value by the fourth RGB sum to obtain a sixth ratio between the second R mean value and the fourth RGB sum. The image processing apparatus may further divide the second G mean by the fourth RGB sum to obtain a seventh ratio between the second G mean and the fourth RGB sum. The image processing apparatus may further divide the second B mean value by the fourth RGB sum to obtain an eighth ratio between the second B mean value and the fourth RGB sum. And the sixth ratio, the seventh ratio and the eighth ratio form color information of the three-dimensional space of the second image. The image processing device obtains the second R mean value in the same manner as the first R mean value, obtains the second G mean value in the same manner as the first G mean value, and obtains the second B mean value in the same manner as the first B mean value, which is not described in detail in this embodiment of the present application.
In one implementation, after the image processing device acquires the RGB statistical information of the third image, the RGB statistical information of the third image may be converted into color information of a three-dimensional space. For example, the image processing apparatus may obtain a third R mean, a third G mean, and a third B mean of the third image, and add the third R mean, the third G mean, and the third B mean of the third image to obtain a fifth RGB sum of the third image. The image processing device divides the third R mean value by the sum of the fifth RGB to obtain a ninth ratio between the third R mean value and the sum of the fifth RGB. The image processing apparatus may further divide the third G mean by the sum of the fifth RGB to obtain a tenth ratio between the third G mean and the sum of the fifth RGB. The image processing apparatus may further divide the third B mean value by the sum of the fifth RGB to obtain an eleventh ratio between the third B mean value and the sum of the fifth RGB. And the ninth ratio, the tenth ratio and the eleventh ratio form color information of the three-dimensional space of the third image. The image processing device obtains the third R mean value in the same manner as the first R mean value, obtains the third G mean value in the same manner as the first G mean value, and obtains the third B mean value in the same manner as the first B mean value, which is not described in detail in this embodiment of the present application.
In an implementation manner, before obtaining the R mean, G mean, and B mean of an image, the image processing device may filter out the pixels whose R value is greater than a first preset threshold, whose G value is greater than a second preset threshold, or whose B value is greater than a third preset threshold, and then average the R values, G values, and B values of the remaining pixels to obtain the R mean, G mean, and B mean, respectively. The image may be the first image, the second image, or the third image. In this embodiment, because the shooting parameters are directly proportional to the RGB values output by the image sensor, and the RGB response of a pixel has a limited dynamic range, the RGB response can saturate when the external light is very bright or the shooting parameters are set relatively high, and a saturated response no longer reflects the actual ambient brightness. Filtering out the regions of the image with relatively high RGB values therefore ensures that the collected RGB statistical information directly reflects the environmental information. The shooting parameters may include the aperture gain (e.g., aperture size Iris), the shutter time sht, and the gain multiple gain.
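Steps S101 and S102 (per-channel statistics, saturation filtering, and conversion to the R/S, G/S, B/S space) can be sketched as below. This is an illustrative reading of the embodiment, assuming pixels arrive as (R, G, B) tuples; the function and parameter names are not from the source.

```python
def chromaticity_from_pixels(pixels, r_max, g_max, b_max):
    # Drop pixels whose response may be saturated in any channel, since a
    # saturated response no longer reflects the actual ambient brightness.
    kept = [(r, g, b) for (r, g, b) in pixels
            if r <= r_max and g <= g_max and b <= b_max]
    n = len(kept)
    # Per-channel means of the remaining pixels.
    r_mean = sum(p[0] for p in kept) / n
    g_mean = sum(p[1] for p in kept) / n
    b_mean = sum(p[2] for p in kept) / n
    # S = R + G + B; the (R/S, G/S, B/S) triple is the image's position
    # in the three-dimensional color space.
    s = r_mean + g_mean + b_mean
    return (r_mean / s, g_mean / s, b_mean / s), s
```

The same routine applies to the first, second, and third images; only the capture conditions differ.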
Step S103: and the image processing equipment obtains the brightness evaluation value under the mixed light scene according to the color information of the three-dimensional space and the shooting parameters of each image.
Taking the interface schematic diagram of the brightness evaluation value shown in Fig. 2 as an example, for an image sensor whose operating mode is the black-and-white mode, the position in the three-dimensional space of an image acquired in a scene where only infrared light exists is A(IR), the position of an image acquired in a scene where only visible light exists is B(VI), and the position of an image acquired in a mixed light scene is C. The RGB sum of the image in a mixed light scene is contributed by both infrared light and visible light. The closer point C is to point A(IR) in the three-dimensional space, the greater the infrared light component in the RGB sum of the image in the mixed light scene; the farther point C is from point A(IR), the smaller the infrared light component. That is, the ratio between the distance from point C to point A(IR) and the distance from point C to point B(VI) may reflect the ratio of the visible light intensity to the mixed light intensity. The ratio of the visible light intensity to the mixed light intensity can be used to calculate the visible light brightness and the infrared light brightness in the mixed light scene. The three-dimensional space comprises an R/S axis, a G/S axis, and a B/S axis, where S = R + G + B.
In one implementation, the image processing apparatus may obtain a first RGB sum of the first image according to color information of a three-dimensional space of the first image, and obtain the visible light brightness evaluation value in the mixed light scene according to the first RGB sum of the first image and the shooting parameter of the first image. Wherein the first RGB sum is the RGB sum of the first image for visible light.
The manner in which the image processing device obtains the first RGB sum of the first image according to the color information of the three-dimensional space of the first image may be: obtaining the second RGB sum of the first image, obtaining the first spatial distance between the mixed light and the infrared light in the three-dimensional space and the second spatial distance between the infrared light and the visible light in the three-dimensional space, dividing the first spatial distance by the second spatial distance to obtain a first ratio, and multiplying the first ratio by the second RGB sum to obtain the first RGB sum. The second RGB sum is the RGB sum of the first image for the mixed light.
For example, the first RGB sum may be calculated by the following formula:
BW_S_V = (BW_C_I / BW_V_I) × BW_S_C
where BW_S_V denotes the first RGB sum (the RGB sum of the first image for visible light), BW_C_I denotes the first spatial distance between the mixed light and the infrared light in the three-dimensional space, BW_V_I denotes the second spatial distance between the infrared light and the visible light in the three-dimensional space, and BW_S_C denotes the second RGB sum (the RGB sum of the first image for the mixed light).
The manner of acquiring the first spatial distance between the mixed light and the infrared light in the three-dimensional space by the image processing device may be as follows: and performing distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio and the eighth ratio by using an Euclidean distance algorithm to obtain a first spatial distance.
For example, the first spatial distance may be calculated by the following formula:
BW_C_I = √((BW_C_RS − BW_I_RS)² + (BW_C_GS − BW_I_GS)² + (BW_C_BS − BW_I_BS)²)
where BW_C_I represents the first spatial distance between the mixed light and the infrared light in the three-dimensional space, BW_C_RS represents the third ratio between the first R mean and the second RGB sum, BW_C_GS represents the fourth ratio between the first G mean and the second RGB sum, BW_C_BS represents the fifth ratio between the first B mean and the second RGB sum, BW_I_RS represents the sixth ratio between the second R mean and the fourth RGB sum, BW_I_GS represents the seventh ratio between the second G mean and the fourth RGB sum, and BW_I_BS represents the eighth ratio between the second B mean and the fourth RGB sum.
The manner of acquiring the second spatial distance between the infrared light and the visible light in the three-dimensional space by the image processing device may be as follows: and performing distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio and the eleventh ratio by using a Euclidean distance algorithm to obtain a second spatial distance.
For example, the second spatial distance may be calculated by the following formula:
BW_V_I = √((BW_V_RS − BW_I_RS)² + (BW_V_GS − BW_I_GS)² + (BW_V_BS − BW_I_BS)²)
where BW_V_I represents the second spatial distance between the infrared light and the visible light in the three-dimensional space, BW_V_RS represents the ninth ratio between the third R mean and the fifth RGB sum, BW_V_GS represents the tenth ratio between the third G mean and the fifth RGB sum, BW_V_BS represents the eleventh ratio between the third B mean and the fifth RGB sum, BW_I_RS represents the sixth ratio between the second R mean and the fourth RGB sum, BW_I_GS represents the seventh ratio between the second G mean and the fourth RGB sum, and BW_I_BS represents the eighth ratio between the second B mean and the fourth RGB sum.
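Both spatial distances are ordinary Euclidean distances between points in the (R/S, G/S, B/S) space. A generic helper, assuming the chromaticity points are given as 3-tuples (the name is illustrative):

```python
import math

def chroma_distance(p, q):
    # Euclidean distance between two (R/S, G/S, B/S) points.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```

The first spatial distance is then `chroma_distance(C, A_IR)` and the second is `chroma_distance(B_VI, A_IR)`, with C, A_IR, and B_VI the chromaticity points of the mixed-light, infrared-only, and visible-only images.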
The method for obtaining the evaluation value of the brightness of the visible light in the mixed light scene by the image processing device according to the sum of the first RGB of the first image and the shooting parameter of the first image may be as follows: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, and dividing the first RGB sum by the reference factor to obtain a visible light brightness evaluation value under a mixed light scene.
For example, the evaluation value of the brightness of visible light in a mixed light scene can be calculated by the following formula:
Lum(VI) = BW_S_V / (Iris × sht × gain)
where Lum(VI) represents the visible light brightness evaluation value in the mixed light scene, BW_S_V represents the first RGB sum, Iris represents the aperture gain, sht represents the shutter time, and gain represents the gain multiple.
In one implementation, the image processing apparatus may obtain a third RGB sum of the first image according to color information of a three-dimensional space of the first image, and obtain the infrared light luminance evaluation value in the mixed light scene according to the third RGB sum of the first image and the shooting parameter of the first image. Wherein the third RGB sum is the sum of RGB of the first image for infrared light.
The method for obtaining the infrared brightness evaluation value under the mixed light scene by the image processing device according to the third RGB sum of the first image and the shooting parameter of the first image may be as follows: multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor, and dividing the third RGB sum by the reference factor to obtain the infrared light brightness evaluation value under the mixed light scene.
For example, the evaluation value of the infrared light intensity in the mixed light scene may be calculated by the following formula:
Lum(IR) = BW_S_I / (Iris × sht × gain)
where Lum(IR) represents the infrared light brightness evaluation value in the mixed light scene, BW_S_I represents the third RGB sum, Iris represents the aperture gain, sht represents the shutter time, and gain represents the gain multiple.
The manner in which the image processing device obtains the third RGB sum of the first image according to the color information of the three-dimensional space of the first image may be: obtaining the second RGB sum of the first image, obtaining the first spatial distance between the mixed light and the infrared light in the three-dimensional space and the second spatial distance between the infrared light and the visible light in the three-dimensional space, dividing the difference between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio, and multiplying the second ratio by the second RGB sum to obtain the third RGB sum.
Illustratively, the third RGB sum may be calculated by the following formula:
BW_S_I = ((BW_V_I − BW_C_I) / BW_V_I) × BW_S_C
where BW_S_I represents the third RGB sum (the RGB sum of the first image for infrared light), BW_C_I represents the first spatial distance between the mixed light and the infrared light in the three-dimensional space, BW_S_C represents the second RGB sum, and BW_V_I represents the second spatial distance between the infrared light and the visible light in the three-dimensional space.
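Putting the formulas for BW_S_V, BW_S_I, Lum(VI), and Lum(IR) together, the decomposition of the mixed-light RGB sum can be sketched as follows. The point arguments stand for the positions C, A(IR), and B(VI) in the three-dimensional space; all function and variable names are illustrative assumptions, not identifiers from the embodiment.

```python
import math

def mixed_light_evaluations(c_point, ir_point, vi_point, rgb_sum_c, iris, sht, gain):
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    bw_c_i = dist(c_point, ir_point)   # first spatial distance (mixed vs. infrared)
    bw_v_i = dist(vi_point, ir_point)  # second spatial distance (infrared vs. visible)

    # Split the mixed-light RGB sum into visible and infrared parts.
    bw_s_v = (bw_c_i / bw_v_i) * rgb_sum_c             # first RGB sum (visible)
    bw_s_i = ((bw_v_i - bw_c_i) / bw_v_i) * rgb_sum_c  # third RGB sum (infrared)

    # Normalize by the reference factor to obtain the brightness evaluations.
    reference_factor = iris * sht * gain
    return bw_s_v / reference_factor, bw_s_i / reference_factor
```

Note that the two parts always add back up to the mixed-light RGB sum before normalization, which mirrors the definition of the mixed light brightness evaluation value as the sum of the visible and infrared evaluations.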
In one implementation, after the image processing apparatus acquires the luminance evaluation value, the luminance evaluation value may be further transmitted to a receiving apparatus, and the receiving apparatus may operate based on the luminance evaluation value.
Step S104: the image processing device switches the operation mode of the image sensor from the black-and-white mode to the color mode according to the luminance evaluation value.
Specifically, the image processing apparatus may acquire a plurality of frames of images acquired by the image sensor, acquire the luminance evaluation value of each frame of image in the above manner, and set the operating mode of the image sensor according to the luminance evaluation value of each frame of image.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the number of images whose luminance evaluation values are greater than a second preset luminance threshold is greater than a preset number threshold, switch the image sensor from the black-and-white mode to the color mode, where the images whose luminance evaluation values are greater than the second preset luminance threshold are consecutive within the multiple frames. The luminance evaluation value may be a visible light luminance evaluation value.
In this embodiment, when the evaluation value of the visible light intensity of the continuous multi-frame images is greater than the second preset brightness threshold, the image processing device switches the operation mode of the image sensor from the black-and-white mode to the color mode, so as to improve the switching efficiency of the operation mode.
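The consecutive-frame check above can be sketched as follows (the function name and thresholds are illustrative assumptions, not from the patent):

```python
def should_switch_to_color(lum_values, lum_threshold, count_threshold):
    # Track the current run of consecutive frames whose visible light
    # brightness evaluation value exceeds the threshold.
    run = 0
    for lum in lum_values:
        run = run + 1 if lum > lum_threshold else 0
        if run > count_threshold:
            return True  # enough consecutive bright frames: switch to color
    return False

print(should_switch_to_color([5, 12, 13, 14, 15], lum_threshold=10, count_threshold=3))  # True
```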
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the number of images whose luminance evaluation values are greater than the second preset luminance threshold is greater than the preset number threshold, and a twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than a preset proportion threshold, switch the image sensor from the black-and-white mode to the color mode, where the images whose luminance evaluation values are greater than the second preset luminance threshold are consecutive within the multiple frames. The mixed light luminance evaluation value is obtained by adding the visible light luminance evaluation value and the infrared light luminance evaluation value.
In this embodiment, the working mode is switched only when the number of images whose luminance evaluation values are greater than the second preset luminance threshold is greater than the preset number threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, so that erroneous mode switching in an overexposure scene of the infrared light supplement lamp can be avoided and the accuracy of mode switching is improved. The overexposure scene of the infrared light supplement lamp may be, for example, a scene in which multiple devices turn on infrared light supplement lamps to monitor the same area, a device turns on its infrared light supplement lamp to monitor at close range, the infrared light supplement lamp is too bright, or the infrared light supplement lamp is concentrated in the middle area of the image while the device zooms.
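A sketch of the guarded switch above; the consecutiveness bookkeeping is folded into a simple count for brevity, and all names and thresholds are illustrative assumptions:

```python
def should_switch_to_color_guarded(frames, lum_threshold, count_threshold, ratio_threshold):
    # frames: list of (visible, infrared) brightness evaluation value pairs
    bright = sum(1 for vis, _ in frames if vis > lum_threshold)
    # Twelfth ratio: visible / (visible + infrared), taken here on the latest frame
    vis, ir = frames[-1]
    twelfth_ratio = vis / (vis + ir)
    # Guard against infrared fill-light overexposure: require the visible
    # share of the mixed light to dominate before switching to color mode
    return bright > count_threshold and twelfth_ratio > ratio_threshold
```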
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: determine the images whose luminance evaluation values are greater than the second preset luminance threshold, where the determined images are consecutive within the multiple frames; obtain the acquisition duration between the earliest and the latest of the determined images; and when the acquisition duration is greater than a preset time threshold, switch the image sensor from the black-and-white mode to the color mode. The luminance evaluation value may be a visible light luminance evaluation value.
In this embodiment, when the evaluation values of the visible light brightness of the acquired images within the preset time period are all greater than the second preset brightness threshold, the image processing device switches the working mode of the image sensor from the black-and-white mode to the color mode, so that the switching efficiency of the working mode can be improved.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the acquisition duration is greater than the preset time threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, switch the image sensor from the black-and-white mode to the color mode.
In this embodiment, the working mode is switched only when the acquisition duration is greater than the preset time threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, so that erroneous mode switching in an overexposure scene of the infrared light supplement lamp can be avoided and the accuracy of mode switching is improved.
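Similarly, the duration-based variant can be sketched as follows (timestamps in seconds; names and thresholds are illustrative assumptions):

```python
def should_switch_by_duration(timestamps, lum_visible, lum_infrared,
                              time_threshold, ratio_threshold):
    # timestamps: capture times of the consecutive frames whose visible
    # brightness evaluation values already exceeded the preset threshold
    duration = timestamps[-1] - timestamps[0]
    # Twelfth ratio: visible / (visible + infrared)
    twelfth_ratio = lum_visible / (lum_visible + lum_infrared)
    return duration > time_threshold and twelfth_ratio > ratio_threshold

print(should_switch_by_duration([0.0, 1.0, 2.5], 80.0, 20.0,
                                time_threshold=2.0, ratio_threshold=0.5))  # True
```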
In one implementation, the image processing apparatus may also perform other operations based on the light brightness evaluation value. For example, the fill light is controlled. For example, if the luminance evaluation value includes a visible light luminance evaluation value, the image processing apparatus may control the visible light fill-in lamp according to the visible light luminance evaluation value.
The image processing device may control the visible light fill-in lamp according to the visible light brightness evaluation value in the following two ways:
First, turn the visible light supplementary lighting lamp on or off according to the visible light brightness evaluation value.
Second, adjust the brightness of the visible light supplementary lighting lamp according to the visible light brightness evaluation value.
In one implementation, if the light brightness evaluation value includes an infrared light brightness evaluation value, the image processing device may control the infrared light supplement lamp according to the infrared light brightness evaluation value.
The image processing device may control the infrared light supplement lamp according to the infrared light brightness evaluation value in the following two ways:
First, turn the infrared light supplement lamp on or off according to the infrared light brightness evaluation value.
Second, adjust the brightness of the infrared light supplement lamp according to the infrared light brightness evaluation value.
In the embodiment of the application, the image processing device converts the RGB statistical information of the first image, the second image, and the third image into color information of the three-dimensional space, and obtains the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the images. Compared with measuring the brightness evaluation value with a dedicated hardware device, brightness evaluation can be performed directly on the first image acquired by the image sensor, which improves the accuracy of the brightness evaluation value. Further, the image processing device switches the working mode of the image sensor from the black-and-white mode to the color mode according to the brightness evaluation value, so that the working mode can be switched effectively and the imaging effect of the image sensor in a low-illumination scene is improved.
Fig. 3 is a schematic flowchart of a method for switching an operating mode, which specifically describes a specific scheme for switching an operating mode of an image sensor from a color mode to a black-and-white mode, and an execution subject of the method may be an image processing apparatus or a chip applied to the image processing apparatus. The following description will be given taking as an example that the execution subject is an image processing apparatus. The method includes, but is not limited to, the steps of:
step S301: the image processing apparatus acquires RGB statistical information of the first image.
Specifically, the image sensor may acquire a first image in the color mode and send the first image to the image processing device, and the image processing device obtains the RGB statistical information of the first image. For example, the working mode of the image sensor is controlled to be the color mode, and the image sensor is controlled to image the object to be photographed to obtain the first image. For example, before the image sensor is controlled to image the object to be photographed, the IR-CUT dual filter may be removed, and the image sensor is then controlled to image the object to be photographed to obtain the first image. In the embodiment of the application, removing the IR-CUT dual filter avoids its influence on the brightness evaluation value, so that the RGB statistical information obtained through statistics can directly reflect the environment information, such as the brightness evaluation value of the real environment.
When the working mode of the image sensor is the color mode, the infrared filter is added in the image sensor, so that infrared light components in natural light are filtered, and the color of an object in the color mode is ensured. Based on this, the luminance evaluation value obtained by the image processing apparatus based on the first image is a visible light luminance evaluation value.
In one implementation, the RGB statistics of the first image may be acquired prior to color correcting the first image. According to the embodiment of the application, the influence of color correction on the brightness evaluation can be avoided, and the RGB statistical information obtained through statistics can directly reflect environment information, such as the brightness evaluation value of a real environment.
Step S302: the image processing apparatus converts the RGB statistical information into color information of a three-dimensional space.
The manner in which the image processing device converts the RGB statistical information into the color information of the three-dimensional space in the embodiment of the present application is the same as the manner in which the image processing device converts the RGB statistical information of the first image into the color information of the three-dimensional space in the embodiment of the present application, which can refer to the specific description in step S102, and is not described in detail in this embodiment of the present application.
Step S303: the image processing device obtains a visible light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image.
In one implementation, the image processing apparatus may multiply the aperture gain, the shutter time, and the gain multiple to obtain a reference factor, and divide the sum of RGB of the first image by the reference factor to obtain the visible light intensity evaluation value.
For example, the evaluation value of the visible light intensity may be calculated by the following formula:
Lum(day) = Col_S_V / (Iris × sht × gain)
where Lum(day) represents the visible light brightness evaluation value, Col_S_V represents the RGB sum of the first image, Iris represents the aperture gain, sht represents the shutter time, and gain represents the gain multiple.
The obtaining manner of the sum of RGB of the first image is the same as that of the sum of RGB of the second image in the above embodiment, and reference may be made to the description of step S102 in the above embodiment, which is not repeated in this embodiment.
In one implementation, after the image processing apparatus acquires the luminance evaluation value, the luminance evaluation value may be further transmitted to a receiving apparatus, and the receiving apparatus may operate based on the luminance evaluation value.
Step S304: the image processing device switches the operation mode of the image sensor from the color mode to the black-and-white mode according to the visible light brightness evaluation value.
Specifically, the image processing apparatus may acquire a plurality of frames of images acquired by the image sensor, acquire the luminance evaluation value of each frame of image in the above manner, and set the operating mode of the image sensor according to the luminance evaluation value of each frame of image.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the number of images whose luminance evaluation values are smaller than a first preset luminance threshold is greater than a preset number threshold, switch the image sensor from the color mode to the black-and-white mode, where the images whose luminance evaluation values are smaller than the first preset luminance threshold are consecutive within the multiple frames. The luminance evaluation value may be a visible light luminance evaluation value.
In this embodiment, when the evaluation value of the visible light intensity of the continuous multi-frame images is smaller than the first preset brightness threshold, the image processing device switches the operation mode of the image sensor from the color mode to the black-and-white mode, and the switching efficiency of the operation mode is improved.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the number of images whose luminance evaluation values are greater than the second preset luminance threshold is greater than the preset number threshold, and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, switch the image sensor from the black-and-white mode to the color mode, where the images whose luminance evaluation values are greater than the second preset luminance threshold are consecutive within the multiple frames.
In this embodiment, the working mode is switched only when the number of images whose luminance evaluation values are greater than the second preset luminance threshold is greater than the preset number threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, so that erroneous mode switching in an overexposure scene of the infrared light supplement lamp can be avoided and the accuracy of mode switching is improved.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: determine the images whose luminance evaluation values are smaller than the first preset luminance threshold, where the determined images are consecutive within the multiple frames; obtain the acquisition duration between the earliest and the latest of the determined images; and when the acquisition duration is greater than a preset time threshold, switch the image sensor from the color mode to the black-and-white mode. The luminance evaluation value may be a visible light luminance evaluation value.
In this embodiment, when the evaluation values of the visible light brightness of the acquired images within the preset time period are all smaller than the first preset brightness threshold, the image processing device switches the working mode of the image sensor from the color mode to the black-and-white mode, so that the switching efficiency of the working mode can be improved.
In one implementation, the image processing apparatus may set the operating mode of the image sensor according to the luminance evaluation value of each frame image as follows: when the acquisition duration is greater than the preset time threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, switch the image sensor from the black-and-white mode to the color mode.
In this embodiment, the working mode is switched only when the acquisition duration is greater than the preset time threshold and the twelfth ratio of the visible light luminance evaluation value to the mixed light luminance evaluation value is greater than the preset proportion threshold, so that erroneous mode switching in an overexposure scene of the infrared light supplement lamp can be avoided and the accuracy of mode switching is improved.
In one implementation, the image processing apparatus may also perform other operations based on the light brightness evaluation value. For example, the fill light is controlled. For example, if the luminance evaluation value includes a visible light luminance evaluation value, the image processing apparatus may control the visible light fill-in lamp according to the visible light luminance evaluation value.
The image processing device may control the visible light fill-in lamp according to the visible light brightness evaluation value in the following two ways:
First, turn the visible light supplementary lighting lamp on or off according to the visible light brightness evaluation value.
Second, adjust the brightness of the visible light supplementary lighting lamp according to the visible light brightness evaluation value.
In the embodiment of the application, the image processing device converts the RGB statistical information of the first image into color information of the three-dimensional space, and obtains the visible light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image. Compared with measuring the brightness evaluation value with a dedicated hardware device, brightness evaluation can be performed directly on the first image acquired by the image sensor, which improves the accuracy of the brightness evaluation value. Further, the image processing device switches the working mode of the image sensor from the color mode to the black-and-white mode according to the brightness evaluation value, so that the working mode can be switched effectively and the imaging effect of the image sensor in a low-illumination scene is improved.
The method of the embodiments of the present application is explained in detail above, and the related apparatus of the embodiments of the present application is provided below.
Fig. 4 is a schematic structural diagram of an apparatus for obtaining a luminance evaluation value according to an embodiment of the present invention, where the apparatus is configured to perform the steps performed by the image processing device in the method embodiments corresponding to fig. 1 to 3, and the apparatus for obtaining a luminance evaluation value may include:
an obtaining unit 401, configured to obtain RGB statistical information of a first image, where the first image is obtained through an image sensor, and the RGB statistical information includes an R value, a G value, and a B value of each pixel included in the first image;
a conversion unit 402, configured to convert the RGB statistical information into color information of a three-dimensional space;
a processing unit 403, configured to obtain a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image.
In one implementation, the first image is acquired by the image sensor in a black and white mode and in a mixed light scene, the mixed light including visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processing unit 403 obtains a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
acquiring a first RGB sum of the first image, wherein the first RGB sum is the RGB sum of the first image aiming at visible light;
and dividing the sum of the first RGB by the reference factor to obtain a visible light brightness evaluation value under a mixed light scene.
In one implementation, the obtaining unit 401 is further configured to obtain a second RGB sum of the first image before the processing unit 403 divides the first RGB sum by the reference factor to obtain the evaluation value of the brightness of the visible light in the mixed light scene, where the second RGB sum is the sum of the RGB of the first image for the mixed light;
the acquiring unit 401 is further configured to acquire a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
the processing unit 403 is further configured to divide the first spatial distance by the second spatial distance to obtain a first ratio;
the processing unit 403 is further configured to multiply the first ratio by the sum of the second RGB to obtain the sum of the first RGB.
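The computation performed by the two units above can be sketched as follows (notation follows the patent; the values used in any call are illustrative assumptions):

```python
def first_rgb_sum(bw_s_c, dis_bw_c_i, dis_bw_v_i):
    # First ratio: first spatial distance / second spatial distance
    first_ratio = dis_bw_c_i / dis_bw_v_i
    # First RGB sum: the visible component of the mixed-light RGB sum
    return first_ratio * bw_s_c

def lum_visible_mixed(bw_s_v, iris, sht, gain):
    # Visible light brightness evaluation value in the mixed light scene:
    # first RGB sum divided by the reference factor
    return bw_s_v / (iris * sht * gain)
```

Together with the third RGB sum, the first and third components add back up to the second (mixed-light) RGB sum, since the first and second ratios sum to one.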
In one implementation, the first image is acquired by the image sensor in a black and white mode and in a mixed light scene, the mixed light including visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processing unit 403 obtains a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
acquiring a third RGB sum of the first image, wherein the third RGB sum is the RGB sum of the first image aiming at the infrared light;
and dividing the sum of the third RGB by the reference factor to obtain an infrared light brightness evaluation value under a mixed light scene.
In one implementation, the obtaining unit 401 is further configured to obtain a second RGB sum of the first image before the processing unit 403 divides the third RGB sum by the reference factor to obtain the infrared light brightness evaluation value in the mixed light scene, where the second RGB sum is a sum of RGB of the first image for the mixed light;
the acquiring unit 401 is further configured to acquire a first spatial distance between the mixed light and the infrared light in the three-dimensional space, and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
the processing unit 403 is further configured to divide a difference between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio;
the processing unit 403 is further configured to multiply the second ratio by the sum of the second RGB to obtain the sum of the third RGB.
In one implementation, the acquiring unit 401 acquires a first spatial distance between the mixed light and the infrared light in a three-dimensional space, and includes:
acquiring a second image acquired by the image sensor in a black-and-white mode in the presence of only infrared light;
acquiring a third ratio between the first R mean value and the sum of the second RGB of the first image, a fourth ratio between the first G mean value and the sum of the second RGB, and a fifth ratio between the first B mean value and the sum of the second RGB;
acquiring a sixth ratio between a second R mean value of the second image and the sum of fourth RGB of the second image, a seventh ratio between a second G mean value and the sum of the fourth RGB, and an eighth ratio between a second B mean value and the sum of the fourth RGB, wherein the sum of the fourth RGB is the sum of RGB of the second image for infrared light;
and performing distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio and the eighth ratio by using a Euclidean distance algorithm to obtain the first spatial distance.
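The distance operation above is a plain Euclidean distance between two (R, G, B) ratio triples; a minimal sketch (the sample ratios are illustrative assumptions):

```python
import math

def spatial_distance(ratios_a, ratios_b):
    # Euclidean distance between two ratio triples in the three-dimensional space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ratios_a, ratios_b)))

# (third, fourth, fifth) ratios of the first image vs
# (sixth, seventh, eighth) ratios of the second image
print(spatial_distance((0.35, 0.34, 0.31), (0.34, 0.33, 0.33)))
```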
In one implementation, the acquiring unit 401 acquires the second spatial distance between the infrared light and the visible light in the three-dimensional space, and includes:
acquiring a second image acquired by the image sensor in a black-and-white mode in a scene only with infrared light, and acquiring a third image acquired by the image sensor in the black-and-white mode in a scene only with visible light;
acquiring a sixth ratio between a second R mean value of the second image and the sum of fourth RGB of the second image, a seventh ratio between a second G mean value and the sum of the fourth RGB, and an eighth ratio between a second B mean value and the sum of the fourth RGB, wherein the sum of the fourth RGB is the sum of RGB of the second image for infrared light;
acquiring a ninth ratio between a third R mean value of the third image and a fifth RGB sum of the third image, a tenth ratio between a third G mean value and the fifth RGB sum, and an eleventh ratio between a third B mean value and the fifth RGB sum, wherein the fifth RGB sum is the RGB sum of the third image for visible light;
and performing distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio and the eleventh ratio by using a Euclidean distance algorithm to obtain the second spatial distance.
In one implementation, the processing unit 403 is further configured to add the first R mean value, the first G mean value, and the first B mean value of the first image to obtain the second RGB sum.
In one implementation, before the processing unit 403 adds the first R mean, the first G mean, and the first B mean of the first image to obtain the second RGB sum, the method further includes:
performing region division on the first image to obtain a plurality of pixel blocks;
carrying out average operation on R values, B values or G values of all pixels contained in each pixel block to obtain an R mean value, a G mean value or a B mean value of each pixel block;
obtaining pixel blocks of which the R mean value, the G mean value and the B mean value are all smaller than a preset threshold value;
carrying out average operation on the obtained R mean values of all the pixel blocks to obtain a first R mean value;
carrying out average operation on the obtained G mean values of all the pixel blocks to obtain a first G mean value;
and carrying out average operation on the obtained B mean values of all the pixel blocks to obtain the first B mean value.
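The block-averaging procedure above can be sketched with NumPy (the block size and the preset threshold are illustrative assumptions):

```python
import numpy as np

def first_rgb_means(image, block=16, threshold=200.0):
    # image: H x W x 3 array of per-pixel R, G, B values
    h, w, _ = image.shape
    kept = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            # Per-block mean of each channel
            m = image[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
            # Keep only blocks whose R, G and B means are all below the threshold
            if (m < threshold).all():
                kept.append(m)
    # Average the kept blocks per channel: the first R, G and B means
    return np.stack(kept).mean(axis=0)
```

The second RGB sum is then the sum of the three returned means.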
In one implementation, the first image is acquired by the image sensor in a color mode; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processing unit 403 obtains a luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image, and includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
and dividing the sum of RGB of the first image by the reference factor to obtain a visible light brightness evaluation value.
In one implementation manner, the obtaining unit 401 is further configured to obtain multiple frames of images collected by the image sensor;
an obtaining unit 401, configured to obtain a luminance evaluation value of each frame of the image;
the processing unit 403 is further configured to set an operating mode of the image sensor according to the luminance evaluation value of the image in each frame.
In one implementation, the light brightness evaluation value includes a visible light brightness evaluation value;
the processing unit 403 is further configured to control the visible light fill-in lamp according to the visible light brightness evaluation value after obtaining the brightness evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image.
In one implementation, the light brightness evaluation value includes an infrared light brightness evaluation value;
the processing unit 403 is further configured to control the infrared light supplement lamp according to the infrared light luminance evaluation value after obtaining the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image.
It should be noted that, details that are not mentioned in the embodiment corresponding to fig. 4 and specific implementation manners of the steps executed by each unit may refer to the embodiments shown in fig. 1 to fig. 3 and the foregoing details, and are not described again here.
In one implementation, the relevant functions implemented by the various units in FIG. 4 may be implemented in connection with a processor and a communications interface. Fig. 5 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present invention, where the image processing apparatus includes a processor 501, a memory 502, a communication interface 503, and an image sensor 504, and the processor 501, the memory 502, the communication interface 503, and the image sensor 504 are connected through one or more communication buses.
The processor 501 is configured to support the apparatus for obtaining a light brightness evaluation value in executing the methods described in fig. 1 to 3. The processor 501 may be a central processing unit (CPU), a network processor (NP), a hardware chip, or any combination thereof.
The memory 502 is used to store program code and the like. The memory 502 may include a volatile memory, such as a random access memory (RAM); the memory 502 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 502 may also include a combination of the above kinds of memory.
The communication interface 503 is used to receive and transmit data, for example, the communication interface 503 is used to acquire a first image.
The image sensor 504 is used for acquiring an image, and may be specifically a camera or a video camera.
The processor 501 may call the program code stored in the memory 502 to perform the following operations:
acquiring RGB statistical information of a first image, wherein the first image is acquired through an image sensor 504, and the RGB statistical information comprises an R value, a G value and a B value of each pixel contained in the first image;
converting the RGB statistical information into color information of a three-dimensional space;
and obtaining a brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image.
In one implementation, the first image is acquired by the image sensor 504 in a black and white mode and in a mixed light scene, the mixed light including visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processor 501 may perform the following operations when obtaining the luminance evaluation value from the color information of the three-dimensional space and the shooting parameters of the first image:
multiplying the aperture gain, the shutter time, and the gain multiple to obtain a reference factor;
acquiring a first RGB sum of the first image, where the first RGB sum is the RGB sum of the first image for visible light; and
dividing the first RGB sum by the reference factor to obtain a visible light brightness evaluation value in the mixed light scene.
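The two operations above (forming the reference factor from the shooting parameters and normalizing the visible-light RGB sum by it) can be sketched as follows; the function and parameter names are illustrative assumptions, not taken from the embodiment itself:

```python
def visible_light_evaluation(first_rgb_sum, aperture_gain, shutter_time, gain_multiple):
    # Reference factor: the product of the three shooting parameters.
    reference_factor = aperture_gain * shutter_time * gain_multiple
    # Visible light brightness evaluation value in the mixed light scene.
    return first_rgb_sum / reference_factor
```

Dividing by the reference factor removes the influence of exposure settings, so evaluation values of frames captured with different apertures, shutter times, and gains remain comparable.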
In one implementation, before dividing the first RGB sum by the reference factor to obtain the visible light brightness evaluation value in the mixed light scene, the processor 501 may further perform the following operations:
acquiring a second RGB sum of the first image, where the second RGB sum is the RGB sum of the first image for the mixed light;
acquiring a first spatial distance between the mixed light and the infrared light in the three-dimensional space and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
dividing the first spatial distance by the second spatial distance to obtain a first ratio; and
multiplying the first ratio by the second RGB sum to obtain the first RGB sum.
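The split of the mixed-light RGB sum into its visible and infrared components, using the two spatial distances, can be sketched as follows (the names are illustrative; the second ratio for the infrared component is described in the next implementation):

```python
def split_rgb_sum(second_rgb_sum, first_distance, second_distance):
    # First ratio: fraction of the mixed light attributed to visible light.
    first_ratio = first_distance / second_distance
    first_rgb_sum = first_ratio * second_rgb_sum       # visible-light component
    # Second ratio: the remaining fraction, attributed to infrared light.
    second_ratio = (second_distance - first_distance) / second_distance
    third_rgb_sum = second_ratio * second_rgb_sum      # infrared-light component
    return first_rgb_sum, third_rgb_sum
```

Because the two ratios sum to 1, the visible and infrared components always add back up to the mixed-light RGB sum.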
In one implementation, the first image is acquired by the image sensor 504 in a black and white mode and in a mixed light scene, the mixed light including visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processor 501 may perform the following operations when obtaining the luminance evaluation value from the color information of the three-dimensional space and the shooting parameters of the first image:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
acquiring a third RGB sum of the first image, where the third RGB sum is the RGB sum of the first image for the infrared light; and
dividing the third RGB sum by the reference factor to obtain an infrared light brightness evaluation value in the mixed light scene.
In one implementation, before dividing the third RGB sum by the reference factor to obtain the infrared light brightness evaluation value in the mixed light scene, the processor 501 may further perform the following operations:
acquiring a second RGB sum of the first image, where the second RGB sum is the RGB sum of the first image for the mixed light;
acquiring a first spatial distance between the mixed light and the infrared light in the three-dimensional space and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
dividing the difference between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio; and
multiplying the second ratio by the second RGB sum to obtain the third RGB sum.
In one implementation, the processor 501 may perform the following operations when acquiring the first spatial distance between the mixed light and the infrared light in the three-dimensional space:
acquiring a second image captured by the image sensor 504 in the black-and-white mode in a scene with only infrared light;
acquiring a third ratio between a first R mean value of the first image and the second RGB sum, a fourth ratio between a first G mean value and the second RGB sum, and a fifth ratio between a first B mean value and the second RGB sum;
acquiring a sixth ratio between a second R mean value of the second image and a fourth RGB sum of the second image, a seventh ratio between a second G mean value and the fourth RGB sum, and an eighth ratio between a second B mean value and the fourth RGB sum, where the fourth RGB sum is the RGB sum of the second image for infrared light; and
performing a distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio, and the eighth ratio by using a Euclidean distance algorithm to obtain the first spatial distance.
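In other words, each image is mapped to a point in a three-dimensional space whose coordinates are the R, G, and B mean values normalized by the image's RGB sum, and the spatial distance is the Euclidean distance between two such points. A minimal sketch (function names are assumptions, not from the embodiment):

```python
import math

def rgb_point(r_mean, g_mean, b_mean):
    # Map an image to a point in the three-dimensional color space:
    # each coordinate is a channel mean divided by the image's RGB sum.
    rgb_sum = r_mean + g_mean + b_mean
    return (r_mean / rgb_sum, g_mean / rgb_sum, b_mean / rgb_sum)

def spatial_distance(point_a, point_b):
    # Euclidean distance between two points in the color space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(point_a, point_b)))
```

The same two helpers apply to the second spatial distance below: the distance between the infrared-only point and the visible-only point.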
In one implementation, the processor 501 may perform the following operations when acquiring the second spatial distance between the infrared light and the visible light in the three-dimensional space:
acquiring a second image captured by the image sensor in the black-and-white mode in a scene with only infrared light, and acquiring a third image captured by the image sensor in the black-and-white mode in a scene with only visible light;
acquiring a sixth ratio between a second R mean value of the second image and a fourth RGB sum of the second image, a seventh ratio between a second G mean value and the fourth RGB sum, and an eighth ratio between a second B mean value and the fourth RGB sum, where the fourth RGB sum is the RGB sum of the second image for infrared light;
acquiring a ninth ratio between a third R mean value of the third image and a fifth RGB sum of the third image, a tenth ratio between a third G mean value and the fifth RGB sum, and an eleventh ratio between a third B mean value and the fifth RGB sum, where the fifth RGB sum is the RGB sum of the third image for visible light; and
performing a distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio, and the eleventh ratio by using a Euclidean distance algorithm to obtain the second spatial distance.
In one implementation, the processor 501 may further perform the following operation:
adding the first R mean value, the first G mean value, and the first B mean value of the first image to obtain the second RGB sum.
In one implementation, before adding the first R mean value, the first G mean value, and the first B mean value of the first image to obtain the second RGB sum, the processor 501 may further perform the following operations:
dividing the first image into regions to obtain a plurality of pixel blocks;
averaging the R values, G values, and B values of all pixels contained in each pixel block to obtain an R mean value, a G mean value, and a B mean value of that pixel block;
obtaining the pixel blocks whose R mean value, G mean value, and B mean value are all smaller than a preset threshold;
averaging the R mean values of the obtained pixel blocks to obtain the first R mean value;
averaging the G mean values of the obtained pixel blocks to obtain the first G mean value; and
averaging the B mean values of the obtained pixel blocks to obtain the first B mean value.
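The block-filtering step above can be sketched as follows; the function name and the tuple representation of a pixel block's channel means are illustrative assumptions:

```python
def first_channel_means(blocks, threshold):
    # blocks: list of (R mean, G mean, B mean) tuples, one per pixel block.
    # Keep only blocks whose three channel means are all below the threshold,
    # i.e. the darker blocks less likely to be saturated by strong light.
    kept = [b for b in blocks if all(c < threshold for c in b)]
    n = len(kept)
    # Average each channel over the kept blocks to get the first R/G/B means.
    return (sum(b[0] for b in kept) / n,
            sum(b[1] for b in kept) / n,
            sum(b[2] for b in kept) / n)
```

A caller would pass the per-block means computed from the region division, together with the preset threshold.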
In one implementation, the first image is acquired by the image sensor 504 in a color mode; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the processor 501 may perform the following operations when obtaining the luminance evaluation value from the color information of the three-dimensional space and the shooting parameters of the first image:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
dividing the RGB sum of the first image by the reference factor to obtain a visible light brightness evaluation value.
In one implementation, the processor 501 may further perform the following operations:
acquiring multiple frames of images captured by the image sensor;
acquiring a light brightness evaluation value of each frame of image; and
setting the working mode of the image sensor 504 according to the light brightness evaluation value of each frame of image.
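The embodiment does not fix a particular switching policy, so the following is only a hedged sketch of how the per-frame evaluation values might drive the working mode; the threshold and the hysteresis-like "all recent frames" rule are assumptions:

```python
def select_sensor_mode(evaluation_values, color_threshold):
    # evaluation_values: visible light brightness evaluation values of
    # consecutive recent frames. ASSUMPTION: switch to color mode only
    # when every recent frame has enough visible light; otherwise stay
    # in (or fall back to) black-and-white mode with infrared fill light.
    if all(v >= color_threshold for v in evaluation_values):
        return "color"
    return "black_and_white"
```

Requiring several consecutive frames to clear the threshold avoids oscillating between modes when the scene brightness hovers near the switching point.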
In one implementation, the light brightness evaluation value includes a visible light brightness evaluation value;
after obtaining the light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the processor 501 may further perform the following operation:
controlling a visible light fill light according to the visible light brightness evaluation value.
In one implementation, the light brightness evaluation value includes an infrared light brightness evaluation value;
after obtaining the light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the processor 501 may further perform the following operation:
controlling an infrared fill light according to the infrared light brightness evaluation value.
It should be noted that, for details not mentioned in the embodiment corresponding to FIG. 5 and for the specific implementation of the steps performed by each device, reference may be made to the embodiments shown in FIG. 1 to FIG. 3 and the foregoing description; details are not repeated here.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are wholly or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through a computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server that integrates one or more usable media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a DVD; or a semiconductor medium, such as a solid-state drive (SSD).
In the embodiments of the present application, unless otherwise stated or a logical conflict arises, the terms and descriptions in different embodiments are consistent and may be referenced by one another, and the technical features in different embodiments may be combined according to their inherent logical relationships to form new embodiments.
In the present application, "a plurality" means two or more. In the formula of the present application, the character "/" indicates that the preceding and following related objects are in a relationship of "division".
It should be understood that the various numerical designations in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments. The sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic.

Claims (16)

1. A method for acquiring a luminance evaluation value, comprising:
acquiring RGB statistical information of a first image, wherein the first image is acquired through an image sensor, and the RGB statistical information comprises R values, G values and B values of all pixels contained in the first image;
converting the RGB statistical information into color information of a three-dimensional space;
and obtaining a brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image.
2. The method of claim 1, wherein the first image is acquired by the image sensor in a black and white mode and in a mixed light scene, the mixed light comprising visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the obtaining of the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
acquiring a first RGB sum of the first image, wherein the first RGB sum is an RGB sum of the first image for visible light; and
dividing the first RGB sum by the reference factor to obtain a visible light brightness evaluation value in the mixed light scene.
3. The method as claimed in claim 2, wherein before dividing the first RGB sum by the reference factor to obtain the evaluation value of the brightness of the visible light in the mixed light scene, the method further comprises:
acquiring a second RGB sum of the first image, wherein the second RGB sum is an RGB sum of the first image for the mixed light;
acquiring a first spatial distance between the mixed light and the infrared light in a three-dimensional space and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
dividing the first spatial distance by the second spatial distance to obtain a first ratio; and
multiplying the first ratio by the second RGB sum to obtain the first RGB sum.
4. The method of claim 1, wherein the first image is acquired by the image sensor in a black and white mode and in a mixed light scene, the mixed light comprising visible light and infrared light; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the obtaining of the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
acquiring a third RGB sum of the first image, wherein the third RGB sum is an RGB sum of the first image for the infrared light; and
dividing the third RGB sum by the reference factor to obtain an infrared light brightness evaluation value in the mixed light scene.
5. The method as claimed in claim 4, wherein before dividing the third RGB sum by the reference factor to obtain the evaluation value of the infrared light brightness in the mixed light scene, the method further comprises:
acquiring a second RGB sum of the first image, wherein the second RGB sum is an RGB sum of the first image for the mixed light;
acquiring a first spatial distance between the mixed light and the infrared light in a three-dimensional space and a second spatial distance between the infrared light and the visible light in the three-dimensional space;
dividing the difference between the second spatial distance and the first spatial distance by the second spatial distance to obtain a second ratio; and
multiplying the second ratio by the second RGB sum to obtain the third RGB sum.
6. The method of claim 3 or 5, wherein acquiring the first spatial distance of the mixed light and the infrared light in the three-dimensional space comprises:
acquiring a second image captured by the image sensor in the black-and-white mode in a scene with only infrared light;
acquiring a third ratio between a first R mean value of the first image and the second RGB sum, a fourth ratio between a first G mean value and the second RGB sum, and a fifth ratio between a first B mean value and the second RGB sum;
acquiring a sixth ratio between a second R mean value of the second image and a fourth RGB sum of the second image, a seventh ratio between a second G mean value and the fourth RGB sum, and an eighth ratio between a second B mean value and the fourth RGB sum, wherein the fourth RGB sum is an RGB sum of the second image for infrared light; and
performing a distance operation on the third ratio, the fourth ratio, the fifth ratio, the sixth ratio, the seventh ratio and the eighth ratio by using a Euclidean distance algorithm to obtain the first spatial distance.
7. The method of claim 3 or 5, wherein acquiring the second spatial distance between the infrared light and the visible light in the three-dimensional space comprises:
acquiring a second image captured by the image sensor in the black-and-white mode in a scene with only infrared light, and acquiring a third image captured by the image sensor in the black-and-white mode in a scene with only visible light;
acquiring a sixth ratio between a second R mean value of the second image and a fourth RGB sum of the second image, a seventh ratio between a second G mean value and the fourth RGB sum, and an eighth ratio between a second B mean value and the fourth RGB sum, wherein the fourth RGB sum is an RGB sum of the second image for infrared light;
acquiring a ninth ratio between a third R mean value of the third image and a fifth RGB sum of the third image, a tenth ratio between a third G mean value and the fifth RGB sum, and an eleventh ratio between a third B mean value and the fifth RGB sum, wherein the fifth RGB sum is an RGB sum of the third image for visible light; and
performing a distance operation on the sixth ratio, the seventh ratio, the eighth ratio, the ninth ratio, the tenth ratio and the eleventh ratio by using a Euclidean distance algorithm to obtain the second spatial distance.
8. The method of claim 3 or 5, wherein the acquiring a second RGB sum of the first image comprises:
adding a first R mean value, a first G mean value and a first B mean value of the first image to obtain the second RGB sum.
9. The method of claim 8, wherein before adding the first R mean value, the first G mean value and the first B mean value of the first image to obtain the second RGB sum, the method further comprises:
dividing the first image into regions to obtain a plurality of pixel blocks;
averaging the R values, G values and B values of all pixels contained in each pixel block to obtain an R mean value, a G mean value and a B mean value of each pixel block;
obtaining pixel blocks whose R mean value, G mean value and B mean value are all smaller than a preset threshold;
averaging the R mean values of the obtained pixel blocks to obtain the first R mean value;
averaging the G mean values of the obtained pixel blocks to obtain the first G mean value; and
averaging the B mean values of the obtained pixel blocks to obtain the first B mean value.
10. The method of claim 1, wherein the first image is acquired by the image sensor in a color mode; the shooting parameters comprise aperture gain, shutter time and gain multiple;
the obtaining of the luminance evaluation value according to the color information of the three-dimensional space and the shooting parameter of the first image includes:
multiplying the aperture gain, the shutter time and the gain multiple to obtain a reference factor;
dividing the RGB sum of the first image by the reference factor to obtain a visible light brightness evaluation value.
11. The method of claim 1, further comprising:
acquiring multiple frames of images captured by the image sensor;
acquiring a light brightness evaluation value of each frame of image; and
setting a working mode of the image sensor according to the light brightness evaluation value of each frame of image.
12. The method according to claim 1, wherein the light brightness evaluation value includes a visible light brightness evaluation value;
after the obtaining a light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the method further comprises:
controlling a visible light fill light according to the visible light brightness evaluation value.
13. The method according to claim 1, wherein the light brightness evaluation value includes an infrared light brightness evaluation value;
after the obtaining a light brightness evaluation value according to the color information of the three-dimensional space and the shooting parameters of the first image, the method further comprises:
controlling an infrared fill light according to the infrared light brightness evaluation value.
14. An apparatus for acquiring a luminance evaluation value, characterized in that the apparatus includes a unit for implementing the method of acquiring a luminance evaluation value according to any one of claims 1 to 13.
15. A computer storage medium storing a computer program or instructions that, when executed by a processor, cause the processor to execute the method of acquiring a light brightness evaluation value according to any one of claims 1 to 13.
16. An image processing apparatus comprising a processor, the processor being coupled to a memory,
the memory to store instructions;
the processor is configured to execute the instructions in the memory, so that the image processing apparatus executes the method of acquiring a light brightness evaluation value according to any one of claims 1 to 13.
CN201910098291.5A 2019-01-30 2019-01-30 Method and device for acquiring brightness evaluation value and computer storage medium Active CN111510636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910098291.5A CN111510636B (en) 2019-01-30 2019-01-30 Method and device for acquiring brightness evaluation value and computer storage medium


Publications (2)

Publication Number Publication Date
CN111510636A true CN111510636A (en) 2020-08-07
CN111510636B CN111510636B (en) 2021-07-09

Family

ID=71877373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910098291.5A Active CN111510636B (en) 2019-01-30 2019-01-30 Method and device for acquiring brightness evaluation value and computer storage medium

Country Status (1)

Country Link
CN (1) CN111510636B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352390A (en) * 2020-03-03 2020-06-30 马鞍山职业技术学院 Visual intelligent robot control system based on real-time analysis

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104639923A (en) * 2015-02-04 2015-05-20 华为技术有限公司 Method and device for processing image data, and terminal
CN106231179A (en) * 2016-07-29 2016-12-14 浙江大华技术股份有限公司 One the most double optical-filter switcher changing method and device
US20170330053A1 (en) * 2016-05-11 2017-11-16 Center For Integrated Smart Sensors Foundation Color night vision system and operation method thereof
CN108289164A (en) * 2017-01-10 2018-07-17 杭州海康威视数字技术股份有限公司 A kind of mode switching method and device of the video camera with infrared light compensating lamp
CN108307125A (en) * 2018-02-08 2018-07-20 腾讯科技(深圳)有限公司 A kind of image-pickup method, device and storage medium
CN109151426A (en) * 2017-06-28 2019-01-04 杭州海康威视数字技术股份有限公司 A kind of white balance adjustment method, device, camera and medium


Also Published As

Publication number Publication date
CN111510636B (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN107635102B (en) Method and device for acquiring exposure compensation value of high-dynamic-range image
JP7141428B2 (en) Apparatus, computer program and method for generating high dynamic range (HDR) pixel streams
CN108683862B (en) Imaging control method, imaging control device, electronic equipment and computer-readable storage medium
CN108989700B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN112235484B (en) System and method for generating digital images
CN110225248B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN105208281B (en) A kind of night scene image pickup method and device
CN112752023B (en) Image adjusting method and device, electronic equipment and storage medium
CN109040607B (en) Imaging control method, imaging control device, electronic device and computer-readable storage medium
CN108683861A (en) Shoot exposal control method, device, imaging device and electronic equipment
CN103227928B (en) White balance adjusting method and device
CN104253948A (en) Method and apparatus for distributed image processing in cameras for minimizing artifacts in stitched images
CN108881701B (en) Shooting method, camera, terminal device and computer readable storage medium
CN108337446B (en) High dynamic range image acquisition method, device and equipment based on double cameras
US11601600B2 (en) Control method and electronic device
CN111447372B (en) Control method, device, equipment and medium for brightness parameter adjustment
CN108965729A (en) Control method, device, electronic equipment and computer readable storage medium
CN114697628B (en) Image acquisition method, apparatus, device, and medium
CN111510636B (en) Method and device for acquiring brightness evaluation value and computer storage medium
US11451719B2 (en) Image processing apparatus, image capture apparatus, and image processing method
EP2658245B1 (en) System and method of adjusting camera image data
CN108337448B (en) High dynamic range image acquisition method and device, terminal equipment and storage medium
CN109345602A (en) Image processing method and device, storage medium, electronic equipment
CN109600547B (en) Photographing method and device, electronic equipment and storage medium
CN114125408A (en) Image processing method and device, terminal and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant