CN116076081A - Image processing method and device

Image processing method and device

Info

Publication number
CN116076081A
Authority
CN
China
Prior art keywords
image
images
sub
module
signal processor
Prior art date
Legal status
Pending
Application number
CN202180002388.7A
Other languages
Chinese (zh)
Inventor
武隽
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN116076081A

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/76 — Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 — Camera processing pipelines; Components thereof

Abstract

The image processing method and device can be applied to the technical field of image processing. The method comprises the following steps: splitting an acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is less than or equal to the set brightness threshold (101); and sequentially sending the plurality of first sub-images to an image signal processor (102). In this way, the image sensor splits a high-brightness image into a plurality of sub-images whose brightness stays within the threshold, so that the image signal processor can process the sub-images normally, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.

Description

Image processing method and device
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to an image processing method and an image processing device.
Background
During image processing, an image signal processor (ISP) may be responsible for receiving and processing the raw signal data output by an image sensor (sensor).
In general, the ISP and the image sensor need to record and process brightness using matching bit depth parameters during image processing. The larger the difference in luminance within an image, the larger the required bit depth parameter. If the bit depth parameter of the ISP does not match that of the image sensor, the image cannot be processed.
Therefore, how to process an image when the bit depth parameter of the ISP does not match the bit depth parameter of the image sensor is a problem that currently needs to be solved.
Disclosure of Invention
The embodiment of the disclosure provides an image processing method and device, which can be applied to the technical field of image processing.
In a first aspect, an embodiment of the present disclosure provides a method for processing an image, including:
splitting an acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to the set brightness threshold;
and sequentially transmitting the plurality of first sub-images to an image signal processor.
Optionally, the method further comprises:
receiving the first bit depth parameter sent by the image signal processor;
and determining the set brightness threshold according to the first bit depth parameter.
Optionally, the method further comprises:
and determining the number of the plurality of first sub-images according to the first bit depth parameter and the second bit depth parameter corresponding to the image sensor.
Optionally, the method further comprises:
and receiving a set brightness threshold value sent by the image signal processor.
Optionally, the method further comprises:
and sending a second bit depth parameter corresponding to the image sensor to the image signal processor.
Optionally, the sequentially sending the plurality of first sub-images to the image signal processor includes:
and sequentially transmitting the plurality of first sub-images to the image signal processor at the same first time interval.
Optionally, the method further comprises:
and after the plurality of first sub-images are sent and a second time interval passes, sending a plurality of second sub-images to the image signal processor, wherein the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
In a second aspect, an embodiment of the present disclosure provides a method for processing an image, including:
receiving a plurality of first sub-images, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to a set brightness threshold value;
processing the plurality of first sub-images to generate a plurality of processed images;
and fusing the plurality of processed images to generate a fused image.
Optionally, the method further comprises:
transmitting the set brightness threshold to an image sensor;
or, alternatively,
and sending a first bit depth parameter corresponding to the image signal processor to the image sensor.
Optionally, the fusing the plurality of processed images to generate a fused image includes:
determining the number of image fusion according to the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor;
and according to the image fusion quantity, sequentially fusing the plurality of processed images to generate each fused image.
Optionally, the fusing the plurality of processed images to generate a fused image includes:
determining first sub-images to be fused according to the receiving time intervals among the first sub-images;
and fusing the processed images corresponding to the first sub-images to be fused to generate the fused images.
Optionally, the determining the first sub-image to be fused according to the receiving time interval between the first sub-images includes:
determining that any two first sub-images are first sub-images to be fused, in response to the receiving time interval between the two first sub-images being a first time interval;
or, alternatively,
determining that any two first sub-images are first sub-images not to be fused, in response to the receiving time interval between the two first sub-images being a second time interval;
wherein the first time interval is different from the second time interval.
In a third aspect, an embodiment of the present disclosure provides a method for processing an image, including:
transmitting the acquired image to a processor;
receiving an image compressed by the processor and statistical information corresponding to the image, wherein a brightness value corresponding to each pixel point in the compressed image is smaller than or equal to a set brightness threshold value;
and processing the compressed image according to the statistical information to generate a fused image.
Optionally, the method further comprises:
transmitting a first bit depth parameter corresponding to an image signal processor to the processor;
or, alternatively,
and sending the set brightness threshold to the processor.
Optionally, the statistical information includes at least one of: automatic exposure control statistics, automatic focus control statistics, and automatic white balance statistics.
In a fourth aspect, an embodiment of the present disclosure provides a method for processing an image, including:
receiving an image sent by a camera module;
analyzing the image to determine statistical information corresponding to the image;
compressing the image according to a set brightness threshold to obtain a compressed image, wherein a brightness value corresponding to each pixel point in the compressed image is smaller than or equal to the set brightness threshold;
and sending the compressed image and the statistical information corresponding to the image to the camera module.
Optionally, the method further comprises:
and receiving the set brightness threshold sent by the camera module.
Optionally, the method further comprises:
receiving a first bit depth parameter corresponding to an image signal processor sent by the camera module;
and determining the set brightness threshold according to the first bit depth parameter.
In a fifth aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the splitting module is used for splitting the acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to the set brightness threshold;
and the transmitting module is used for sequentially transmitting the plurality of first sub-images to the image signal processor.
Optionally, the method further comprises:
the receiving module is used for receiving the first bit depth parameter sent by the image signal processor;
and the determining module is used for determining the set brightness threshold according to the first bit depth parameter.
Optionally, the determining module is further configured to:
and determining the number of the plurality of first sub-images according to the first bit depth parameter and the second bit depth parameter corresponding to the image sensor.
Optionally, the receiving module is further configured to:
and receiving a set brightness threshold value sent by the image signal processor.
Optionally, the sending module is further configured to:
and sending a second bit depth parameter corresponding to the image sensor to the image signal processor.
Optionally, the sending module is specifically configured to:
and sequentially transmitting the plurality of first sub-images to the image signal processor at the same first time interval.
Optionally, the sending module is further configured to:
and after the plurality of first sub-images are sent and a second time interval passes, sending a plurality of second sub-images to the image signal processor, wherein the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
In a sixth aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the receiving module is used for receiving a plurality of first sub-images, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to a set brightness threshold value;
the processing module is used for processing the plurality of first sub-images to generate a plurality of processed images;
and the fusion module is used for fusing the plurality of processed images to generate a fused image.
Optionally, the device further comprises a sending module, configured to:
transmitting the set brightness threshold to an image sensor;
or, alternatively,
and sending a first bit depth parameter corresponding to the image signal processor to the image sensor.
Optionally, the fusion module is specifically configured to:
determining the number of image fusion according to the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor;
and according to the image fusion quantity, sequentially fusing the plurality of processed images to generate each fused image.
Optionally, the fusion module includes:
a determining unit, configured to determine a first sub-image to be fused according to a receiving time interval between the first sub-images;
and the generating unit is used for fusing the processed images corresponding to the first sub-images to be fused so as to generate the fused images.
Optionally, the determining unit is specifically configured to:
determining that any two first sub-images are first sub-images to be fused, in response to the receiving time interval between the two first sub-images being a first time interval;
or, alternatively,
determining that any two first sub-images are first sub-images not to be fused, in response to the receiving time interval between the two first sub-images being a second time interval;
wherein the first time interval is different from the second time interval.
In a seventh aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the sending module is used for sending the acquired image to the processor;
the receiving module is used for receiving the image compressed by the processor and the statistical information corresponding to the image, wherein the brightness value corresponding to each pixel point in the compressed image is smaller than or equal to a set brightness threshold value;
and the processing module is used for processing the compressed image according to the statistical information so as to generate a fused image.
Optionally, the sending module is further configured to:
transmitting a first bit depth parameter corresponding to an image signal processor to the processor;
or, alternatively,
and sending the set brightness threshold to the processor.
Optionally, the statistical information includes at least one of: automatic exposure control statistics, automatic focus control statistics, and automatic white balance statistics.
In an eighth aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
the receiving module is used for receiving the image sent by the camera module;
the analysis module is used for analyzing the image to determine the statistical information corresponding to the image;
the acquisition module is used for compressing the image according to a set brightness threshold value to acquire a compressed image, wherein the brightness value corresponding to each pixel point in the compressed image is smaller than or equal to the set brightness threshold value;
and the sending module is used for sending the compressed image and the statistical information corresponding to the image to the camera module.
Optionally, the receiving module is further configured to:
and receiving the set brightness threshold sent by the camera module.
Optionally, the receiving module is further configured to receive a first bit depth parameter corresponding to the image signal processor sent by the image capturing module;
the obtaining module is further configured to determine the set brightness threshold according to the first bit depth parameter.
In a ninth aspect, an embodiment of the present disclosure provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to invoke and execute the executable instructions stored in the memory to implement the image processing method according to the embodiment of any of the above aspects of the disclosure.
In a tenth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method set forth in the embodiments of any one of the above aspects of the present disclosure.
In an eleventh aspect, embodiments of the present disclosure provide a computer program product comprising instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method set forth in the embodiments of any one of the above aspects of the present disclosure.
According to the image processing method and device, the acquired first image can be split into a plurality of first sub-images according to the set brightness threshold, and then the plurality of first sub-images can be sequentially sent to the image signal processor. In this way, the image sensor splits a high-brightness image into a plurality of sub-images whose brightness stays within the threshold, so that the image signal processor can process the sub-images normally, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background of the present disclosure, the following description will explain the drawings that are required to be used in the embodiments or the background of the present disclosure.
FIG. 1 is a flow chart of a method for processing an image according to an embodiment of the disclosure;
FIG. 2 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
FIG. 3 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
FIG. 4 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
FIG. 5 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
FIG. 6 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
FIG. 7 is a flow chart of a method for processing an image according to another embodiment of the present disclosure;
fig. 8 is a schematic structural view of an image processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic structural view of an image processing apparatus according to another embodiment of the present disclosure;
fig. 10 is a schematic structural view of an image processing apparatus according to another embodiment of the present disclosure;
fig. 11 is a schematic structural view of an image processing apparatus according to another embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
The image processing method and apparatus according to the embodiments of the present disclosure are described below with reference to the accompanying drawings.
The image processing method according to the embodiment of the present disclosure may be performed by the image processing apparatus provided by the embodiment of the present disclosure, and the apparatus may be configured in an electronic device.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 1, the method may include, but is not limited to, the steps of:
step 101, splitting the acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to the set brightness threshold.
In general, in an image processing process, an image sensor in an image capturing module may capture an image, and then send the captured image to an image signal processor, so that the image signal processor processes the image.
It can be understood that, in the case that the first bit depth parameter corresponding to the image signal processor is smaller and the second bit depth parameter corresponding to the image sensor is larger, if the image sensor directly sends the acquired image to the image signal processor, the image signal processor cannot process the image.
Therefore, in the embodiment of the disclosure, in the case that the bit depth parameter of the image sensor is not matched with the bit depth parameter of the image signal processor, in order to enable the image collected by the image sensor to be reliably processed by the image signal processor, the image sensor may split the collected first image to obtain a plurality of corresponding first sub-images, and then send the plurality of first sub-images to the image signal processor for processing.
The brightness threshold may be a value set in advance, or may be adjusted as needed, which is not limited in the present disclosure.
It can be appreciated that the number of first sub-images may be preset as a fixed value; alternatively, it may be determined according to the relationship between the brightness value of each pixel point in the first image and the set brightness threshold; or it may be determined in other manners, which is not limited by the present disclosure.
For example, if the number of the split first sub-images is set to n, the first images may be split according to the set brightness threshold, so as to obtain n corresponding first sub-images.
For another example, if the set brightness threshold is a, the brightness value of each pixel point in the first image may be the luminance matrix shown in formula image PCTCN2021111625-APPB-000001 (not reproduced here). The luminance values of the split first sub-images may then be those shown in formula images PCTCN2021111625-APPB-000002 and PCTCN2021111625-APPB-000003; alternatively, the luminance values of each pixel point of the split first sub-images may be those shown in formula images PCTCN2021111625-APPB-000004 and PCTCN2021111625-APPB-000005.
The above examples are only examples, and are not intended to limit the brightness threshold, the number of first sub-images, the brightness value of each pixel, and the like in the embodiments of the present disclosure.
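As a rough, non-limiting illustration of the splitting in step 101, the Python sketch below splits a high-bit-depth frame into sub-images that each stay at or below the set brightness threshold. The particular splitting rule (clip each pixel at the threshold and carry the remainder into the next sub-image, so that the sub-images sum back to the original frame) is only an assumption chosen to be consistent with the summing-based fusion described later; the names split_image, threshold and num_sub are hypothetical.

```python
import numpy as np

def split_image(image, threshold, num_sub):
    """Split `image` into `num_sub` sub-images whose pixel brightness never
    exceeds `threshold`, such that the sub-images sum back to the original."""
    remaining = image.astype(np.int64)
    sub_images = []
    for _ in range(num_sub):
        sub = np.clip(remaining, 0, threshold)  # this sub-image stays within the threshold
        sub_images.append(sub)
        remaining = remaining - sub             # carry leftover brightness forward
    return sub_images

# Example: a 10-bit sensor frame split for an ISP whose threshold is 256.
frame = np.array([[1023, 300], [512, 90]], dtype=np.uint16)
subs = split_image(frame, threshold=256, num_sub=4)
assert (sum(subs) == frame).all()              # the split is lossless
assert all(int(s.max()) <= 256 for s in subs)  # each sub-image respects the threshold
```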
Step 102, sequentially sending the plurality of first sub-images to an image signal processor.
For example, the first sub-images obtained by the image sensor through splitting are the first sub-image 1, the first sub-image 2, the first sub-image 3 and the first sub-image 4, which may be sequentially transmitted to the image signal processor in the order of the first sub-image 1, the first sub-image 2, the first sub-image 3 and the first sub-image 4, which is not limited in this disclosure.
Alternatively, a plurality of first sub-images may be sequentially transmitted to the image signal processor at the same first time interval.
The first time interval may be set in advance, or may be a value that matches a pixel value of the first sub-image. The present disclosure is not limited in this regard.
For example, the first time interval is T1. The plurality of first sub-images are respectively: a first sub-image 1, a first sub-image 2 and a first sub-image 3. If the first sub-image 1 is transmitted at time T0, the first sub-image 2 may be transmitted at time t0+t1, the first sub-image 3 may be transmitted at time t0+2t1, and so on, which is not limited in the present disclosure.
By implementing the embodiment of the disclosure, the acquired first image can be split into a plurality of first sub-images according to the set brightness threshold, and then the plurality of first sub-images can be sequentially sent to the image signal processor. In this way, the image sensor splits the high-brightness image into a plurality of sub-images whose brightness stays within the threshold, so that the image signal processor can process the sub-images normally, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing an image according to an embodiment of the disclosure. As shown in fig. 2, the method may include, but is not limited to, the steps of:
In step 201, a first bit depth parameter sent by an image signal processor is received.
The first bit depth parameter is the number of bits supported by the image signal processor, and may for example be 10 bits, 12 bits, or the like, which is not limited in the disclosure.
Step 202, determining a set brightness threshold according to the first bit depth parameter.
For example, if the first bit depth parameter is 8, the set brightness threshold may be 256, or any value less than 256, etc., which is not limited in this disclosure.
Step 203, determining the number of the plurality of first sub-images according to the first bit depth parameter corresponding to the image signal processor and the second bit depth parameter corresponding to the image sensor.
For example, if the first bit depth parameter corresponding to the image signal processor is 8 and the second bit depth parameter corresponding to the image sensor is 10, it may be determined that the number of the first sub-images obtained after splitting may be 4.
Alternatively, if the first bit depth parameter corresponding to the image signal processor is 7 and the second bit depth parameter corresponding to the image sensor is 10, it can be determined that the number of first sub-images obtained after splitting may be 8.
It should be noted that the foregoing examples are only illustrative, and are not intended to limit the first bit depth parameter, the second bit depth parameter, the number of first sub-images, and the like in the embodiments of the present disclosure.
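For illustration only, the sketch below captures the arithmetic implied by the examples above: the set brightness threshold can be taken as the largest value representable with the ISP's bit depth, and the number of sub-images as two to the power of the bit depth difference. The function names are hypothetical and this rule of thumb is an assumption drawn from these examples, not a definition fixed by the disclosure.

```python
def brightness_threshold(isp_bit_depth):
    """One possible choice of set brightness threshold for a given ISP bit depth."""
    return 2 ** isp_bit_depth

def num_sub_images(isp_bit_depth, sensor_bit_depth):
    """Number of first sub-images needed so the sensor's range fits within the ISP's."""
    return 2 ** (sensor_bit_depth - isp_bit_depth)

assert brightness_threshold(8) == 256   # matches the example in step 202
assert num_sub_images(8, 10) == 4       # ISP 8 bits, sensor 10 bits
assert num_sub_images(7, 10) == 8       # ISP 7 bits, sensor 10 bits
```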
Optionally, the set brightness threshold may also be sent by the image signal processor, so in an embodiment of the disclosure, the image sensor may further receive the set brightness threshold sent by the image signal processor first, and then split the acquired first image according to the set brightness threshold and the number of first sub-images.
Step 204, sequentially sending the plurality of first sub-images to the image signal processor at the same first time interval.
Step 205, after the plurality of first sub-images are sent and a second time interval passes, a plurality of second sub-images are sent to the image signal processor, where the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
For example, the first time interval is T1, and the second time interval is T2. The image sensor may transmit one first sub-image every first time interval T1; after the transmission of the plurality of first sub-images is completed, it may wait the second time interval T2 and then transmit the second sub-images, and so on, which is not limited in the present disclosure.
In the embodiment of the disclosure, different time intervals can be adopted for transmitting different sub-images corresponding to different images, so that the image signal processor can determine the images corresponding to the sub-images according to the receiving time intervals among the sub-images, thereby providing a guarantee for improving the accuracy and the reliability of image processing.
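A minimal sketch of the timed transmission described in steps 204-205 is given below, assuming the sub-images of one frame are spaced by the first time interval and a longer second time interval separates consecutive frames. The `send` callback and the concrete interval values are placeholders, not part of the disclosure.

```python
import time

def send_split_frame(sub_images, send, first_interval, second_interval):
    """Send one frame's sub-images spaced by `first_interval` seconds, then
    leave a `second_interval` gap that marks the boundary to the next frame."""
    for i, sub in enumerate(sub_images):
        if i > 0:
            time.sleep(first_interval)   # gap between sub-images of the same frame
        send(sub)
    time.sleep(second_interval)          # longer gap before the next frame's sub-images

# Usage sketch: `send` stands in for the sensor-to-ISP transfer.
send_split_frame(["sub1", "sub2", "sub3"], send=print,
                 first_interval=0.01, second_interval=0.03)
```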
Optionally, a second bit depth parameter corresponding to the image sensor may be sent to the image signal processor, so that the image signal processor may combine with the second bit depth parameter to perform image processing, thereby improving accuracy and reliability of image processing.
By implementing the embodiment of the disclosure, the first bit depth parameter sent by the image signal processor may be received first, a set brightness threshold may then be determined according to the first bit depth parameter, the acquired first image may be split into a plurality of first sub-images according to the set brightness threshold, and the plurality of first sub-images may be sequentially sent to the image signal processor at the same first time interval; after the plurality of first sub-images have been sent and a second time interval has passed, a plurality of second sub-images may be sent to the image signal processor. In this way, the image sensor splits a high-brightness image into a plurality of sub-images within the brightness threshold and sends the sub-images to the image signal processor at the corresponding time intervals, so that the image signal processor can process the received sub-images, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 3, fig. 3 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 3, the method may include, but is not limited to, the steps of:
step 301, receiving a plurality of first sub-images, where a brightness value corresponding to each pixel point in each first sub-image is less than or equal to a set brightness threshold.
It is understood that the image signal processor may sequentially receive the plurality of first sub-images in order.
Step 302, a plurality of first sub-images are processed to generate a plurality of processed images.
It will be appreciated that there may be a plurality of processing operations performed by the image signal processor on the plurality of first sub-images.
For example, black level compensation (BLC), lens shading correction (LSC), bad pixel correction (BPC), color interpolation (demosaicing), Bayer noise removal, automatic white balance (AWB) correction, color correction, gamma correction, color space conversion (RGB to YUV), color noise removal and edge enhancement in the YUV color space, color and contrast enhancement, automatic exposure control, and the like may be performed, which is not limited by the present disclosure.
Step 303, fusing the plurality of processed images to generate a fused image.
There are various ways to fuse the multiple processed images.
For example, the brightness of the pixel point at the same position in the plurality of processed images may be superimposed.
For example, if the brightness values of the pixels of the 2 processed images are the matrices shown in formula images PCTCN2021111625-APPB-000006 and PCTCN2021111625-APPB-000007 (not reproduced here), the brightness values of the pixels at corresponding positions in the 2 images can be summed position by position, and the brightness values of the pixels in the fused image are then those shown in formula image PCTCN2021111625-APPB-000008.
Alternatively, the weights corresponding to the respective luminance values may be set in advance.
For example, a larger luminance value may correspond to a higher weight, and a smaller luminance value to a lower weight. The brightness of the pixel points at the same position in the processed images can then be weighted and fused according to the weights, so as to generate the fused image.
The above examples are merely illustrative, and are not intended to limit the manner in which the plurality of processed images are fused in the embodiments of the present disclosure.
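As an illustration of step 303, the sketch below fuses processed sub-images by summing the brightness of pixels at the same position, with an optional weighted variant. The disclosure describes weights associated with luminance values; the per-image scalar weights used here are a simplification for illustration, and the name fuse_images is hypothetical.

```python
import numpy as np

def fuse_images(processed, weights=None):
    """Fuse processed sub-images by (optionally weighted) pixel-wise summation."""
    stack = np.stack([np.asarray(p, dtype=np.int64) for p in processed])
    if weights is None:
        return stack.sum(axis=0)                    # plain sum at each pixel position
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    return (stack * w).sum(axis=0)                  # simplified weighted fusion

a = np.array([[256, 100], [256, 0]])
b = np.array([[255, 0], [200, 0]])
print(fuse_images([a, b]))               # [[511 100] [456   0]]
print(fuse_images([a, b], [0.6, 0.4]))   # weighted variant
```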
By implementing the embodiment of the disclosure, a plurality of first sub-images can be received first, then the plurality of first sub-images are processed to generate a plurality of processed images, and then the plurality of processed images are fused to generate a fused image. Therefore, the image signal processor can generate a fused image by processing and fusing the received sub-images, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 4, fig. 4 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 4, the method may include, but is not limited to, the steps of:
step 401, receiving a plurality of first sub-images, where a brightness value corresponding to each pixel point in each first sub-image is less than or equal to a set brightness threshold.
Step 402, the plurality of first sub-images are processed to generate a plurality of processed images.
It should be noted that, the specific content and implementation manner of step 401 and step 402 may refer to the descriptions of other embodiments of the present disclosure, and are not described herein again.
Step 403, determining the number of image fusion according to the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor.
Alternatively, the second bit depth parameter may be sent by the image sensor, which is not limited by the present disclosure.
It can be understood that the larger the difference between the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor, the larger the number of image fusion; the smaller the difference between the two, the smaller the number of image fusion.
For example, if the second bit depth parameter corresponding to the image sensor is 10 and the first bit depth parameter corresponding to the image signal processor is 7, the number of image fusion can be determined to be 2^3, i.e. 8. Alternatively, if the second bit depth parameter corresponding to the image sensor is 12 and the first bit depth parameter corresponding to the image signal processor is 10, the number of image fusion can be determined to be 2^2, i.e. 4.
It should be noted that the foregoing examples are only illustrative, and are not intended to limit the second bit depth parameter, the first bit depth parameter, the number of image fusion, etc. in the embodiments of the present disclosure.
In one possible implementation, the number of image fusions may be the same as the number of sub-images into which the image is split, which is not limited by the present disclosure.
And step 404, fusing the plurality of processed images in turn according to the number of image fusion to generate each fused image.
For example, the determined number of image fusion is 4. The image signal processor may sequentially fuse every 4 of the processed images to generate a corresponding fused image, and so on, which is not limited by the present disclosure.
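The grouping in step 404 can be pictured as in the sketch below: processed images are fused in consecutive groups whose size equals the determined number of image fusion. The helper names and the choice to skip an incomplete trailing group are assumptions for illustration.

```python
def fuse_in_groups(processed_images, fusion_count, fuse):
    """Fuse every `fusion_count` consecutive processed images into one fused image."""
    fused = []
    for i in range(0, len(processed_images), fusion_count):
        group = processed_images[i:i + fusion_count]
        if len(group) == fusion_count:   # assumption: only complete groups are fused
            fused.append(fuse(group))
    return fused

# With fusion_count = 4, processed images 1-4 form the first fused image, 5-8 the next.
print(fuse_in_groups(list(range(1, 9)), fusion_count=4, fuse=sum))  # [10, 26]
```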
It will be appreciated that there may be a variety of situations when fusing a plurality of processed images.
For example, luminance values of pixel points at the same position in the plurality of processed images may be superimposed to generate a fused image.
Alternatively, the weights corresponding to the respective luminance values may be set in advance. Therefore, the brightness values of the pixel points at the same position in the processed images can be weighted and fused according to the weights, so that the fused image is generated.
The above examples are merely illustrative, and are not intended to limit the manner in which the processed images are fused in the embodiments of the present disclosure.
Thus, in the embodiments of the present disclosure, the image signal processor generates a fused image by fusing a plurality of processed images. The brightness of the fused image is higher than that of any single received sub-image, so that even if the first bit depth parameter corresponding to the image signal processor does not match the second bit depth parameter corresponding to the image sensor, an image with higher brightness can still be processed, which improves the accuracy and reliability of the image processing.
Optionally, the image signal processor may further send a first bit depth parameter corresponding to the image signal processor to the image sensor, so that the image sensor may determine a set brightness threshold according to the first bit depth parameter, and split the image according to the set brightness threshold to obtain a plurality of sub-images.
Optionally, the image signal processor may further determine a set brightness threshold according to the first bit depth parameter corresponding to the image signal processor, and then send the set brightness threshold to the image sensor, so that the image sensor may split the image according to the set brightness threshold to obtain a plurality of sub-images.
By implementing the embodiment of the disclosure, a plurality of first sub-images can be received first, the plurality of first sub-images can then be processed to generate a plurality of processed images, and the number of image fusion is determined according to the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor, so that the plurality of processed images are fused in sequence according to the number of image fusion to generate each fused image. In this way, the image signal processor can generate fused images by processing and fusing the plurality of sub-images, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 5, the method may include, but is not limited to, the steps of:
In step 501, a plurality of first sub-images are received, where a brightness value corresponding to each pixel point in each first sub-image is less than or equal to a set brightness threshold.
Step 502, a plurality of first sub-images are processed to generate a plurality of processed images.
It should be noted that, the specific content and implementation manner of step 501 and step 502 may refer to the descriptions of other embodiments of the present disclosure, and are not described herein.
In step 503, the first sub-images to be fused are determined according to the receiving time intervals between the first sub-images.
It will be appreciated that the sub-images with the same receiving time interval may be determined as the sub-images to be fused, and if the receiving time intervals of the two sub-images are different, it may be determined that the two sub-images are not the sub-images to be fused, and the disclosure is not limited thereto.
Optionally, in the case that the receiving time interval between any two first sub-images is the first time interval, it may be determined that any two first sub-images are first sub-images to be fused. Or determining that any two first sub-images are non-to-be-fused first sub-images under the condition that the receiving time interval between any two first sub-images is the second time interval.
Wherein the first time interval is different from the second time interval.
For example, if the receiving time interval between the first sub-image 1, the first sub-image 2 and the first sub-image 3 is the first time interval T1, and the receiving time interval between the first sub-image 3 and the first sub-image 4 is the second time interval T2, it can be determined that the first sub-image 1, the first sub-image 2 and the first sub-image 3 are first sub-images to be fused, and that the first sub-image 4 and the first sub-image 3 are not first sub-images to be fused with each other.
It should be noted that the foregoing examples are only illustrative, and are not meant to be limiting of the first time interval, the second time interval, the first sub-image to be fused, the first sub-image not to be fused, and the like in the embodiments of the present disclosure.
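A small sketch of the interval-based grouping in step 503 follows: consecutive sub-images whose receive-time gap equals the first time interval are treated as belonging to the same image, and any other gap starts a new group. The timestamp representation and the tolerance are assumptions for illustration.

```python
def group_by_interval(received, first_interval, tolerance=1e-3):
    """Group received sub-images: consecutive sub-images whose receive-time gap
    equals the first time interval belong to the same image; any other gap
    (e.g. the second time interval) starts a new group."""
    groups, current, prev_t = [], [], None
    for t, sub in received:                       # received: list of (receive_time, sub_image)
        if prev_t is not None and abs((t - prev_t) - first_interval) > tolerance:
            groups.append(current)                # a different gap marks a new image
            current = []
        current.append(sub)
        prev_t = t
    if current:
        groups.append(current)
    return groups

received = [(0.00, "s1"), (0.01, "s2"), (0.02, "s3"),   # gaps of T1 = 0.01 s
            (0.05, "s4"), (0.06, "s5"), (0.07, "s6")]   # a 0.03 s gap (T2) starts a new image
print(group_by_interval(received, first_interval=0.01))
# [['s1', 's2', 's3'], ['s4', 's5', 's6']]
```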
Step 504, fusing the processed images corresponding to the first sub-images to be fused to generate fused images.
It will be appreciated that there may be a variety of ways to fuse the processed images corresponding to the first sub-image to be fused.
For example, the brightness of the pixel points at the same position in the processed image corresponding to the first sub-image to be fused may be superimposed to generate the fused image.
Alternatively, the weights corresponding to the respective luminance values may be set in advance. Therefore, the brightness of the pixel points at the same position of the processed image corresponding to the first sub-image to be fused can be weighted and fused according to the weight, so that the fused image is generated.
The foregoing examples are merely illustrative, and are not intended to be limiting of the manner in which the processed images corresponding to the first sub-images to be fused are fused in the embodiments of the present disclosure.
By implementing the embodiment of the disclosure, a plurality of first sub-images may be received first, then the plurality of first sub-images may be processed to generate a plurality of processed images, the first sub-images to be fused are determined according to the receiving time intervals between the first sub-images, and then the processed images corresponding to the first sub-images to be fused may be fused to generate a fused image. Therefore, the image signal processor can generate a fused image by processing and fusing the plurality of sub-images, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 6, fig. 6 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 6, the method may include, but is not limited to, the steps of:
step 601, the acquired image is sent to a processor.
The processor may be disposed outside the camera module, and may be a digital signal processor (DSP) disposed outside the camera module, an image signal processor disposed outside the camera module, or the like, which is not limited in this disclosure.
The camera module may include an image sensor, an image signal processor, and the like, which is not limited in the present disclosure.
It can be appreciated that in the embodiment of the disclosure, the image capturing module may sequentially send the images acquired by the image sensor to the processor, so that the processor may compress the images.
In one possible implementation, the processor may be an Image Signal Processor (ISP) or a Digital Signal Processor (DSP) that is provided separately from the original image processing platform, which is not limited by the present disclosure.
Step 602, receiving an image compressed by a processor and statistical information corresponding to the image, wherein a brightness value corresponding to each pixel point in the compressed image is smaller than or equal to a set brightness threshold.
Alternatively, the statistical information may be automatic exposure control (AEC) statistical information, automatic focus control (AFC) statistical information, or automatic white balance (AWB) statistical information, which is not limited in this disclosure.
It is to be understood that the statistical information may be one item or more of the above, for example, may be automatic exposure control statistical information and automatic focus control statistical information, or may be automatic exposure control, automatic focus control, automatic white balance, or the like, which is not limited in this disclosure.
Optionally, the processor may compress the received image and determine statistical information corresponding to the image, and may then send the compressed image and the statistical information corresponding to the image to the image processing platform or the camera module. Therefore, in the embodiment of the disclosure, the image processing platform or the camera module may receive the image compressed by the processor and the statistical information corresponding to the image.
Optionally, the image processing platform or the image capturing module may send the first bit depth parameter corresponding to the image signal processor to the processor, so that the processor may compress the image according to the first bit depth parameter, so that a brightness value corresponding to each pixel point in the compressed image is less than or equal to a set brightness threshold value, which is not limited in the disclosure.
It is to be understood that the bit depth parameter corresponding to the image signal processor is inconsistent with the bit depth parameter corresponding to the processor, and the image signal processor may be any image signal processor matched with the image capturing module to process an image, which is not limited in this disclosure.
The image signal processor may be disposed in the image capturing module, or may be any image signal processor that is used in association with the image capturing module and is disposed on any image processing platform, which is not limited in this disclosure.
Optionally, the image processing platform or the camera module may also send the set brightness threshold to the processor, so that the processor may perform compression processing on the image, which is not limited in this disclosure.
The image processing platform or the camera module can determine a set brightness threshold according to a first bit depth parameter corresponding to the image signal processor, and send the set brightness threshold to the processor and the like, which is not limited in the disclosure.
Step 603, processing the compressed image according to the statistical information to generate a fused image.
The image processing platform may process the compressed image according to the statistical information, which is not limited in this disclosure.
For example, the image processing platform may perform AWB correction based on the received AWB statistics. Alternatively, BLC, LSC, BPC, color interpolation, bayer noise removal, automatic White Balance (AWB) correction, color correction, gamma correction, color space conversion, color noise removal and edge enhancement, color and contrast enhancement, and the like may also be performed based on the received statistical information, which is not limited in this disclosure.
By implementing the embodiment of the disclosure, the acquired image can be sent to the processor, the image compressed by the processor and the statistical information corresponding to the image are received, and then the compressed image can be processed according to the statistical information to generate the fused image. Therefore, the image processing platform can process the image according to the compressed image and the statistical information received from the processor, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
Referring to fig. 7, fig. 7 is a flowchart illustrating an image processing method according to an embodiment of the disclosure. As shown in fig. 7, the method may include, but is not limited to, the steps of:
step 701, receiving an image sent by an image capturing module.
It will be appreciated that the processor may sequentially receive the images sent by the camera module in order, which is not limited in this disclosure.
Step 702, the image is parsed to determine statistical information corresponding to the image.
The processor may analyze the image in any desirable manner, so as to determine the statistical information corresponding to the image, which is not limited in this disclosure.
Alternatively, the statistical information may be at least one of: automatic exposure control statistics, automatic focus control statistics, automatic white balance statistics, and the like, which are not limited by the present disclosure.
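Purely as an illustration of the kind of per-image statistics mentioned here, the sketch below computes a mean luminance (a stand-in for an automatic exposure control statistic) and per-channel means (a stand-in for automatic white balance statistics). Real AEC/AFC/AWB statistics are implementation specific and are not defined by the disclosure; the names and formulas are assumptions.

```python
import numpy as np

def compute_statistics(rgb_image):
    """Toy statistics: mean luminance (AE stand-in) and per-channel means (AWB stand-in)."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    return {
        "ae_mean_luminance": float(luminance.mean()),
        "awb_channel_means": (float(r.mean()), float(g.mean()), float(b.mean())),
    }

rng = np.random.default_rng(0)
print(compute_statistics(rng.integers(0, 1024, size=(4, 4, 3))))
```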
Step 703, compressing the image according to the set brightness threshold to obtain a compressed image, wherein the brightness value corresponding to each pixel point in the compressed image is smaller than or equal to the set brightness threshold.
It will be appreciated that there may be a variety of situations when compressing an image.
For example, if the set brightness threshold is a and the brightness value of each pixel point in the image sent by the camera module is the matrix shown in formula image PCTCN2021111625-APPB-000009 (not reproduced here), the brightness value of each pixel point in the compressed image may be that shown in formula image PCTCN2021111625-APPB-000010.
alternatively, the specification, size, etc. of the compressed image may be set in advance, and the present disclosure is not limited thereto.
The above examples are merely illustrative, and are not intended to limit the luminance threshold value, the luminance value of each pixel, and the like set in the embodiments of the present disclosure.
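As a rough sketch of the compression in step 703, the code below scales an image so that every pixel brightness ends up at or below the set threshold; simple clipping would be another option. The disclosure does not specify the compression rule, so both the scaling choice and the name compress_image are assumptions.

```python
import numpy as np

def compress_image(image, threshold):
    """Scale the image so every pixel brightness is at most `threshold`.
    (Simple clipping, np.minimum(image, threshold), would be another option.)"""
    image = np.asarray(image, dtype=np.float64)
    peak = image.max()
    if peak <= threshold:
        return image.astype(np.int64)
    return np.round(image * (threshold / peak)).astype(np.int64)

frame = np.array([[1023, 300], [512, 90]])
compressed = compress_image(frame, threshold=256)
assert int(compressed.max()) <= 256
print(compressed)   # [[256  75] [128  23]]
```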
It will be appreciated that the processor may have a variety of ways in determining the set brightness threshold, which is not limited by this disclosure.
Optionally, the processor may first receive a first bit depth parameter corresponding to the image signal processor sent by the image capturing module, and then determine the set brightness threshold according to the first bit depth parameter.
For example, if the first bit depth parameter is 8, the set brightness threshold may be determined to be 256, or any value less than 256, which is not limited in this disclosure.
Optionally, the processor may receive a set brightness threshold sent by the camera module.
For example, if the set brightness threshold sent by the image capturing module is 1024, the processor may determine that the set brightness threshold is 1024 or the like according to the received information, which is not limited in this disclosure.
Step 704, the compressed image and the statistical information corresponding to the image are sent to the camera module.
It can be understood that, if the camera module does not include the image signal processor, the camera module may, after receiving the compressed image and the statistical information corresponding to the image, further send them to the image signal processor, which is not limited in this disclosure. Alternatively, the compressed image and the statistical information corresponding to the image may be sent to the image processing platform, which is not limited in this disclosure.
Optionally, the processor may send the compressed image and the statistical information corresponding to the image to the camera module directly, or may send them to the camera module at corresponding time intervals, which is not limited in this disclosure.
For example, the processor may send the compressed image 1 and the statistical information 1 corresponding to the image 1 to the image capturing module according to the time interval T1, and then may send the compressed image 2 and the statistical information 2 corresponding to the image 2 to the image capturing module according to the time interval T2, which is not limited in the disclosure.
By implementing the embodiment of the disclosure, the processor may receive the image sent by the camera module, then analyze the image to determine statistical information corresponding to the image, compress the image according to the set brightness threshold to obtain a compressed image, and then send the compressed image and the statistical information corresponding to the image to the camera module. In this way, the camera module can process the image according to the compressed image and the statistical information received from the processor, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
The embodiment of the disclosure also provides an image processing device, and fig. 8 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 8, the image processing apparatus 80 includes: splitting module 810 and transmitting module 820.
The splitting module 810 is configured to split the collected first image into a plurality of first sub-images according to a set brightness threshold, where a brightness value corresponding to each pixel point in each first sub-image is less than or equal to the set brightness threshold.
And a transmitting module 820 for sequentially transmitting the plurality of first sub-images to the image signal processor.
Optionally, the method further comprises:
the receiving module is used for receiving the first bit depth parameter sent by the image signal processor;
and the determining module is used for determining the set brightness threshold according to the first bit depth parameter.
Optionally, the determining module is further configured to:
and determining the number of the plurality of first sub-images according to the first bit depth parameter and the second bit depth parameter corresponding to the image sensor.
Optionally, the receiving module is further configured to:
and receiving a set brightness threshold value sent by the image signal processor.
Optionally, the sending module 820 is further configured to:
and sending a second bit depth parameter corresponding to the image sensor to the image signal processor.
Optionally, the sending module 820 is specifically configured to:
and sequentially transmitting the plurality of first sub-images to the image signal processor at the same first time interval.
Optionally, the sending module 820 is further configured to:
and after the plurality of first sub-images are sent and a second time interval passes, sending a plurality of second sub-images to the image signal processor, wherein the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
According to the image processing device of the embodiment of the disclosure, the acquired first image can be split into a plurality of first sub-images according to the set brightness threshold, and then the plurality of first sub-images can be sequentially sent to the image signal processor. In this way, the image sensor splits the high-brightness image into a plurality of sub-images whose brightness stays within the threshold, so that the image signal processor can process the sub-images normally, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
The embodiment of the disclosure also provides an image processing device, and fig. 9 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 9, the image processing apparatus 90 includes: a receiving module 910, a processing module 920, and a fusing module 930.
The receiving module 910 is configured to receive a plurality of first sub-images, where a luminance value corresponding to each pixel point in each of the first sub-images is less than or equal to a set luminance threshold.
The processing module 920 is configured to process the plurality of first sub-images to generate a plurality of processed images.
The fusion module 930 is configured to fuse the plurality of processed images to generate a fused image.
Optionally, the apparatus further comprises a sending module, configured to:
transmit the set brightness threshold to an image sensor;
or,
send a first bit depth parameter corresponding to the image signal processor to the image sensor.
Optionally, the fusion module 930 is specifically configured to:
determining an image fusion number according to the second bit depth parameter corresponding to the image sensor and the first bit depth parameter corresponding to the image signal processor;
and sequentially fusing the plurality of processed images according to the image fusion number to generate each fused image.
Optionally, the fusing module 930 includes:
a determining unit, configured to determine a first sub-image to be fused according to a receiving time interval between the first sub-images;
and the generating unit is used for fusing the processed images corresponding to the first sub-images to be fused so as to generate the fused images.
Optionally, the determining unit is specifically configured to:
in response to a receiving time interval between any two first sub-images being a first time interval, determine the two first sub-images as first sub-images to be fused;
or,
in response to the receiving time interval between any two first sub-images being a second time interval, determine that the two first sub-images are not first sub-images to be fused;
wherein the first time interval is different from the second time interval.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
The image processing apparatus of this embodiment of the disclosure may first receive a plurality of first sub-images, then process the plurality of first sub-images to generate a plurality of processed images, and then fuse the plurality of processed images to generate a fused image. In this way, the image signal processor can generate a fused image by processing and fusing the received sub-images, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
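As a rough, non-authoritative sketch of this receiving side (the function names, the interval-based grouping rule, and summation as the fusion operation are all assumptions), the following Python fragment groups incoming sub-images by their arrival intervals and fuses each group pixel-wise:

```python
import numpy as np

def group_by_interval(timestamps, first_interval, tol=1e-3):
    """Group sub-image indices: consecutive arrivals separated by the first
    time interval belong to the same source image; any other gap (e.g. the
    second time interval) starts a new group. Illustrative assumption only."""
    groups, current = [], [0]
    for i in range(1, len(timestamps)):
        if abs((timestamps[i] - timestamps[i - 1]) - first_interval) <= tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def fuse_processed_images(processed, groups):
    """Fuse each group of processed sub-images back into one image by summing
    them pixel-wise (a stand-in for the fusion the text leaves unspecified)."""
    return [np.sum(np.stack([processed[i] for i in g]).astype(np.int64), axis=0)
            for g in groups]

# Toy usage: five sub-images of one frame arrive 1 ms apart, then a 10 ms gap.
times = [0.000, 0.001, 0.002, 0.003, 0.004, 0.014, 0.015]
groups = group_by_interval(times, first_interval=0.001)   # -> [[0..4], [5, 6]]
processed = [np.full((2, 2), 100, dtype=np.uint16) for _ in times]
fused = fuse_processed_images(processed, groups)          # two fused frames
```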
The embodiment of the disclosure also provides an image processing device, and fig. 10 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 10, the image processing apparatus 100 includes: a transmitting module 1001, a receiving module 1002, and a processing module 1003.
The sending module 1001 is configured to send the acquired image to the processor.
The receiving module 1002 is configured to receive the compressed image and statistical information corresponding to the image, where a luminance value corresponding to each pixel point in the compressed image is less than or equal to a set luminance threshold.
The processing module 1003 is configured to process the compressed image according to the statistical information to generate a fused image.
Optionally, the sending module 1001 is further configured to:
send a first bit depth parameter corresponding to an image signal processor to the processor;
or,
send the set brightness threshold to the processor.
Optionally, the statistical information includes at least one of: automatic exposure control statistics, automatic focus control statistics, and automatic white balance statistics.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
The image processing apparatus of this embodiment of the disclosure can send the collected image to the processor, receive the image compressed by the processor and the statistical information corresponding to the image, and then process the compressed image according to the statistical information to generate the fused image. In this way, the camera module can process the image according to the compressed image and the statistical information received from the processor, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
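A minimal sketch of this camera-module behaviour, under assumed statistic fields and a simple gain-based processing step (the patent defines neither), might look like the following:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ImageStatistics:
    """Illustrative statistics returned by the processor (field names assumed)."""
    ae_mean_luma: float                     # automatic exposure control statistic
    awb_gains: tuple = (1.0, 1.0, 1.0)      # automatic white balance gains (R, G, B)

def process_compressed(compressed, stats, target_luma=128.0, out_max=255):
    """Process the compressed image with the received statistics -- here just a
    global exposure gain plus per-channel white-balance gains (a sketch only)."""
    gain = target_luma / max(stats.ae_mean_luma, 1e-6)
    out = compressed.astype(np.float64) * gain
    if out.ndim == 3 and out.shape[-1] == 3:
        out *= np.asarray(stats.awb_gains)          # apply (R, G, B) gains
    return np.clip(out, 0, out_max).astype(np.uint8)

# Toy usage on a dim 3-channel image.
img = np.full((4, 4, 3), 40, dtype=np.uint8)
stats = ImageStatistics(ae_mean_luma=float(img.mean()), awb_gains=(1.1, 1.0, 0.9))
result = process_compressed(img, stats)
```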
The embodiment of the disclosure also provides an image processing device, and fig. 11 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 11, the image processing apparatus 110 includes: a receiving module 1101, a parsing module 1102, an obtaining module 1103 and a sending module 1104.
The receiving module 1101 is configured to receive an image sent by the image capturing module.
The parsing module 1102 is configured to parse the image to determine statistical information corresponding to the image.
The obtaining module 1103 is configured to compress the image according to a set brightness threshold, so as to obtain a compressed image, where a brightness value corresponding to each pixel point in the compressed image is less than or equal to the set brightness threshold.
The sending module 1104 is configured to send the compressed image and the statistical information corresponding to the image to the camera module.
Optionally, the receiving module 1101 is further configured to:
and receiving the set brightness threshold sent by the camera module.
Optionally, the receiving module 1101 is further configured to receive a first bit depth parameter corresponding to the image signal processor sent by the image capturing module.
The obtaining module 1103 is further configured to determine the set brightness threshold according to the first bit depth parameter.
The functions and specific implementation principles of the foregoing modules in the embodiments of the present disclosure may refer to the foregoing method embodiments, and are not repeated herein.
According to the image processing apparatus of this embodiment of the disclosure, the processor can first receive the image sent by the camera module, then parse the image to determine the statistical information corresponding to the image, compress the image according to the set brightness threshold to obtain a compressed image, and then send the compressed image and the statistical information corresponding to the image to the camera module. In this way, the camera module can process the image according to the compressed image and the statistical information received from the processor, so that the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
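To make the processor-side flow concrete, the following hedged Python sketch parses simple AE/AWB statistics and models the "compression" as clipping to the set brightness threshold; both the statistic names and the clipping scheme are assumptions, since the patent leaves the exact method unspecified.

```python
import numpy as np

def parse_and_compress(image, brightness_threshold):
    """Parse simple statistics from the image, then compress it so every pixel
    is at or below the set brightness threshold (illustrative sketch only)."""
    stats = {
        "ae_mean_luma": float(image.mean()),        # automatic exposure statistic
        "awb_channel_means": image.reshape(-1, image.shape[-1]).mean(axis=0).tolist()
        if image.ndim == 3 else None,               # automatic white balance statistic
    }
    # "Compression" is modelled here as clipping to the brightness threshold.
    compressed = np.minimum(image, brightness_threshold)
    return compressed, stats

# Toy usage: a 12-bit image compressed to a 10-bit threshold before return.
raw = np.random.randint(0, 4096, size=(4, 6, 3))
compressed, stats = parse_and_compress(raw, brightness_threshold=1023)
assert int(compressed.max()) <= 1023
```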
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 12, the electronic device 120 includes a memory 1210, a processor 1220, and a bus 1230 connecting the different components (including the memory 1210 and the processor 1220).
The memory 1210 is used for storing executable instructions of the processor 1220; the processor 1220 is configured to invoke and execute the executable instructions stored in the memory 1210 to implement the image processing method proposed in the above embodiments of the present disclosure.
Bus 1230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 120 typically includes a variety of electronic device readable media. Such media can be any available media that is accessible by electronic device 120 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 1210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 1240 and/or cache memory 1250. Electronic device 120 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, the storage system 1260 may be used to read from or write to non-removable, non-volatile magnetic media (not shown in FIG. 12, commonly referred to as a "hard disk drive"). Although not shown in fig. 12, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 1230 through one or more data medium interfaces. Memory 1210 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
Program/utility 1280 having a set (at least one) of program modules 1270 may be stored in, for example, memory 1210, such program modules 1270 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 1270 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 120 may also communicate with one or more external devices 1290 (e.g., a keyboard, a pointing device, a display 1291, etc.), with one or more devices that enable a user to interact with the electronic device 120, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 120 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1292. Also, the electronic device 120 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through a network adapter 1293. As shown, the network adapter 1293 communicates with other modules of the electronic device 120 via the bus 1230. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 120, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Processor 1220 executes various functional applications and data processing by executing programs stored in memory 1210.
It should be noted that, the implementation process of the electronic device in the embodiment of the present disclosure refers to the foregoing explanation of the image processing method in the embodiment of the present disclosure, and will not be repeated herein.
According to the electronic device of this embodiment of the disclosure, the collected first image can be split into a plurality of first sub-images according to the set brightness threshold, and the plurality of first sub-images can then be sequentially sent to the image signal processor. In this way, the image sensor divides a high-brightness image into a plurality of sub-images whose brightness values fall within the brightness threshold, so that the image signal processor can process the sub-images normally, and the image sensor and the image signal processor can process the image even when their bit depth parameters do not match.
In order to implement the above-described embodiments, the embodiments of the present disclosure also propose a non-transitory computer-readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method in any of the embodiments described above.
To achieve the above embodiments, the embodiments of the present disclosure further provide a computer program product, which when executed by a processor of an electronic device, enables the electronic device to perform the method of processing an image in any of the embodiments as described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer programs. When the computer program is loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the various ordinal terms such as "first" and "second" referred to in this disclosure are merely for ease of description and are not intended to limit the scope of the embodiments of this disclosure, nor to indicate ordering.
In the present disclosure, "at least one" may also be described as one or more, and "a plurality" may be two, three, four, or more, which is not limited by the present disclosure. In the embodiments of the present disclosure, technical features distinguished by "first", "second", "third", "A", "B", "C", and "D" carry no implication of sequence or magnitude.
The correspondence relationships shown in the tables in the present disclosure may be configured or predefined. The values of the information in each table are merely examples and may be configured as other values, which is not limited by the present disclosure. When configuring the correspondence between the information and each parameter, it is not always necessary to configure all the correspondences shown in each table. For example, the correspondences shown in some rows of the tables in the present disclosure may not be configured. For another example, appropriate adjustments such as splitting and merging may be made based on the above tables. The names of the parameters shown in the tables may be replaced by other names understood by the communication device, and the values or expressions of the parameters may be other values or expressions understood by the communication device. When the tables are implemented, other data structures may also be used, for example, an array, a queue, a container, a stack, a linear table, a pointer, a linked list, a tree, a graph, a structure, a class, a heap, or a hash table.
"Predefined" in the present disclosure may be understood as defined, predefined, stored, pre-negotiated, pre-configured, hard-coded, or pre-burned (e.g., into firmware).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing is merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed herein shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (38)

  1. A method of processing an image, comprising:
    splitting an acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to the set brightness threshold;
    and sequentially transmitting the plurality of first sub-images to an image signal processor.
  2. The method as recited in claim 1, further comprising:
    receiving the first bit depth parameter sent by the image signal processor;
    and determining the set brightness threshold according to the first bit depth parameter.
  3. The method as recited in claim 2, further comprising:
    and determining the number of the plurality of first sub-images according to the first bit depth parameter and the second bit depth parameter corresponding to the image sensor.
  4. The method as recited in claim 2, further comprising:
    and receiving a set brightness threshold value sent by the image signal processor.
  5. The method as recited in claim 1, further comprising:
    and sending a second bit depth parameter corresponding to the image sensor to the image signal processor.
  6. The method according to any one of claims 1-5, wherein sequentially sending the plurality of first sub-images to an image signal processor comprises:
    and sequentially transmitting the plurality of first sub-images to the image signal processor at the same first time interval.
  7. The method as recited in claim 6, further comprising:
    and after the plurality of first sub-images are sent and a second time interval elapses, sending a plurality of second sub-images to the image signal processor, wherein the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
  8. A method of processing an image, comprising:
    receiving a plurality of first sub-images, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to a set brightness threshold value;
    processing the plurality of first sub-images to generate a plurality of processed images;
    and fusing the plurality of processed images to generate a fused image.
  9. The method as recited in claim 8, further comprising:
    transmitting the set brightness threshold to an image sensor;
    or,
    sending a first bit depth parameter corresponding to the image signal processor to the image sensor.
  10. The method of claim 8, wherein fusing the plurality of processed images to generate a fused image comprises:
    determining an image fusion number according to a second bit depth parameter corresponding to the image sensor and a first bit depth parameter corresponding to the image signal processor;
    and sequentially fusing the plurality of processed images according to the image fusion number to generate each fused image.
  11. The method of any of claims 8-10, wherein fusing the plurality of processed images to generate a fused image comprises:
    determining first sub-images to be fused according to the receiving time intervals among the first sub-images;
    and fusing the processed images corresponding to the first sub-images to be fused to generate the fused images.
  12. The method of claim 11, wherein the determining the first sub-image to be fused based on the reception time interval between the respective first sub-images comprises:
    in response to a receiving time interval between any two first sub-images being a first time interval, determining the two first sub-images as first sub-images to be fused;
    or,
    in response to the receiving time interval between any two first sub-images being a second time interval, determining that the two first sub-images are not first sub-images to be fused;
    wherein the first time interval is different from the second time interval.
  13. A method of processing an image, comprising:
    transmitting the acquired image to a processor;
    receiving an image compressed by the processor and statistical information corresponding to the image, wherein a brightness value corresponding to each pixel point in the compressed image is smaller than or equal to a set brightness threshold value;
    and processing the compressed image according to the statistical information to generate a fused image.
  14. The method as recited in claim 13, further comprising:
    transmitting a first bit depth parameter corresponding to an image signal processor to the processor;
    or,
    sending the set brightness threshold to the processor.
  15. The method of claim 13, wherein the statistical information comprises at least one of: automatic exposure control statistics, automatic focus control statistics, and automatic white balance statistics.
  16. A method of processing an image, comprising:
    receiving an image sent by a camera module;
    analyzing the image to determine statistical information corresponding to the image;
    Compressing the image according to a set brightness threshold to obtain a compressed image, wherein a brightness value corresponding to each pixel point in the compressed image is smaller than or equal to the set brightness threshold;
    and sending the compressed image and the statistical information corresponding to the image to the camera module.
  17. The method as recited in claim 16, further comprising:
    and receiving the set brightness threshold sent by the camera module.
  18. The method as recited in claim 16, further comprising:
    receiving a first bit depth parameter corresponding to an image signal processor sent by the camera module;
    and determining the set brightness threshold according to the first bit depth parameter.
  19. An image processing apparatus, comprising:
    the splitting module is used for splitting the acquired first image into a plurality of first sub-images according to a set brightness threshold, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to the set brightness threshold;
    and the transmitting module is used for sequentially transmitting the plurality of first sub-images to the image signal processor.
  20. The apparatus as recited in claim 19, further comprising:
    the receiving module is used for receiving the first bit depth parameter sent by the image signal processor;
    and the determining module is used for determining the set brightness threshold according to the first bit depth parameter.
  21. The apparatus of claim 20, wherein the determination module is further for:
    and determining the number of the plurality of first sub-images according to the first bit depth parameter and the second bit depth parameter corresponding to the image sensor.
  22. The apparatus of claim 20, wherein the receiving module is further for:
    and receiving a set brightness threshold value sent by the image signal processor.
  23. The apparatus of claim 19, wherein the transmitting module is further for:
    and sending a second bit depth parameter corresponding to the image sensor to the image signal processor.
  24. The apparatus according to any of claims 19-23, wherein the sending module is specifically configured to:
    and sequentially transmitting the plurality of first sub-images to the image signal processor at the same first time interval.
  25. The apparatus of claim 24, wherein the transmitting module is further for:
    and after the plurality of first sub-images are sent and a second time interval elapses, sending a plurality of second sub-images to the image signal processor, wherein the plurality of second sub-images are obtained by splitting an acquired second image, and the second time interval is different from the first time interval.
  26. An image processing apparatus, comprising:
    the receiving module is used for receiving a plurality of first sub-images, wherein the brightness value corresponding to each pixel point in each first sub-image is smaller than or equal to a set brightness threshold value;
    the processing module is used for processing the plurality of first sub-images to generate a plurality of processed images;
    and the fusion module is used for fusing the plurality of processed images to generate a fused image.
  27. The apparatus of claim 26, further comprising a transmission module for:
    transmitting the set brightness threshold to an image sensor;
    or,
    sending a first bit depth parameter corresponding to the image signal processor to the image sensor.
  28. The apparatus of claim 26, wherein the fusion module is specifically configured to:
    determining an image fusion number according to a second bit depth parameter corresponding to the image sensor and a first bit depth parameter corresponding to the image signal processor;
    and sequentially fusing the plurality of processed images according to the image fusion number to generate each fused image.
  29. The apparatus of any one of claims 26-28, wherein the fusion module comprises:
    a determining unit, configured to determine a first sub-image to be fused according to a receiving time interval between the first sub-images;
    and the generating unit is used for fusing the processed images corresponding to the first sub-images to be fused so as to generate the fused images.
  30. The apparatus according to claim 29, wherein the determining unit is specifically configured to:
    in response to a receiving time interval between any two first sub-images being a first time interval, determining the two first sub-images as first sub-images to be fused;
    or,
    in response to the receiving time interval between any two first sub-images being a second time interval, determining that the two first sub-images are not first sub-images to be fused;
    wherein the first time interval is different from the second time interval.
  31. An image processing apparatus, comprising:
    the sending module is used for sending the acquired image to the processor;
    The receiving module is used for receiving the image compressed by the processor and the statistical information corresponding to the image, wherein the brightness value corresponding to each pixel point in the compressed image is smaller than or equal to a set brightness threshold value;
    and the processing module is used for processing the compressed image according to the statistical information so as to generate a fused image.
  32. The apparatus of claim 31, wherein the transmitting module is further for:
    transmitting a first bit depth parameter corresponding to an image signal processor to the processor;
    or,
    sending the set brightness threshold to the processor.
  33. The apparatus of claim 31, wherein the statistical information comprises at least one of: automatic exposure control statistics, automatic focus control statistics, and automatic white balance statistics.
  34. An image processing apparatus, comprising:
    the receiving module is used for receiving the image sent by the camera module;
    the analysis module is used for analyzing the image to determine the statistical information corresponding to the image;
    the acquisition module is used for compressing the image according to a set brightness threshold value to acquire a compressed image, wherein the brightness value corresponding to each pixel point in the compressed image is smaller than or equal to the set brightness threshold value;
    And the sending module is used for sending the compressed image and the statistical information corresponding to the image to the camera module.
  35. The apparatus of claim 34, wherein the receiving module is further for:
    and receiving the set brightness threshold sent by the camera module.
  36. The apparatus of claim 34, wherein:
    the receiving module is further used for receiving a first bit depth parameter corresponding to the image signal processor sent by the camera module;
    the obtaining module is further configured to determine the set brightness threshold according to the first bit depth parameter.
  37. An electronic device, comprising:
    a processor;
    a memory for storing executable instructions of the processor;
    wherein the processor is configured to invoke and execute the executable instructions stored by the memory to implement the method of processing an image according to any of claims 1-7, or to implement the method of processing an image according to any of claims 8-12, or to implement the method of processing an image according to any of claims 13-15, or to implement the method of processing an image according to any of claims 16-18.
  38. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, cause the electronic device to perform the method of processing an image according to any one of claims 1-7, or the method of processing an image according to any one of claims 8-12, or the method of processing an image according to any one of claims 13-15, or the method of processing an image according to any one of claims 16-18.
CN202180002388.7A 2021-08-09 2021-08-09 Image processing method and device Pending CN116076081A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/111625 WO2023015422A1 (en) 2021-08-09 2021-08-09 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
CN116076081A true CN116076081A (en) 2023-05-05

Family

ID=85200384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180002388.7A Pending CN116076081A (en) 2021-08-09 2021-08-09 Image processing method and device

Country Status (2)

Country Link
CN (1) CN116076081A (en)
WO (1) WO2023015422A1 (en)

Also Published As

Publication number Publication date
WO2023015422A1 (en) 2023-02-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination