CN115460390A - Image color processing method, image color processing device, vehicle, electronic device, and storage medium - Google Patents

Image color processing method, image color processing device, vehicle, electronic device, and storage medium

Info

Publication number
CN115460390A
Authority
CN
China
Prior art keywords
color
target
image
camera
images
Prior art date
Legal status
Pending
Application number
CN202111389251.XA
Other languages
Chinese (zh)
Inventor
刘锋
李倩
Current Assignee
Beijing Rockwell Technology Co Ltd
Original Assignee
Beijing Rockwell Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Rockwell Technology Co Ltd filed Critical Beijing Rockwell Technology Co Ltd
Priority to CN202111389251.XA
Publication of CN115460390A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The present disclosure provides an image color processing method, an apparatus, a vehicle, an electronic device, and a storage medium. The method includes: acquiring a target image captured by a first camera; acquiring a reference image captured by a second camera adjacent to the first camera, where an overlapping area exists between the target image and the reference image; and determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image, so as to adjust the color of the images captured by the first camera. By determining the target color balance parameter of the images captured by the first camera and using it to adjust the color of images the first camera captures subsequently, the color information of the images captured by multiple cameras in a multi-camera scene is kept consistent, color differences after stitching are avoided, and the visual effect is improved.

Description

Image color processing method, image color processing device, vehicle, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of graphics processing technologies, and in particular, to an image color processing method and apparatus, a vehicle, an electronic device, and a storage medium.
Background
At present, panoramic stitched images are increasingly widely used. For example, while a vehicle is driving, images captured by multiple cameras mounted on the vehicle are stitched into a panoramic surround-view image. Because the regions of the stitched image come from different capture sources, their colors differ, so the resulting panoramic stitched image shows visually obvious seams and a poor stitching effect. How to improve the color consistency of panoramic stitched images is therefore a technical problem to be solved.
Disclosure of Invention
The present disclosure is directed to solving, at least in part, one of the technical problems in the related art.
Therefore, the present disclosure provides an image color processing method, an image color processing apparatus, a vehicle, an electronic device, and a storage medium, so as to reduce color differences in a panoramic stitched image and improve the stitching effect.
An embodiment of one aspect of the present disclosure provides an image color processing method, including:
acquiring a target image acquired by a first camera;
acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image;
and determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image so as to adjust the color of the image acquired by the first camera.
An embodiment of another aspect of the present disclosure provides an image color processing apparatus, including:
the first acquisition module is used for acquiring a target image acquired by the first camera;
the second acquisition module is used for acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image;
and the determining module is used for determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image so as to adjust the color of the image acquired by the first camera.
An embodiment of another aspect of the present disclosure provides an electronic device, including:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the preceding aspect.
In another aspect, an embodiment of the present disclosure provides a vehicle including the electronic device of the preceding aspect.
Another aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method of the preceding aspect.
Another embodiment of the present disclosure provides a computer program product, which includes computer instructions that, when executed by a processor, implement the method of the preceding aspect.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
A target image captured by a first camera is acquired, and a reference image captured by a second camera adjacent to the first camera is acquired, where an overlapping area exists between the target image and the reference image. A target color balance parameter of the target image is determined according to the color information of the overlapping area in the reference image, so as to adjust the color of the target image. This keeps the color information of the images captured by the multiple cameras consistent, avoids color differences after stitching, and improves the visual effect.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image color processing method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a stitched image provided by an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another image color processing method according to an embodiment of the disclosure;
fig. 4 is a schematic flowchart of another image color processing method provided in the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image color processing apparatus according to an embodiment of the present disclosure; and
fig. 6 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the present disclosure, and should not be construed as limiting the present disclosure.
An image color processing method, an apparatus, a vehicle, an electronic device, and a storage medium of the embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of an image color processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the method comprises the steps of:
step 101, acquiring a target image acquired by a first camera.
The first camera may be any one of a plurality of cameras. The names "first camera" and "second camera" used below are only for ease of distinction and do not indicate priority.
In the embodiment of the disclosure, the target image captured by the first camera is an image whose color balance needs to be adjusted. The target image may be a single frame captured at one moment or multiple frames captured at multiple moments, that is, a group of images. The acquired target image depends on the capture frequency of the camera, which may be, for example, one frame every 10 seconds, or multiple frames per second (for example, 10 frames per second); this is not limited here. In other words, the frames captured by the first camera in the embodiment of the present disclosure need not be consecutive, which improves the processing efficiency on the vehicle, or they may be consecutive frames to obtain a better processing effect.
For example, suppose there are 4 cameras and camera 1 is taken as the first camera. In one scene, the target image captured by the first camera is a single frame. In another scene, the target image is 5 frames captured by the first camera at 5 moments, denoted S11, S12, S13, S14, and S15; that is, images S11 to S15 together form the target image, which is a group of images captured by the first camera. In other words, the images captured by the same camera form one group.
Step 102, acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image.
The reference image is an image captured by a second camera adjacent to the first camera. There may be one or more second cameras, so there may be one or more groups of reference images, each group containing one or more frames. The capture time of the reference image corresponds to that of the target image: each frame of the target image corresponds to a frame of the reference image captured either at the same moment or within a preset time difference.
In the embodiment of the disclosure, the first camera and the second camera are adjacent. To enable panoramic image stitching, the target image captured by the first camera and the reference image captured by the second camera cover a common image area, and the overlapping area between the target image and the reference image can be determined from this common area. As an example, FIG. 2 is a schematic diagram of a stitched image provided in an embodiment of the present application, in which 4 cameras respectively capture a front-view image, a left-view image, a right-view image, and a rear-view image. As shown in the left part of FIG. 2, in a top-view scene the left-view captured image is the target image and the corresponding reference image is the front-view captured image; their overlapping area is the region indicated by A in the left part of FIG. 2, which, when unfolded, corresponds to area A1 in the left-view image and area A2 in the front-view image shown in the right part of FIG. 2. For a group of left-view captured images and the corresponding group of front-view reference images, the overlapping area at each capture moment is likewise the region indicated by A in FIG. 2, i.e., A1 in the left-view image and A2 in the front-view image.
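To make the overlap extraction concrete, the following is a minimal sketch (not the patent's own code) of locating the overlapping area between two top views that have already been warped into the same world-coordinate frame; it assumes that a pixel belongs to a camera's footprint when it is non-black, and all function and variable names are hypothetical.

```python
import numpy as np

def overlap_mask(top_view_a: np.ndarray, top_view_b: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels covered by both warped top views.

    Assumes both images are already mapped into the same world-coordinate
    top view, so a pixel belongs to the overlap when it is non-empty
    (non-black) in both images. This is an illustrative simplification.
    """
    valid_a = top_view_a.sum(axis=2) > 0   # footprint of camera A
    valid_b = top_view_b.sum(axis=2) > 0   # footprint of camera B
    return valid_a & valid_b

# Usage: per-channel mean color of the overlap (region A / A1 / A2 in FIG. 2).
# left_top and front_top are H x W x 3 arrays in the shared top-view frame.
# mask = overlap_mask(left_top, front_top)
# mean_left  = left_top[mask].mean(axis=0)    # mean of area A1
# mean_front = front_top[mask].mean(axis=0)   # mean of area A2
```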
Step 103, determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image so as to adjust the color of the image acquired by the first camera.
The color information comprises a plurality of color channels, namely the red (R), green (G), and blue (B) channels.
In the embodiment of the present disclosure, the color information of the overlapping area in the reference image depends on the target color balance parameter associated with the reference image: the reference image may use its own unadjusted color information, color information adjusted by a set target color balance parameter, or color information adjusted by a target color balance parameter determined in a previous round. The color information of the overlapping area of the reference image is therefore determined in different ways in different scenarios, which are described below:
in the first scenario, the color information of the reference image may be the color information of the reference image itself, that is, in an initial stage of performing color equalization on the images acquired by the multiple cameras, an image acquired by any one of the multiple cameras is taken as the reference image, and thus, the color information of the overlapping area of the reference image is the color information of the overlapping area of the reference image itself.
For example, as shown in FIG. 2, for the 4 frames captured by the 4 cameras, one frame may be selected as the reference image, for example the front-view captured image; the color information of its overlapping area is then its own, unadjusted, color information.
In the second scenario, the color information of the reference image is the color information adjusted according to a set target color balance parameter; that is, in the initial stage of color balancing the images captured by the multiple cameras, an image captured by any one of the cameras is taken as the reference image and its color information is adjusted according to the set target color balance parameter. It should be noted that the target color balance parameter set for the reference image may be chosen by a person skilled in the art according to business or scene requirements and is not limited in this embodiment.
In the third scenario, the reference image has a target color balance parameter that was determined previously; the color information of the overlapping area of the reference image is then adjusted according to that determined target color balance parameter, so as to obtain the adjusted color information of the overlapping area in the reference image.
Further, when the reference image has a corresponding target color balance parameter, in one implementation of the embodiment of the present disclosure, the color information of each pixel in the overlapping area of each group of reference images in the color space, that is, the RGB space, is adjusted by the corresponding target color balance parameter to obtain the adjusted color information of each pixel. The color mean of the overlapping area in the adjusted reference image is then determined from the adjusted pixel values and used as the color information of the overlapping area in the reference image.
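A minimal sketch of this step follows, assuming the reference image already has per-channel parameters (a, b) from a previous round; the array shapes and names are illustrative and not taken from the patent.

```python
import numpy as np

def adjusted_overlap_mean(ref_img: np.ndarray,
                          overlap_mask: np.ndarray,
                          a: np.ndarray,
                          b: np.ndarray) -> np.ndarray:
    """Apply the reference image's target color balance parameters to the
    pixels of its overlapping area and return the per-channel color mean.

    ref_img:      H x W x 3 image in RGB space
    overlap_mask: H x W boolean mask of the overlapping area
    a, b:         length-3 arrays, slope and intercept per RGB channel
    """
    pixels = ref_img[overlap_mask].astype(np.float64)   # N x 3 overlap pixels
    adjusted = pixels * a + b                            # per-channel y = a*x + b
    adjusted = np.clip(adjusted, 0, 255)                 # keep values in range (assumed 8-bit)
    return adjusted.mean(axis=0)                         # color mean of the overlap
```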
In the embodiment of the present disclosure, the target image captured by the first camera and the reference image captured by the adjacent second camera share an overlapping shooting area. Because the lens parameters of the first and second cameras may differ, or because their different mounting positions expose them to different lighting during capture, the colors of the overlapping area differ between the images captured by the different cameras. The colors of images captured by adjacent cameras therefore need to be balanced, that is, adjusted so that the image colors are consistent. For example, after color balancing, corresponding pixels in the overlapping area of the left-view image and the front-view image should have the same or similar values in the color space, that is, the RGB space. As one implementation, when the capture frequency of the camera is low, for example when the acquired target image is a single frame, the target color balance parameter of the target image can be determined according to the color information of the overlapping area of the corresponding reference image; this parameter makes the color of the target image captured by the first camera consistent with that of the corresponding reference image, improving the accuracy of the target color balance parameter.
As another implementation, when the capture frequency of the camera is high, for example when the acquired target image is a group of images, that is, multiple frames, the target color balance parameters of the multi-frame target image can be determined by linear fitting according to the color information of the overlapping areas of the corresponding multi-frame reference images. Because multiple reference frames contain richer color information, this improves the accuracy of the determined target color balance parameters compared with using a single reference frame.
For a scene with multiple cameras, for example when the images captured by the multiple cameras are stitched into a panoramic surround view, the above process is repeated: the target balance parameters of the images captured by each camera are obtained by fitting according to the color information of the overlapping area with its reference image. This propagation through the reference images keeps the color information of the images captured by the multiple cameras consistent and improves the visual effect.
The color balance parameters of the target image acquired by the first camera can also be used for performing color balance adjustment of each color channel on the image acquired subsequently by the first camera.
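For illustration, a short sketch of how the determined parameters might be applied per channel to frames the first camera captures afterwards; clipping to the 8-bit range is an assumption and not stated in the patent.

```python
import numpy as np

def apply_color_balance(frame: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Adjust each RGB channel of a subsequently captured frame with the
    camera's target color balance parameters (slope a, intercept b)."""
    balanced = frame.astype(np.float64) * a + b   # broadcast over the 3 channels
    return np.clip(balanced, 0, 255).astype(frame.dtype)
```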
In the image color processing method of the embodiment of the disclosure, a target image captured by a first camera and a reference image captured by a second camera adjacent to the first camera are acquired, where an overlapping area exists between the target image and the reference image, and a target color balance parameter of the target image is determined according to the color information of the overlapping area in the reference image so as to adjust the color of the target image. The target color balance parameter determined for each first camera can also be used to adjust the color of the images it captures subsequently, so that the color information of the images captured by the multiple cameras remains consistent, color differences after stitching are avoided, and the visual effect is improved.
Based on the foregoing embodiments, an embodiment of the present disclosure provides another image color processing method, and fig. 3 is a schematic flowchart of the another image color processing method provided by the embodiment of the present disclosure, as shown in fig. 3, the method includes the following steps:
step 301, acquiring a target image acquired by a first camera.
Step 302, acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image.
In step 301 and step 302, the explanation in the foregoing embodiment can be referred to, the principle is the same, and details are not repeated in this embodiment.
The embodiment of the present disclosure provides a scene, for example a vehicle-mounted surround-view stitching scene, in which the surround-view stitched image is obtained by stitching images captured by multiple cameras arranged on the vehicle. As an example, the stitched image is obtained from the images captured by 4 cameras mounted around the vehicle; the cameras may be, for example, fisheye cameras, which have a large field of view. The stitched panoramic image is described below using the top-view angle shown in FIG. 2 as an example.
Step 303, determining whether the reference images corresponding to the target image form one group; if yes, executing step 304, and if not, executing step 305.
In the embodiment of the present disclosure, in the surround-view stitched image each camera has two adjacent cameras, so either one or both of the images captured by those two adjacent cameras may be used as its reference images, as described below:
In an implementation manner of the embodiment of the present disclosure, if the reference image is one group of images captured by one second camera, that is, the group of target images captured by the first camera corresponds to one group of reference images, step 304 is executed.
In another implementation manner of the embodiment of the present disclosure, if the reference images are two groups of images captured by two second cameras, that is, the group of target images captured by the first camera corresponds to two groups of reference images, step 305 is executed.
Step 304, determining target color balance parameters by adopting a linear fitting mode according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images.
In the embodiment of the present disclosure, the target image is a group of target images captured by the first camera, and the reference image is a group of reference images captured by the second camera. Each image in the group of target images has the same capture time as the corresponding image in the group of reference images. Capturing the target images and reference images at the same moment ensures that they are captured in the same scene, reduces the influence of differing external environments, and improves accuracy when the target color balance parameters of the target images are determined from the reference images.
In the embodiment of the present disclosure, the images acquired by the 4 cameras in fig. 2 are taken as an example for explanation under a top view angle, and the principle is the same for other angles, which is not described in detail in this embodiment.
The left camera is used as the first camera; its adjacent cameras are the front camera and the rear camera. For a group of target images captured by the left camera, the corresponding reference image is determined to be a group of images captured by the front camera. For example, the multi-frame reference images captured by the front camera at multiple moments are C11, C12, and C13, and the multi-frame target images captured by the left camera at the same moments are C21, C22, and C23, where an overlapping area, such as the region A shown in FIG. 2, exists between the top view corresponding to each front-view reference frame and the top view corresponding to each left-view target frame. The fitting formula is Y = Y1 × a + b, where Y is the color value of a color channel, for example the red (R) channel, in the overlapping area of the reference image, Y1 is the color value of the same channel in the overlapping area of the target image captured by the first camera, and a and b are the target color balance parameters of that channel, used to adjust the color of that channel in the images captured by the first camera toward the reference image. In the embodiment of the present disclosure, one of the three RGB color channels, for example the red channel, is taken as an example. Suppose that, in the group of reference images, the red mean of the overlapping area of reference frame C11 is R11, of C12 is R12, and of C13 is R13, and that, in the group of target images, the red mean of the overlapping area of target frame C21 is R21, of C22 is R22, and of C23 is R23. Substituting each target frame in the group of target images captured by the first camera and the corresponding reference frame into the fitting formula gives:
R11 = R21 × a_Rl + b_Rl
R12 = R22 × a_Rl + b_Rl
R13 = R23 × a_Rl + b_Rl
The red-channel color balance parameters a_Rl and b_Rl of the target image corresponding to the left camera are solved by linear fitting, for example by fitting a straight line with the least squares method or Random Sample Consensus (RANSAC). Similarly, the color balance parameters of the blue color channel and the color balance parameters of the green color channel can be obtained by fitting.
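A sketch of the per-channel fit described above, assuming the overlap color means of the reference frames (R11, R12, R13) and of the target frames (R21, R22, R23) have already been computed; np.polyfit is used here as one possible least-squares solver, and a RANSAC variant could be substituted as the text suggests. The function name is hypothetical.

```python
import numpy as np

def fit_channel_balance(ref_means, target_means):
    """Fit Y = Y1 * a + b for one color channel by least squares.

    ref_means:    overlap color means of the reference frames, e.g. [R11, R12, R13]
    target_means: overlap color means of the target frames,    e.g. [R21, R22, R23]
    Returns (a, b) such that ref ≈ a * target + b.
    """
    a, b = np.polyfit(np.asarray(target_means, dtype=float),
                      np.asarray(ref_means, dtype=float), deg=1)
    return a, b

# Example for the red channel of the left camera (values are placeholders):
# a_Rl, b_Rl = fit_channel_balance([R11, R12, R13], [R21, R22, R23])
# The green and blue channels are fitted in the same way.
```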
It should be noted that, in the embodiment of the present disclosure, a group of 3 frames is used for fitting. In practical applications, a group containing more frames of image data may be used for fitting, for example to improve the accuracy of the fitted color balance parameters.
Step 305, determining a first candidate color balance parameter by adopting a linear fitting mode according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images.
In an implementation manner of the embodiment of the present disclosure, it is necessary to determine the candidate color balance parameters of the group of target images captured by the first camera according to the two groups of reference images, respectively.
The method for determining the candidate color balance parameters of the target image captured by the first camera according to each group of reference images may refer to the description of step 304; the principle is the same and is not repeated in this embodiment. When the candidate color balance parameters are fitted against a reference image whose own target color balance parameters have already been determined, the color information of the overlapping area of that reference image is the color information adjusted with those determined parameters. In this way the color information is propagated through the reference images, and color balancing of the images captured by the multiple cameras keeps their color adjustment consistent.
Step 306, determining a second candidate color balance parameter by using a linear fitting manner according to the color information of the overlapping region in the other group of reference images and the color information of the overlapping region in the group of target images.
Similarly, reference may be made to the description in step 305, and details are not repeated in this embodiment.
Step 307, determining target color balance parameters of a group of target images according to the first candidate color balance parameters and the second candidate color balance parameters so as to adjust the color of the image acquired by the first camera.
In an implementation manner of the embodiment of the present disclosure, after determining two candidate color balance parameters of a group of target images acquired by a first camera, that is, determining a first candidate color balance parameter and a second candidate color balance parameter, averaging the first candidate color balance parameter and the second candidate color balance parameter to obtain the target color balance parameters of the group of images acquired by the one camera.
In the embodiment of the present disclosure, to improve the accuracy of the target color balance parameters of the group of images captured by one camera, whether the target color balance parameters of the group of target images captured by the first camera are usable is determined according to the difference between the two corresponding candidate color balance parameters, as described below:
in a first implementation manner of the embodiment of the present disclosure, if the candidate color balance parameter is a slope, determining a first ratio between a slope in the first candidate color balance parameter and a slope in the second candidate color balance parameter, and if the first ratio belongs to a set interval in a scene, taking an average value of the slope in the first candidate color balance parameter and the slope in the second candidate color balance parameter as a target color balance parameter of a group of target images acquired by the first camera; and if the first ratio does not belong to the set interval, the accuracy of the first candidate color balance parameter and the second candidate color balance parameter is considered to be poor and not meet the set requirement, and the target color balance parameter of the target image is determined to be abandoned.
In a second implementation manner of the embodiment of the present disclosure, if the candidate color balance parameter is an intercept, determining a second ratio between the intercept in the first candidate color balance parameter and the intercept in the second candidate color balance parameter, and if the second ratio belongs to a set interval in a scene, taking an average value of the intercept in the first candidate color balance parameter and the intercept in the second candidate color balance parameter as a target color balance parameter of a group of target images acquired by the first camera; and if the second ratio does not belong to the set interval, the accuracy of the first candidate color balance parameter and the second candidate color balance parameter is considered to be poor and not meet the set requirement, and the target color balance parameter of the target image is determined to be abandoned.
In a third implementation manner of the embodiment of the present disclosure, the candidate color equalization parameters are two, that is, two linear parameters obtained by linear fitting are slope a and intercept b, respectively, a first ratio between a slope in the first candidate color equalization parameter and a slope in the second candidate color equalization parameter is determined, and a second ratio between an intercept in the first candidate color equalization parameter and an intercept in the second candidate color equalization parameter is determined, where in a case where both the first ratio and the second ratio belong to a set interval, an average value of the slope in the first candidate color equalization parameter and the slope in the second candidate color equalization parameter is used as a slope in a target color equalization parameter of a group of target images acquired by the first camera, and an average value of the intercept in the first candidate color equalization parameter and the intercept in the second candidate color equalization parameter is used as an intercept in a target color equalization parameter of a group of target images acquired by the first camera.
It should be noted that, because the color balance parameters include parameters for each color channel, the candidate color balance parameters also include parameters for each color channel. When the first ratio of the slopes and/or the second ratio of the intercepts is checked, the ratio must fall within the set interval for every color channel before the slope and intercept of the target color balance parameters of each channel of the target image are determined; otherwise the determination is abandoned.
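The sketch below combines the two candidate parameter sets as described in the three implementations above; the interval bounds lo and hi stand in for the "set interval" the patent leaves open (0.8-1.2 appears only later as an example), and the data layout and function name are assumptions.

```python
def merge_candidates(cand1, cand2, lo=0.8, hi=1.2):
    """Merge two per-channel candidate parameter sets {channel: (a, b)}.

    The slope ratio and intercept ratio must fall inside [lo, hi] for every
    color channel; otherwise the parameters for this round are discarded
    (None is returned).
    """
    merged = {}
    for ch in cand1:
        a1, b1 = cand1[ch]
        a2, b2 = cand2[ch]
        if a2 == 0 or b2 == 0:               # degenerate fit, cannot form a ratio
            return None
        if not (lo <= a1 / a2 <= hi and lo <= b1 / b2 <= hi):
            return None                      # accuracy requirement not met
        merged[ch] = ((a1 + a2) / 2.0,       # slope of the target parameter
                      (b1 + b2) / 2.0)       # intercept of the target parameter
    return merged
```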
In the embodiment of the present disclosure, the determined target color balance parameter of the target image acquired by the first camera is used to adjust the color of the subsequent image acquired by the first camera. Under the scene that a plurality of cameras exist, the plurality of cameras adjust the color of the subsequently acquired images according to the corresponding target color balance parameters, so that the color information of each image is kept consistent, and the display effect is improved.
It should be noted that, in the embodiment of the present disclosure, the target color balance parameter of each camera may be determined according to the above image color processing method based on the set period, so as to improve the reliability of balance adjustment performed on the color of the acquired image by each camera.
In the image color processing method of the embodiment of the disclosure, the target color balance parameters of the images captured by each camera are obtained by fitting according to the color information of the overlapping area with the reference images. When the images captured by one camera correspond to two groups of reference images, the reliability of this round of target color balance parameters is checked using the difference between the two candidate color balance parameters determined from the two groups. This improves the accuracy of the color adjustment across the multiple cameras, avoids color differences in the image stitched from the images captured by the multiple cameras, and improves the visual effect.
Based on the foregoing embodiments, an embodiment of the present disclosure provides another image color processing method, and fig. 4 is a schematic flowchart of the another image color processing method provided by the embodiment of the present disclosure, as shown in fig. 4, the method includes the following steps:
step 401, acquiring a target image acquired by a first camera.
Step 402, acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image.
Specifically, reference may be made to the description in the foregoing embodiments, and the principle is the same, which is not described herein again.
Step 403, mapping the target image and the reference image from the image coordinate system to the world coordinate system according to the mapping relation between the image coordinate system and the world coordinate system to obtain an overlapping area between the target image and the reference image.
In the embodiment of the present disclosure, the images collected by the multiple cameras are used for performing image stitching to obtain a stitched image, and the stitched image may be a closed-loop panoramic all-around stitched image or a non-closed-loop stitched image.
In the embodiment of the disclosure, after the target image captured by the first camera at at least one moment and the reference image captured by the adjacent second camera at the corresponding moment are mapped from the image coordinate system into the world coordinate system, as one implementation a stitched image is formed from the frame captured by each camera at each moment according to the relative position information between the cameras or a predefined stitching order. The overlapping area between the captured images is then determined according to their stitching positions in the stitched image. For the target image captured by any first camera, the color balance parameters of that target image are determined according to the color information of the overlapping area in the target image and the color information of the overlapping area in the corresponding reference image, so as to perform color adjustment on each target image. The color balance parameters of the target image captured by the first camera can also be used to perform color balance adjustment of each color channel on images subsequently captured by the first camera.
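As a sketch under assumed calibration data, the following shows one way a fisheye frame could be mapped into the shared world-coordinate top view before the overlaps are extracted; K and D (fisheye intrinsics and distortion) and H (ground-plane homography) are assumed to come from the calibration mentioned above and are not defined by the patent, and the function name is hypothetical.

```python
import cv2
import numpy as np

def to_top_view(frame: np.ndarray, K: np.ndarray, D: np.ndarray,
                H: np.ndarray, out_size=(800, 800)) -> np.ndarray:
    """Undistort a fisheye frame and warp it onto the ground plane
    (world-coordinate top view). K, D, H are pre-calibrated."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    return cv2.warpPerspective(undistorted, H, out_size)
```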
Step 404, determining whether the reference images corresponding to the group of target images form one group; if yes, executing step 405, and if not, executing step 406.
Step 405, determining target color balance parameters by adopting a linear fitting mode according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images.
Step 406, determining a first candidate color balance parameter by using a linear fitting manner according to the color information of the overlapping region in the group of reference images and the color information of the overlapping region in the group of target images.
Step 407, determining a second candidate color balance parameter by using a linear fitting manner according to the color information of the overlapping region in the other group of reference images and the color information of the overlapping region in the group of target images.
Step 408, determining target color balance parameters of a group of target images according to the first candidate color balance parameters and the second candidate color balance parameters so as to adjust the color of the image acquired by the first camera.
Steps 404 to 408 may refer to the explanations in the foregoing embodiments; the principle is the same and is not repeated in this embodiment. In the image color processing method of the embodiment of the disclosure, each camera captures multiple consecutive frames, and each captured frame is mapped into the world coordinate system to determine the overlapping area between the images captured by a group of cameras, so that the color balance parameters of each captured image are determined by fitting based on the overlapping area, improving the accuracy of the color balance.
In the image color processing method of the embodiment of the disclosure, at least one frame of the target image captured by each first camera is mapped into the world coordinate system to determine the overlapping area between the at least one target image captured by the first camera and the corresponding reference image, so that the color balance parameters of each target image are determined by fitting based on the overlapping area, improving the accuracy of the color balance.
Based on the above embodiments, in order to further clearly describe the above embodiments, the embodiments of the present disclosure describe the above image color processing method with reference to the scene of fig. 2.
In the embodiment of the present disclosure, the fitting formula is Y = Y1 × a + b, where Y is color information of an overlapping region of one captured image in the reference image, and Y1 is color information of an overlapping region of one frame of the target image for which the target color balance parameter needs to be determined.
In the embodiment of the disclosure, images in the four directions of front, rear, left, and right are captured at at least one moment by cameras arranged on the four sides of the vehicle body. The 4 original captured images at each moment are read, together with the calibrated intrinsic and extrinsic parameters of the fisheye cameras. The original images in the four directions are then mapped into the world coordinate system according to the mapping relationship between the image coordinate system and the world coordinate system, giving 4 captured images in the world coordinate system and the corresponding top views at the top-view angle, that is, the panoramic surround top view shown in FIG. 2. An overlapping area exists between adjacent top views, and this overlapping area may be irregular.
In the embodiment of the disclosure, before the color balance parameters are determined, the color of the image captured by each camera is the color of the original captured image, and the colors of the stitched image are not uniform. The image captured by any camera, that is, the first camera, can therefore be taken as the target image whose color balance parameters need to be determined; the reference image corresponding to that target image is determined, and the color balance coefficients of the target image are determined based on the reference image. Specifically, the left-view top view and the front-view top view are adjacent top views. When determining the color balance parameters of the left-view top view, the front-view top view is used as the reference image, its color information is used as the reference, and the target color balance parameters of the left-view top view are determined by fitting according to the color information of the overlapping area between the multiple frames of front-view top views and left-view top views.
Furthermore, after the target color balance parameters of the left-view top view and of the right-view top view have been determined, the target color balance parameters of the rear-view top view still need to be determined. The reference images of the rear-view top view are the left-view and right-view top views whose target color balance parameters have already been determined. The color information of each frame of the left-view top view is therefore updated with the determined target color balance parameters, the overlapping area between the multiple frames of left-view and rear-view top views is determined, and the first candidate color balance parameter of the rear-view top view is fitted from the color information of that overlapping area. Similarly, the second candidate color balance parameter of the rear-view top view is fitted from the updated color information of the overlapping area between the multiple frames of right-view and rear-view top views.
In an example of the embodiment of the present disclosure, the color balance parameters include a slope and an intercept; for example, the first candidate color balance parameter includes a slope ai1 and an intercept bi1, and the second candidate color balance parameter includes a slope ai2 and an intercept bi2, where i ranges from 1 to 3 and corresponds to the three color channels R, G, and B. For each color channel, the ratio of ai1 to ai2 is compared against a set range, and likewise the ratio of bi1 to bi2, for example a set range of 0.8-1.2; the set ranges for the slope ratio and the intercept ratio may also differ between color channels, which is not limited in this embodiment.
In one scenario, when both ratios of every channel, that is, the slope ratio and the intercept ratio, fall within the set range, the accuracy of the color balance parameters of each color channel is determined to meet the requirement, and the colors of the images subsequently captured by each camera are adjusted with these parameters, so that the colors in the subsequently obtained panoramic surround images remain consistent and the stitching effect of the panoramic surround view is improved. In another scenario, if either ratio of any color channel, that is, the slope ratio or the intercept ratio of that channel, does not fall within the set range, the accuracy of the target color balance parameters of the top views is determined not to meet the requirement, and the parameters are not used to adjust the colors of the images subsequently captured by the cameras.
As an implementation, for adjusting the colors of the images subsequently captured by each camera, if historical color balance parameters exist for the top views, those historical parameters can be used to adjust the colors of the panoramic surround view stitched from the top views; if no historical color balance parameters exist, the colors of the top views are not adjusted, that is, the original images are output, the multiple cameras are driven to capture images at at least one moment again, and the color balance processing method described above is applied again, so as to improve the reliability of the color balance adjustment.
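As a hedged sketch of the fallback described in this example, assuming a `history` cache of the last accepted parameters per camera; the retry trigger is represented only by a flag, since the patent does not specify how re-capture is driven, and all names are hypothetical.

```python
def choose_parameters(new_params, history):
    """Decide which color balance parameters to use for this round.

    new_params: parameters from this round, or None if the slope/intercept
                ratio check failed.
    history:    last accepted parameters, or None if none exist.
    Returns (params_to_apply, need_recapture).
    """
    if new_params is not None:
        return new_params, False          # ratios in range: use and cache them
    if history is not None:
        return history, False             # fall back to historical parameters
    return None, True                     # output originals and re-capture
```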
In order to implement the above embodiments, the present disclosure also provides an image color processing apparatus.
Fig. 5 is a schematic structural diagram of an image color processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus includes:
the first obtaining module 51 is configured to obtain a target image collected by the first camera.
A second obtaining module 52, configured to obtain a reference image acquired by a second camera adjacent to the first camera, where an overlapping area exists between the target image and the reference image.
The determining module 53 is configured to determine a target color balance parameter of the target image according to the color information of the overlapping area in the reference image, so as to adjust the color of the image acquired by the first camera.
Further, as a possible implementation manner, the target image is a group of target images acquired by the first camera, the reference image is a group of reference images acquired by the second camera, and the determining module 53 is specifically configured to:
and determining the target color balance parameters by adopting a linear fitting mode according to the color information of the overlapped area in the group of reference images and the color information of the overlapped area in the group of target images.
As a possible implementation manner, the determining module 53 is specifically further configured to:
when the reference images are two groups of reference images respectively acquired by two second cameras, determining a first candidate color balance parameter by adopting a linear fitting mode according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images;
determining a second candidate color balance parameter by adopting a linear fitting mode according to the color information of the overlapping region in the other group of reference images and the color information of the overlapping region in the group of target images;
and determining target color balance parameters of the group of target images according to the first candidate color balance parameters and the second candidate color balance parameters.
As a possible implementation manner, the determining module 53 is specifically further configured to:
determining a ratio between the first candidate color equalization parameter and the second candidate color equalization parameter;
and taking the average value of the first candidate color balance parameter and the second candidate color balance parameter as the target color balance parameter of the group of target images under the condition that the ratio belongs to a set interval.
As a possible implementation manner, the candidate color equalization parameter includes a slope and/or an intercept obtained by linear fitting, and the determining module 53 is further configured to:
determining a first ratio between a slope in the first candidate color equalization parameter and a slope in the second candidate color equalization parameter and/or determining a second ratio between an intercept in the first candidate color equalization parameter and an intercept in the second candidate color equalization parameter;
and under the condition that the first ratio and/or the second ratio belong to a set interval, taking the average value of the two slopes as the slope of the target color balance parameters of the group of target images, and/or taking the average value of the two intercepts as the intercept of the target color balance parameters of the group of target images.
As a possible implementation, any image in a set of target images has the same acquisition time as the corresponding image in the set of reference images.
As a possible implementation manner, the apparatus further includes:
and the mapping module is used for mapping the target image and the reference image from the image coordinate system to the world coordinate system according to the mapping relation between the image coordinate system and the world coordinate system to obtain an overlapping area between the target image and the reference image.
As a possible implementation manner, the color information includes a plurality of color channels, and the color equalization parameter includes a color equalization parameter of each color channel. It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of the embodiment, and is not repeated herein.
In the image color processing apparatus of the embodiment of the disclosure, a target image captured by a first camera and a reference image captured by a second camera adjacent to the first camera are acquired, where an overlapping area exists between the target image and the reference image, and a target color balance parameter of the target image is determined according to the color information of the overlapping area in the reference image, so as to adjust the color of the images captured by the first camera. By determining the target color balance parameter of the images captured by the first camera, the color of the images it captures subsequently can also be adjusted.
In order to implement the above embodiments, an embodiment of the present disclosure provides an electronic device, including:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the foregoing method embodiments.
In order to achieve the above embodiments, the embodiments of the present disclosure propose a vehicle including the electronic device in the foregoing embodiments.
To achieve the above embodiments, the embodiments of the present disclosure propose a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method described in the foregoing method embodiments.
To implement the above embodiments, the present disclosure provides a computer program product including computer instructions, which when executed by a processor implement the method of the foregoing method embodiments.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 10 includes a processor 11, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 12 or a program loaded from a Memory 16 into a Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 are also stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An Input/Output (I/O) interface 15 is also connected to the bus 14.
The following components are connected to the I/O interface 15: a memory 16 including a hard disk and the like; and a communication section 17 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like, the communication section 17 performing communication processing via a network such as the Internet. A drive 18 is also connected to the I/O interface 15 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program, carried on a computer readable medium, containing program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 17. The computer program performs the above-mentioned functions defined in the method of the present disclosure when executed by the processor 11.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as the memory 16 comprising instructions, executable by the processor 11 of the electronic device 10 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the embodiments or examples, and the features of the different embodiments or examples, described in this specification without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "plurality" means at least two, e.g., two or three, unless explicitly defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present disclosure, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present disclosure pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present disclosure have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (18)

1. An image color processing method, comprising:
acquiring a target image acquired by a first camera;
acquiring a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image;
and determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image so as to adjust the color of the image acquired by the first camera.
2. The method of claim 1, wherein the target image is a group of target images acquired by the first camera, and the reference image is a group of reference images acquired by the second camera; and the determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image comprises:
determining the target color balance parameter by linear fitting according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images.
3. The method of claim 2, wherein the determining a target color balance parameter of the target image according to the color information of the overlapping area in the reference image comprises:
when the reference images are two groups of reference images respectively acquired by two second cameras, determining a first candidate color balance parameter by linear fitting according to the color information of the overlapping area in one group of reference images and the color information of the overlapping area in the group of target images;
determining a second candidate color balance parameter by linear fitting according to the color information of the overlapping area in the other group of reference images and the color information of the overlapping area in the group of target images;
and determining target color balance parameters of the group of target images according to the first candidate color balance parameters and the second candidate color balance parameters.
4. The method of claim 3, wherein the determining target color balance parameters of the group of target images according to the first candidate color balance parameter and the second candidate color balance parameter comprises:
determining a ratio between the first candidate color balance parameter and the second candidate color balance parameter;
and taking the average value of the first candidate color balance parameter and the second candidate color balance parameter as the target color balance parameter of the group of target images under the condition that the ratio belongs to a set interval.
5. The method of claim 3, wherein the candidate color balance parameters comprise a slope and/or an intercept obtained from the linear fitting, and the determining target color balance parameters of the group of target images according to the first candidate color balance parameter and the second candidate color balance parameter comprises:
determining a first ratio between the slope in the first candidate color balance parameter and the slope in the second candidate color balance parameter, and/or determining a second ratio between the intercept in the first candidate color balance parameter and the intercept in the second candidate color balance parameter;
and under the condition that the first ratio and/or the second ratio belong to a set interval, taking the average value of the two slopes as the slope of the target color balance parameters of the group of target images, and/or taking the average value of the two intercepts as the intercept of the target color balance parameters of the group of target images.
6. The method of claim 2, wherein any image in the group of target images has the same acquisition time as a corresponding image in the group of reference images.
7. The method of any one of claims 1-6, wherein said obtaining a reference image acquired by a second camera adjacent to the first camera comprises:
mapping the target image and the reference image from an image coordinate system to a world coordinate system according to a mapping relation between the image coordinate system and the world coordinate system to obtain the overlapping area between the target image and the reference image.
8. The method of any of claims 1-6, wherein the color information comprises a plurality of color channels, and the color balance parameters comprise a color balance parameter for each color channel.
9. An image color processing apparatus characterized by comprising:
a first acquisition module, configured to acquire a target image acquired by a first camera;
a second acquisition module, configured to acquire a reference image acquired by a second camera adjacent to the first camera, wherein an overlapping area exists between the target image and the reference image;
and a determining module, configured to determine a target color balance parameter of the target image according to the color information of the overlapping area in the reference image, so as to adjust the color of the image acquired by the first camera.
10. The apparatus of claim 9, wherein the target image is a group of target images acquired by the first camera, and the reference image is a group of reference images acquired by the second camera; and the determining module is specifically configured to:
determining the target color balance parameter by linear fitting according to the color information of the overlapping area in the group of reference images and the color information of the overlapping area in the group of target images.
11. The apparatus of claim 9, wherein the determining module is further specifically configured to:
when the reference images are two groups of reference images respectively acquired by two second cameras, determining a first candidate color balance parameter by linear fitting according to the color information of the overlapping area in one group of reference images and the color information of the overlapping area in the group of target images;
determining a second candidate color balance parameter by linear fitting according to the color information of the overlapping area in the other group of reference images and the color information of the overlapping area in the group of target images;
and determining target color balance parameters of the group of target images according to the first candidate color balance parameters and the second candidate color balance parameters.
12. The apparatus of claim 11, wherein the determining module is further specifically configured to:
determining a ratio between the first candidate color balance parameter and the second candidate color balance parameter;
and taking the average value of the first candidate color balance parameter and the second candidate color balance parameter as the target color balance parameter of the group of target images under the condition that the ratio belongs to a set interval.
13. The apparatus of claim 11, wherein the candidate color balance parameters comprise a slope and/or an intercept obtained from the linear fitting, and the determining module is further configured to:
determining a first ratio between the slope in the first candidate color balance parameter and the slope in the second candidate color balance parameter, and/or determining a second ratio between the intercept in the first candidate color balance parameter and the intercept in the second candidate color balance parameter;
and under the condition that the first ratio and/or the second ratio belong to a set interval, taking the average value of the two slopes as the slope of the target color balance parameters of the group of target images, and/or taking the average value of the two intercepts as the intercept of the target color balance parameters of the group of target images.
14. The apparatus of any of claims 9-13, further comprising:
a mapping module, configured to map the target image and the reference image from an image coordinate system to a world coordinate system according to a mapping relation between the image coordinate system and the world coordinate system to obtain the overlapping area between the target image and the reference image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
16. A vehicle characterized by comprising the electronic device of claim 15.
17. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of claims 1-8.
18. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by a processor, implement the method of any one of claims 1-8.
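For illustration of the candidate-parameter consistency check recited in claims 3 to 5 (and mirrored in claims 11 to 13), a hedged sketch follows. The interval bounds (0.9 to 1.1), the identity fallback used when the two candidates disagree, and all names are assumptions made for exposition rather than the claimed method.

    # Hedged sketch of the consistency check: combine two candidate (slope, intercept)
    # pairs per channel, obtained from fits against the two adjacent reference cameras.
    def combine_candidates(first, second, low=0.9, high=1.1):
        """first, second: lists of (slope, intercept) per channel from the two linear fits."""
        combined = []
        for (s1, b1), (s2, b2) in zip(first, second):
            slope_ratio = s1 / s2 if s2 != 0 else float("inf")
            intercept_ratio = b1 / b2 if b2 != 0 else float("inf")
            if low <= slope_ratio <= high and low <= intercept_ratio <= high:
                # The candidates agree (ratios inside the set interval): average them.
                combined.append(((s1 + s2) / 2.0, (b1 + b2) / 2.0))
            else:
                # Assumed fallback (not specified in the claims): leave the channel unchanged.
                combined.append((1.0, 0.0))
        return combined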
CN202111389251.XA 2021-11-22 2021-11-22 Image color processing method, image color processing device, vehicle, electronic device, and storage medium Pending CN115460390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111389251.XA CN115460390A (en) 2021-11-22 2021-11-22 Image color processing method, image color processing device, vehicle, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111389251.XA CN115460390A (en) 2021-11-22 2021-11-22 Image color processing method, image color processing device, vehicle, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN115460390A true CN115460390A (en) 2022-12-09

Family

ID=84295082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111389251.XA Pending CN115460390A (en) 2021-11-22 2021-11-22 Image color processing method, image color processing device, vehicle, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN115460390A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050046703A1 (en) * 2002-06-21 2005-03-03 Cutler Ross G. Color calibration in photographic devices
CN104463786A (en) * 2014-12-03 2015-03-25 中国科学院自动化研究所 Mobile robot figure stitching method and device
CN110753217A (en) * 2019-10-28 2020-02-04 黑芝麻智能科技(上海)有限公司 Color balance method and device, vehicle-mounted equipment and storage medium
CN113496474A (en) * 2021-06-15 2021-10-12 中汽创智科技有限公司 Image processing method, device, all-round viewing system, automobile and storage medium

Similar Documents

Publication Publication Date Title
US9591237B2 (en) Automated generation of panning shots
US20180240265A1 (en) Systems and Methods for Depth-Assisted Perspective Distortion Correction
EP3061234B1 (en) Guided color grading for an extended dynamic range image
WO2017000484A1 (en) Panoramic image generation method and apparatus for user terminal
CN108712608B (en) Terminal equipment shooting method and device
CN107172353B (en) Automatic explosion method, device and computer equipment
CN108154514B (en) Image processing method, device and equipment
CN107835372A (en) Imaging method, device, mobile terminal and storage medium based on dual camera
CN106713755A (en) Method and apparatus for processing panoramic image
CN105578021A (en) Imaging method of binocular camera and apparatus thereof
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN108053438B (en) Depth of field acquisition method, device and equipment
US20150071534A1 (en) System, Device and Method for Displaying a Harmonized Combined Image
CN108024056A (en) Imaging method and device based on dual camera
CN107846556A (en) imaging method, device, mobile terminal and storage medium
CN112771843A (en) Information processing method, device and imaging system
CN113496474A (en) Image processing method, device, all-round viewing system, automobile and storage medium
JPH11242737A (en) Method for processing picture and device therefor and information recording medium
CN114390262A (en) Method and electronic device for splicing three-dimensional spherical panoramic image
CN109166076A (en) Luminance regulating method, device and the portable terminal of polyphaser splicing
EP1832108A2 (en) Automatic white balance control
CN109785390B (en) Method and device for image correction
CN111860632B (en) Multipath image consistency fusion method
US11825213B2 (en) Removal of image capture device from omnidirectional image created by stitching partial images
US8508632B2 (en) Image processing method of obtaining a high dynamic range image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination