WO2023077431A1 - Image processing method and apparatus, image acquisition device, and storage medium - Google Patents

Image processing method and apparatus, image acquisition device, and storage medium Download PDF

Info

Publication number
WO2023077431A1
WO2023077431A1 (PCT/CN2021/129018)
Authority
WO
WIPO (PCT)
Prior art keywords
white balance
image
camera
color
balance gain
Prior art date
Application number
PCT/CN2021/129018
Other languages
English (en)
Chinese (zh)
Inventor
吴伟霖
王雯琪
郑子翔
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/129018
Publication of WO2023077431A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, device, image acquisition device, and storage medium.
  • so that during zooming the obtained image remains relatively clear.
  • to this end, many image acquisition devices are equipped with multiple cameras with different focal lengths.
  • when the zoom magnification reaches the zoom magnification corresponding to the next camera, the device switches from the current camera to the next camera and uses the next camera to collect images, thereby achieving optical zoom and obtaining clear images.
  • however, the colors of the same scene in the images output by different cameras before and after camera switching can differ noticeably, which affects user experience.
  • the present application provides an image processing method, device, image acquisition device and storage medium.
  • an image processing method comprising:
  • using the color information of the first image captured by the first camera before the optical zoom to perform color processing on the second image captured by the second camera after the optical zoom to obtain a target image, wherein the color information is acquired based on a target pixel area in the first image, and the target pixel area has the same field of view as all or part of the pixel area of the second image;
  • an image processing device, where the device includes a processor, a memory, and a computer program stored in the memory for execution by the processor; when the processor executes the computer program, the following steps are implemented:
  • using the color information of the first image captured by the first camera before the optical zoom to perform color processing on the second image captured by the second camera after the optical zoom to obtain a target image, wherein the color information is acquired based on a target pixel area in the first image, and the target pixel area has the same field of view as all or part of the pixel area of the second image;
  • an image acquisition device is provided, and the image acquisition device includes a first camera, a second camera, and the image processing device mentioned in the second aspect above.
  • a computer-readable storage medium is provided, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed, the method mentioned in the above-mentioned first aspect is implemented.
  • the target pixel area can be determined from the first image captured by the first camera, where the target pixel area has the same field of view as all or part of the pixel areas in the second image captured by the second camera; the color information obtained based on the target pixel area is then used to perform color processing on the second image.
  • performing color processing on the second image with the color information of a target pixel area that shares the field of view of the second image's pixel area can eliminate the color difference between the images output by the two cameras caused by the difference in field of view, so that the colors of the images output by the two cameras for the same scene remain consistent before and after switching.
  • FIG. 1 is a schematic diagram of images output before and after camera switching according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an expanded pixel area obtained by extending a target pixel area according to an embodiment of the present application.
  • FIG. 4(a) is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 4(b) is a schematic diagram of an image processing method according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of the logical structure of an image processing device according to an embodiment of the present application.
  • a multi-camera relay zoom technology is currently provided, that is, one image acquisition device can be equipped with multiple cameras, and the focal lengths of these cameras can be different.
  • based on the zoom ratio, the device can switch to the camera whose focal length matches the zoom ratio, so as to use the switched camera for image acquisition.
  • in this way, the image acquisition device can support a larger focal length range, and the images obtained during the zooming process have relatively high definition and good quality.
  • during zooming, the field of view of the camera in use is constantly changing. For example, assume that the first camera is a short-focus camera with a larger field of view, and the second camera is a telephoto camera with a smaller field of view that covers only part of the field of view of the first camera.
  • digital zooming is first performed using images captured by the first camera, and when the zoom ratio matches the focal length of the second camera, the camera is switched to the second camera.
  • the embodiment of the present application provides an image processing method.
  • in this method, the target pixel area can be determined from the first image captured by the first camera, where the target pixel area has the same field of view as all or part of the pixel areas in the second image captured by the second camera; the color information obtained based on the target pixel area is then used to perform color processing on the second image.
  • by performing color processing on the second image with the color information of a target pixel area that has the same field of view as the pixel area of the second image, the influence of the field-of-view difference on the colors of the images output by the two cameras can be eliminated, avoiding a large color deviation for the same scene before and after the two cameras are switched.
  • the image processing method of the embodiment of the present application can be used to process images collected by various image acquisition devices equipped with at least two cameras, and the method can be executed by the image acquisition device or by a specialized image processing device.
  • the image acquisition device can be a mobile phone, a camera, a gimbal camera, a drone, an unmanned vehicle, and the like.
  • the image acquisition device may include a user interaction interface, and images acquired by the image acquisition device may be displayed on the user interaction interface for viewing by the user.
  • the image acquisition device may be communicatively connected with the control terminal, and the image acquisition device may send the acquired image to the control terminal to be displayed on the interactive interface of the control terminal for the user to view. For example, the image collected by the drone is sent to the user's mobile phone for the user to view.
  • the image acquisition device includes at least a first camera and a second camera, and the focal length supported by the first camera is not exactly the same as that supported by the second camera.
  • the first camera and the second camera can both be fixed-focus cameras, or both can be zoom cameras, or one can be a fixed-focus camera and the other can be a zoom camera.
  • here, "not exactly the same focal length" means that the focal length ranges supported by the first camera and the second camera are partly the same or completely different.
  • the image processing method may include the following steps:
  • in step S202, when using the image acquisition device to photograph the target shooting scene, the user may input a zoom command to perform zoom processing on the captured image.
  • the first image captured by the first camera can be digitally zoomed, and when the zoom ratio matches the focal length of the second camera, the first camera is switched to the second camera to realize optical zoom.
  • in step S204, since the focal lengths of the first camera and the second camera are different, their fields of view are also different. Therefore, after the camera switching is completed, the target pixel area can be determined from the first image captured by the first camera, where the target pixel area has the same field of view as all or part of the pixel areas in the second image captured by the second camera. Having the same field of view means that the scene content displayed by the two image areas is the same, but the magnification is different. Color processing is then performed on the second image using the color information of the target pixel area to obtain the target image.
  • the color processing can be white balance correction (AWB) processing on the image, color correction (CCM) processing on the image, color uniformity (color shading) processing on the image, and so on. It is not difficult to understand that any image processing that makes the color of the image meet the needs of the human eye falls within the protection scope of this application.
  • in this way, the color deviation of the image caused by the difference in the field of view of the two cameras can be eliminated, and the images output before and after camera switching can be guaranteed to show the same scene in consistent colors.
  • in some embodiments, the first image may be the last frame captured by the first camera before zooming, and the second image may be the first frame captured by the second camera after zooming.
  • in other embodiments, the first image may be the last frame captured by the first camera before zooming, and the second image may be every frame captured by the second camera after zooming.
  • S206: output the target image, wherein the image output by the first camera before optical zooming has the same color as the image area in the target image that has the same field of view.
  • in step S206, after color processing is performed on the second image to obtain the target image, the target image may be output. The processed target image is consistent in color with the image area that has the same field of view in the image output by the first camera, so there is no jump in the colors of the images output by the two cameras for the same scene, thereby improving user experience.
  • consistent color can mean that the colors of the image areas with the same field of view are exactly the same or roughly the same; that is, there may be a certain deviation between the two, but the deviation is controlled within a small range so that, to the human eye, there appears to be no color jump.
  • in some embodiments, white balance correction may be performed on the second image based on the white balance gain used to perform white balance correction on the first image.
  • parameters for other color processing, such as color correction and color uniformity processing, may be determined in a similar manner.
  • in this way, the processed image can better meet the needs of the human eye.
  • in some embodiments, the focal length of the first camera may be shorter than the focal length of the second camera, and the angle of view of the first camera covers the angle of view of the second camera; that is, the second image shows the enlarged content of a partial pixel area in the first image. In this case, the target pixel area may have the same field of view as all pixel areas of the second image.
  • in some embodiments, the focal length of the first camera may be greater than the focal length of the second camera, and the angle of view of the second camera covers the angle of view of the first camera; that is, the first image shows the enlarged content of a partial pixel area in the second image. In this case, the target pixel area may have the same field of view as a partial pixel area in the second image.
  • in some embodiments, the fields of view of the first camera and the second camera may only partially overlap. The target pixel area may then be the image area in the first image corresponding to the overlapping field of view, which has the same field of view as the image area in the second image corresponding to the overlapping field of view.
  • in some embodiments, the target pixel area may be determined from the first image based on the current zoom ratio and the relative positional relationship between the first camera and the second camera. As the zoom ratio changes, the content of the image output and displayed to the user also changes; since the image captured by the second camera corresponds to the zoomed view, the content it captures can be determined based on the zoom ratio. In addition, because there is a certain deviation between the center positions of the two cameras, the output content of the second camera is not exactly the content at the center of the first image, so the area in the first image corresponding to the image captured by the second camera needs to be determined based on the deviation of the camera centers, thereby obtaining the target pixel area.
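  • purely as an illustration (not a detail of this application), the following sketch computes such a crop region under the assumptions that the optical axes are roughly parallel, that zoom_ratio is expressed relative to the first camera's full field of view, and that the optical-center deviation has been calibrated in pixels:

```python
import numpy as np

def target_pixel_area(first_shape, zoom_ratio, center_offset_px=(0, 0)):
    """Hypothetical sketch: estimate the region of the first (wide) image that shares
    the field of view shown at the current zoom ratio."""
    h, w = first_shape[:2]
    crop_w, crop_h = int(w / zoom_ratio), int(h / zoom_ratio)  # field of view shrinks with zoom
    cx = w // 2 + center_offset_px[0]                          # calibrated center deviation
    cy = h // 2 + center_offset_px[1]
    x0 = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    return x0, y0, crop_w, crop_h                              # (left, top, width, height)
```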
  • when performing white balance correction on the second image based on the white balance gain of the first image, the first white balance gain can be determined based on the target pixel area; for example, the RGB values of the target pixel area can be obtained, and the first white balance gain determined based on those RGB values.
  • the method for determining the white balance gain may adopt a gray world method, a machine-learning-based white balance algorithm, and the like, which are not limited in the embodiments of the present application.
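  • as a minimal sketch of the gray world method mentioned above (one of the example approaches named there, not a mandated implementation):

```python
import numpy as np

def gray_world_gains(region_rgb):
    """Compute (R, G, B) white balance gains from a pixel region by assuming its
    average color is neutral gray and equalizing the channel means (green gain = 1)."""
    pixels = region_rgb.reshape(-1, 3).astype(np.float64)
    r_mean, g_mean, b_mean = pixels.mean(axis=0)
    return np.array([g_mean / r_mean, 1.0, g_mean / b_mean])
```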
  • for example, the two cameras may differ in their sensitivity to red light, so that even when the same amount of red light is incident on the two cameras, the intensities of the light signals sensed by their image sensors differ, resulting in color deviation between the images captured by the two cameras.
  • the color mapping relationship between the two cameras can be calibrated in advance.
  • in some embodiments, the first white balance gain can be mapped to the color space of the second camera according to the color mapping relationship between the first camera and the second camera to obtain the second white balance gain; the second white balance gain is then used to perform white balance correction on the second image to obtain and output the target image.
  • in some embodiments, white balance correction may be performed on the first image based on the first white balance gain, and the white-balance-corrected first image is then digitally zoomed and output.
  • in conventional digital zooming, the change of field of view brought about by the change of zoom ratio is not considered, and the color is always determined from the image as a whole, so the image output during digital zooming does not show the true color corresponding to its actual field of view.
  • in contrast, the embodiment of the present application can dynamically determine the target pixel area based on the magnification and use the target pixel area to determine the first white balance gain, which is then used to correct the white balance of the image output by the first camera, so that the image output during the digital zoom process better matches the real color and the perceived color remains consistent before and after the camera is switched.
  • considering that the shooting scene may be complicated, if the target pixel area consistent with the display content of the second image is determined in the first image directly according to the zoom ratio and the relative positional relationship of the two cameras, and the first white balance gain is determined based only on that target pixel area, the determined gain may be less accurate.
  • therefore, in some embodiments, the target pixel area can be extended outward by a certain range according to a preset expansion ratio, so that the pixel areas surrounding the target pixel area are included, obtaining an expanded pixel area; the first white balance gain can then be determined based on the RGB values of the expanded pixel area.
  • the expansion ratio can be set based on actual requirements, for example, it can be set to 5%, 7% and so on.
  • in this way, the field of view of the expanded pixel area remains similar to that of the second image as a whole, while ensuring that the determined first white balance gain is more accurate.
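  • purely as an illustration of the expansion step, assuming a rectangular area clamped to the image bounds and using the 5% example ratio from above:

```python
def expand_area(x0, y0, w, h, image_w, image_h, ratio=0.05):
    """Grow the target pixel area outward by the preset expansion ratio on each side,
    clamped to the image bounds, to obtain the expanded pixel area."""
    dx, dy = int(w * ratio), int(h * ratio)
    nx0, ny0 = max(0, x0 - dx), max(0, y0 - dy)
    nx1, ny1 = min(image_w, x0 + w + dx), min(image_h, y0 + h + dy)
    return nx0, ny0, nx1 - nx0, ny1 - ny0
```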
  • under different color temperatures, the white balance gain used to correct the image will differ, and the correspondence between color temperature and white balance gain can be determined in advance.
  • in some embodiments, the first color temperature corresponding to the first white balance gain can be determined first; the first color temperature is then mapped to the color space of the second camera according to the color mapping relationship between the first camera and the second camera to obtain the second color temperature, and the white balance gain corresponding to the second color temperature is determined as the second white balance gain.
  • the color mapping relationship is related to the spectral response characteristics of the two cameras, which can be established during the tuning process, or can be determined according to the spectral response deviations of the image sensors, lenses, and filters of the two cameras. Based on the established color mapping relationship, the photographing device can automatically look up a table to determine the second color temperature corresponding to the first color temperature.
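  • a hypothetical sketch of that table lookup is given below; the calibration values and the nearest-neighbor lookup are invented for illustration and would in practice come from the tuning or calibration process described above:

```python
# Hypothetical calibration data (illustrative values only).
CAM1_TO_CAM2_CCT = {2800: 2750, 4000: 3950, 5600: 5650, 6500: 6550}   # color temperature map
CAM2_CCT_TO_GAIN = {2750: (2.05, 1.0, 1.05), 3950: (1.70, 1.0, 1.30),
                    5650: (1.48, 1.0, 1.58), 6550: (1.36, 1.0, 1.74)}  # (R, G, B) gains

def nearest_key(table, value):
    # pick the calibrated entry closest to the requested value
    return min(table, key=lambda k: abs(k - value))

def second_white_balance_gain(first_color_temperature):
    """Map the first color temperature into the second camera's color space, then look
    up the second camera's white balance gain for the resulting second color temperature."""
    second_cct = CAM1_TO_CAM2_CCT[nearest_key(CAM1_TO_CAM2_CCT, first_color_temperature)]
    return CAM2_CCT_TO_GAIN[nearest_key(CAM2_CCT_TO_GAIN, second_cct)], second_cct
```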
  • in some embodiments, a third white balance gain can be further determined based on the RGB values of the second image, and the second white balance gain and the third white balance gain can then be combined to perform white balance correction on the multiple frames of second images collected by the second camera, obtaining multiple frames of target images, so that the color of the multi-frame target images output by the second camera transitions from the color corrected by the second white balance gain to the color corrected by the third white balance gain.
  • when combining the second white balance gain and the third white balance gain, for example, the two white balance gains can be mixed according to a certain ratio, gradually reducing the proportion of the second white balance gain and increasing the proportion of the third white balance gain.
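  • one possible sketch of that gradual mixing is shown below; the linear weighting and the 30-frame transition length are assumptions of the sketch rather than values specified above:

```python
import numpy as np

def blended_gain(second_gain, third_gain, frame_idx, transition_frames=30):
    """Linearly mix the second and third white balance gains across frames so the
    output color transitions from the second-gain correction to the third-gain one."""
    t = min(frame_idx / transition_frames, 1.0)   # 0 -> second gain, 1 -> third gain
    return (1.0 - t) * np.asarray(second_gain, dtype=float) + t * np.asarray(third_gain, dtype=float)
```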
  • the target pixel area may have the same field of view as a reference pixel area in the second image, where the reference pixel area may be the entire pixel area of the second image, or a partial pixel area in the second image.
  • a fourth white balance gain can be determined based on the first image, and a fifth white balance gain can be determined based on the second image. An offset coefficient is then determined according to the color deviation between the target pixel area after white balance correction with the fourth white balance gain and the reference pixel area after white balance correction with the fifth white balance gain; the offset coefficient characterizes the color deviation that remains, after white balance correction, between areas with the same field of view in the images acquired by the two cameras. The fifth white balance gain can then be adjusted based on the offset coefficient to obtain a sixth white balance gain, and the sixth white balance gain is used to perform white balance correction on the second image to obtain and output the target image.
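  • the text above does not fix the mathematical form of the offset coefficient; the sketch below assumes a simple per-channel ratio of mean colors and a multiplicative adjustment, purely for illustration:

```python
import numpy as np

def offset_coefficient(target_area_rgb, ref_area_rgb, fourth_gain, fifth_gain):
    """Per-channel ratio between the mean color of the target pixel area corrected with
    the fourth gain and the mean color of the reference pixel area corrected with the
    fifth gain (an assumed form of the residual color deviation)."""
    target_mean = target_area_rgb.reshape(-1, 3).mean(axis=0) * np.asarray(fourth_gain)
    ref_mean = ref_area_rgb.reshape(-1, 3).mean(axis=0) * np.asarray(fifth_gain)
    return target_mean / ref_mean

def sixth_white_balance_gain(fifth_gain, offset):
    """Adjust the fifth white balance gain by the offset coefficient (assumed to be
    multiplicative) to obtain the sixth white balance gain."""
    return np.asarray(fifth_gain) * offset
```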
  • the target pixel area and the reference pixel area can be pre-calibrated according to the relative positional relationship between the first camera and the second camera.
  • for example, the regions in the images captured by the two cameras that are completely aligned with the same real scene can be used as the target pixel area and the reference pixel area.
  • when determining the fourth white balance gain based on the first image, it may be determined directly from the RGB values of the full first image. In some embodiments, in order to eliminate the influence of the field-of-view difference, the fourth white balance gain may instead be determined only from the RGB values of the target pixel area.
  • similarly, the fifth white balance gain may in some embodiments be determined from the RGB values of the full second image, or, in other embodiments, from the RGB values of the reference pixel area so as to eliminate the influence of the field-of-view difference.
  • in some embodiments, after white balance correction is performed on the target pixel area using the fourth white balance gain, color correction may be further performed on the white-balance-corrected target pixel area; likewise, after white balance correction is performed on the reference pixel area using the fifth white balance gain, color correction may be further performed on the white-balance-corrected reference pixel area. The offset coefficient can then be determined according to the color deviation between the color-corrected target pixel area and the color-corrected reference pixel area.
  • determining the offset coefficient in this way more truly reflects the deviation between the final output image of the first camera and the final output image of the second camera, so that after the second image is corrected with the white balance gain adjusted by the offset coefficient, the color of the same scene in the obtained target image is closer to that in the image output by the first camera.
  • the correction matrix used for color correction of an image is also related to the color temperature; to ensure the accuracy of the correction result, the correction matrix is often different under different color temperatures. Therefore, in some embodiments, when performing color correction on the target pixel area, a correction matrix may be determined based on the third color temperature corresponding to the fourth white balance gain, and that correction matrix is then used to perform color correction on the target pixel area. Similarly, when performing color correction on the reference pixel area, a correction matrix may be determined based on the fourth color temperature corresponding to the fifth white balance gain, and that correction matrix is then used to perform color correction on the reference pixel area.
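  • a hypothetical sketch of selecting a correction matrix by color temperature is given below; the matrix values and the two calibrated color temperatures are invented for illustration (rows normalized to sum to 1), and nearest-neighbor selection stands in for whatever interpolation the tuning process actually uses:

```python
import numpy as np

# Hypothetical calibrated CCMs for two color temperatures (illustrative values only).
CCM_BY_CCT = {
    4000: np.array([[1.55, -0.40, -0.15],
                    [-0.30, 1.45, -0.15],
                    [-0.05, -0.50, 1.55]]),
    6500: np.array([[1.70, -0.55, -0.15],
                    [-0.25, 1.50, -0.25],
                    [-0.05, -0.40, 1.45]]),
}

def ccm_for_color_temperature(cct):
    """Pick the calibrated correction matrix whose color temperature is closest to the
    given one (a simple nearest-neighbor choice)."""
    return CCM_BY_CCT[min(CCM_BY_CCT, key=lambda k: abs(k - cct))]
```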
  • the presented color is then the real color of the image output by the second camera, that is, corrected only by the sixth white balance gain. In this way, the first camera does not need to work all the time to determine the fourth white balance gain from the collected first image and send it to the second camera.
  • the image acquisition device includes a short-focus camera 1 and a telephoto camera 2
  • during the zooming process in which camera 1 switches to camera 2 (switching from camera 2 to camera 1 is similar), differences between camera 1 and camera 2 in angle of view and in their response characteristics to light result in a large difference in the color of the image areas with the same field of view in the images output before and after the camera switch.
  • the following image processing methods are provided to make the output image color consistent before and after camera switching.
  • the image area with the same field of view as image 2 can be determined from image 1 collected by camera 1 based on the zoom ratio, and the white balance gain can be determined based on that image area to correct image 2, so that the difference between the images output by the two cameras caused by their different fields of view can be eliminated.
  • after camera 1 captures image 1, the ROI area in image 1 can be determined based on the current zoom ratio and the positional relationship between the two cameras, where the ROI area has the same field of view as image 2 captured by camera 2.
  • white balance correction of image 1 can be performed using white balance gain 1, and then, based on the zoom ratio, a digital zoom image is obtained by sampling from the central area of the white-balance-corrected image 1 and output.
  • camera 2 can also determine white balance gain 3 and color temperature 3 based on the RGB values of image 2.
  • white balance correction can be performed on the multiple frames of image 2 output by camera 2 by combining white balance gain 2 and white balance gain 3, so that the color of the output optical zoom image transitions from the color corrected by white balance gain 2 to the color corrected by white balance gain 3.
  • in this way, at the moment the camera switches, the color is consistent with that of image 1 captured by camera 1, and it then gradually transitions to the real color of image 2 captured by camera 2, so there is no color jump but a slow transition.
  • in another embodiment, ROI areas whose fields of view are completely aligned can be selected from the images collected by the two cameras, the offset coefficient is determined based on the color deviation between the ROI areas after white balance correction, and the image color is adjusted using the offset coefficient so that the colors of the images output by the two cameras are consistent for the same scene. As shown in FIG. 4(b), this specifically includes the following steps:
  • ROI area 1 can be determined from image 1 collected by camera 1, and ROI area 2 can be determined from image 2 collected by camera 2, where ROI area 1 and ROI area 2 have the same field of view and are completely aligned regions of the two cameras.
  • ROI area 1 and ROI area 2 can be pre-calibrated based on the positional relationship of the two cameras.
  • white balance gain 1 and color temperature 1 can be determined based on the RGB values of ROI area 1, white balance correction of ROI area 1 can be performed using white balance gain 1, and the color correction matrix CCM1 corresponding to color temperature 1 can then be determined and used to further color correct the white-balance-corrected ROI area 1.
  • white balance gain 2 and color temperature 2 can be determined based on the RGB values of ROI area 2, white balance correction of ROI area 2 can be performed using white balance gain 2, and the color correction matrix CCM2 corresponding to color temperature 2 can then be determined and used to further color correct the white-balance-corrected ROI area 2.
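  • a minimal sketch of that per-ROI correction chain (white balance gain followed by a 3x3 CCM) might look as follows, assuming pixels normalized to [0, 1]:

```python
import numpy as np

def correct_roi(roi_rgb, wb_gain, ccm):
    """Apply the per-channel white balance gain, then the 3x3 color correction matrix,
    to an ROI whose pixels are assumed to be normalized RGB in [0, 1]."""
    balanced = roi_rgb.astype(np.float64) * np.asarray(wb_gain)     # white balance step
    corrected = balanced.reshape(-1, 3) @ np.asarray(ccm).T         # CCM step (per pixel)
    return np.clip(corrected, 0.0, 1.0).reshape(roi_rgb.shape)
```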
  • the embodiment of the present application also provides an image processing device.
  • the processor 51 can implement the following steps when executing the computer program:
  • using the color information of the first image captured by the first camera to perform color processing on the second image captured by the second camera to obtain a target image, wherein the color information is acquired based on the target pixel area in the first image, and the target pixel area has the same field of view as all or part of the pixel area of the second image;
  • the processor is configured to use the color information of the first image captured by the first camera to perform color processing on the second image captured by the second camera, specifically for:
  • when the processor is configured to perform white balance correction on the second image based on the white balance gain for performing white balance correction on the first image, it is specifically configured to:
  • obtain the second white balance gain based on the color mapping relationship between the first camera and the second camera and the first white balance gain, wherein the color mapping relationship is determined based on the spectral response characteristics of the first camera and the spectral response characteristics of the second camera;
  • the processor is configured to use the target pixel area to determine the first white balance gain, specifically for:
  • extend the target pixel area to obtain an extended pixel area, wherein the extended pixel area includes the target pixel area and pixel areas around the target pixel area;
  • the first white balance gain is determined by using RGB values of the extended pixel area.
  • when the processor is configured to obtain the second white balance gain based on the color mapping relationship between the first camera and the second camera and the first white balance gain, it is specifically configured to:
  • a white balance gain corresponding to the second color temperature is determined as the second white balance gain.
  • the processor is further configured to:
  • white balance correction is performed on multiple frames of second images collected by the second camera to obtain multiple frames of target images, so that the color of the multiple frames of target images output by the second camera transitions from the color corrected by the second white balance gain to the color corrected by the third white balance gain.
  • the viewing angle of the first camera covers the viewing angle of the second camera, and the target pixel area has the same field of view as all pixel areas of the second image;
  • the viewing angle of the second camera covers the viewing angle of the first camera, and the target pixel area has the same field of view as a partial pixel area of the second image.
  • the target pixel area is determined based on a zoom factor and a relative positional relationship between the first camera and the second camera.
  • the target pixel area has the same field of view as the reference pixel area in the second image
  • when the processor is configured to perform white balance correction on the second image based on the white balance gain for performing white balance correction on the first image, it is specifically configured to:
  • when the processor is configured to determine the fourth white balance gain based on the first image, it is specifically configured to: determine the fourth white balance gain based on the RGB values of the target pixel area in the first image;
  • determining the fifth white balance gain based on the second image comprises: determining the fifth white balance gain based on the RGB values of the reference pixel area in the second image.
  • when the processor is configured to determine the offset coefficient based on the color deviation between the target pixel area after white balance correction using the fourth white balance gain and the reference pixel area after white balance correction using the fifth white balance gain, it is specifically configured to:
  • the offset coefficient is determined based on a color deviation between the color-corrected target pixel area and the color-corrected reference pixel area.
  • the correction matrix for performing color correction on the target pixel area is determined based on a third color temperature, and the third color temperature is determined based on the fourth white balance gain;
  • a correction matrix for performing color correction on the white balance corrected reference pixel area is determined based on a fourth color temperature, and the fourth color temperature is determined based on a fifth white balance gain.
  • the target pixel area and the reference pixel area are pre-calibrated and obtained based on the relative positional relationship between the first camera and the second camera.
  • the processor is further configured to:
  • white balance correction is performed on multiple frames of second images collected by the second camera to obtain multiple frames of target images, so that the color of the multiple frames of target images output by the second camera transitions from the color corrected by the sixth white balance gain to the color corrected by the fifth white balance gain.
  • an image acquisition device is provided in an embodiment of the present application, and the image acquisition device includes a first camera, a second camera, and the image processing apparatus according to any one of the above embodiments.
  • the image acquisition device can be any of various devices equipped with at least two cameras, such as a gimbal camera, a mobile phone, and a drone.
  • the embodiments of this specification further provide a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the method in any of the foregoing embodiments is implemented.
  • Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including but not limited to magnetic disk storage, CD-ROM, optical storage, etc.) having program code embodied therein.
  • Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of storage media for computers include, but are not limited to: phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape magnetic disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • as for the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the description of the method embodiment for the relevant parts.
  • the device embodiments described above are only illustrative, and the units described as separate components may or may not be physically separated; the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, and this can be understood and implemented by those skilled in the art without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

Disclosed are an image processing method and apparatus, an image acquisition device, and a storage medium. The method comprises: for the same target shooting scene, switching from a first camera of an image acquisition device to a second camera so as to realize optical zoom (S202); using color information of a first image collected by the first camera to perform color processing on a second image collected by the second camera so as to obtain a target image, the color information being acquired on the basis of a target pixel area in the first image, and the target pixel area having the same field of view as all or some pixel areas of the second image (S204); and outputting the target image, wherein image areas having the same field of view in an image output by the first camera before the optical zoom and in the target image have the same color (S206). By means of this method, the effect of a field-of-view difference on image color can be eliminated, thereby ensuring that identical scenes in images output before and after camera switching have the same color.
PCT/CN2021/129018 2021-11-05 2021-11-05 Procédé et appareil de traitement d'image, et dispositif de collecte d'image et support d'enregistrement WO2023077431A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/129018 WO2023077431A1 (fr) 2021-11-05 2021-11-05 Procédé et appareil de traitement d'image, et dispositif de collecte d'image et support d'enregistrement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/129018 WO2023077431A1 (fr) 2021-11-05 2021-11-05 Procédé et appareil de traitement d'image, et dispositif de collecte d'image et support d'enregistrement

Publications (1)

Publication Number Publication Date
WO2023077431A1 true WO2023077431A1 (fr) 2023-05-11

Family

ID=86240480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129018 WO2023077431A1 (fr) 2021-11-05 2021-11-05 Procédé et appareil de traitement d'image, et dispositif de collecte d'image et support d'enregistrement

Country Status (1)

Country Link
WO (1) WO2023077431A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117478802A (zh) * 2023-10-30 2024-01-30 神力视界(深圳)文化科技有限公司 图像处理方法、装置及电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013207721A (ja) * 2012-03-29 2013-10-07 Fujifilm Corp 撮像装置、並びにそのホワイトバランス補正方法及びホワイトバランス補正プログラム
CN105227945A (zh) * 2015-10-21 2016-01-06 维沃移动通信有限公司 一种自动白平衡的控制方法及移动终端
CN107277355A (zh) * 2017-07-10 2017-10-20 广东欧珀移动通信有限公司 摄像头切换方法、装置及终端
CN107343190A (zh) * 2017-07-25 2017-11-10 广东欧珀移动通信有限公司 白平衡调节方法、装置和终端设备
US20170359494A1 (en) * 2016-06-12 2017-12-14 Apple Inc. Switchover control techniques for dual-sensor camera system
CN111314683A (zh) * 2020-03-17 2020-06-19 Oppo广东移动通信有限公司 白平衡调节方法及相关设备



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21962944

Country of ref document: EP

Kind code of ref document: A1