WO2020011112A1 - Image processing method and system, readable storage medium and terminal - Google Patents

Image processing method and system, readable storage medium and terminal

Info

Publication number
WO2020011112A1
WO2020011112A1 (PCT/CN2019/094934)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
cameras
color
images
Prior art date
Application number
PCT/CN2019/094934
Other languages
English (en)
Chinese (zh)
Inventor
吴炽强
Original Assignee
奇酷互联网络科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奇酷互联网络科技(深圳)有限公司 filed Critical 奇酷互联网络科技(深圳)有限公司
Publication of WO2020011112A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising

Definitions

  • the present invention relates to the technical field of multi-camera imaging, and in particular, to an image processing method, system, readable storage medium, and terminal.
  • terminals such as mobile phones, tablets, computers, and cameras are becoming increasingly popular.
  • almost all smart devices now integrate camera lenses to provide photographing functions.
  • dual cameras can obtain the depth of field of an image to achieve background blur, and can also support effects such as optical zoom and color + black-and-white noise reduction.
  • Current terminals are moving towards this multi-camera trend.
  • although current dual cameras can obtain the depth of field of an image, the depth information cannot be calibrated, resulting in a poor background blur effect; for example, objects at the focus distance, such as faces and hair, are incorrectly blurred.
  • in addition, dual cameras cannot support both optical zoom and image noise reduction at the same time, so low-noise, high-definition images cannot be obtained.
  • the object of the present invention is to provide an image processing method, system, readable storage medium, and terminal to solve the technical problem that the existing dual cameras cannot obtain low noise and high definition images.
  • An image processing method is applied to a terminal.
  • the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area.
  • the method includes: acquiring a main image taken by the main camera, and synchronously acquiring a sub image taken by each of the sub-cameras; sequentially performing image synthesis on each sub image and the main image, and for each synthesis selecting one of the two images as a reference image according to a first preset rule, so as to synthesize multiple one-time optimized images; and performing image synthesis on the multiple one-time optimized images according to a second preset rule, so as to synthesize a secondary optimized image.
  • an image processing method may further have the following additional technical features:
  • the one-time optimized image is a depth image.
  • the step of performing image synthesis on a plurality of the one-time optimized images includes: selecting one of the one-time optimized images as a first reference image according to the second preset rule; obtaining a target image region having abnormal depth information in the first reference image; and obtaining target depth information of the same region as the target image region from the other one-time optimized images, and using the target depth information to correct the depth information of the target image region.
  • the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images taken by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
  • the main camera is a first color camera
  • all the sub-cameras include at least a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera.
  • the step of synthesizing the one-time optimized image from the images captured by the first color camera and the black-and-white camera includes: according to the first preset rule, selecting one image as a reference image from a first color image captured by the first color camera and a second luminance signal image captured by the black-and-white camera; splitting the first color image into a first luminance signal image and a first chrominance signal image; based on the selected reference image, performing noise reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image; and synthesizing the first chrominance signal image and the third luminance signal image to obtain the noise-reduced one-time optimized image.
  • the step of synthesizing the one-time optimized image from the images captured by the first color camera and the second color camera includes: according to the first preset rule, selecting one image as a reference image from a first color image captured by the first color camera and a second color image captured by the second color camera; determining the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and selectively performing blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear one-time optimized image.
  • the arrangement of the cameras on the terminal is any one of the following: when two sub-cameras are provided on the terminal, the connection lines between the main camera and the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all the cameras are arranged in a rectangular arrangement, and the four cameras are located at the four corner points of the rectangle.
  • a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
  • a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
  • the frame rate of each camera is the same.
  • An image processing system is applied to a terminal.
  • the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area.
  • an image processing system may also have the following additional technical features:
  • the one-time optimized image is a depth image.
  • the secondary synthesis module includes: a reference selection unit, configured to select one of the one-time optimized images as a first reference image according to the second preset rule; an abnormality acquisition unit, configured to acquire a target image region having abnormal depth information in the first reference image; and a depth calibration unit, configured to obtain target depth information of the same region as the target image region from the other one-time optimized images, and use the target depth information to correct the depth information of the target image region.
  • the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
  • the main camera is a first color camera
  • all the sub-cameras include at least a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera.
  • the one-time synthesis module includes: a first selection unit, configured to select, according to the first preset rule, one image as a reference image from a first color image captured by the first color camera and a second luminance signal image captured by the black-and-white camera;
  • an image splitting unit, configured to split the first color image into a first luminance signal image and a first chrominance signal image;
  • a first synthesizing unit, configured to perform noise reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a noise-reduced third luminance signal image;
  • a second synthesizing unit, configured to synthesize the first chrominance signal image and the third luminance signal image to obtain the noise-reduced one-time optimized image.
  • the one-time synthesis module further includes: a second selection unit, configured to select, according to the first preset rule, one image as a reference image from the first color image captured by the first color camera and a second color image captured by the second color camera; a relationship determining unit, configured to determine the subject-and-background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera; and an image blurring unit, configured to selectively perform blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, to obtain a clear one-time optimized image.
  • the arrangement of the cameras on the terminal is any one of the following: when two sub-cameras are provided on the terminal, the connection lines between the main camera and the two sub-cameras are perpendicular to each other; when three sub-cameras are provided on the terminal, all the cameras are arranged in a rectangular arrangement, and the four cameras are located at the four corner points of the rectangle.
  • a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
  • a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
  • the frame rate of each camera is the same.
  • the present invention also proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the image processing method as described above is implemented.
  • the present invention also provides a terminal including a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • the terminal is provided with a main camera and at least two sub-cameras, and all cameras are connected through a synchronization signal line.
  • the main camera and each of the sub-cameras have a common shooting area, and the processor implements the method described above when executing the program.
  • in the image processing method, system, readable storage medium, and terminal described above, multiple sub-cameras are arranged and a common shooting area exists between the main camera and each sub-camera, so that a main image and multiple sub images are acquired synchronously; each sub image is then synthesized with the main image to obtain multiple one-time optimized images. Each of these syntheses is a dual-shot synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple one-time optimized images are then synthesized to output a secondary optimized image.
  • the synthesis of multiple depth-of-field images can calibrate the depth-of-field information, and the clear images and noise-reduced images can be superimposed and synthesized with each other. The image processing method, system, readable storage medium, and terminal can therefore output a more accurate depth-of-field map to avoid erroneous blurring, and at the same time can output low-noise, high-definition images, improving the overall quality of the captured pictures.
  • FIG. 1 is a flowchart of an image processing method in a first embodiment of the present invention
  • FIG. 2 is a flowchart of an image processing method in a second embodiment of the present invention.
  • FIG. 5 is a flowchart of an image processing method in a third embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image processing system in a fourth embodiment of the present invention.
  • Image acquisition module 11; One-time synthesis module 12; Secondary synthesis module 13; Reference selection unit 131; Abnormality obtaining unit 132; Depth calibration unit 133; First selection unit 121; Image splitting unit 122; First synthesis unit 123; Second synthesis unit 124; Second selection unit 125; Relationship determination unit 126; Image blurring unit 127
  • FIG. 1 shows an image processing method according to a first embodiment of the present invention, which is applied to a terminal.
  • the terminal is provided with a main camera and at least two sub cameras, and all cameras are connected through a synchronization signal line. There is a common shooting area between the main camera and each of the sub cameras, and the image processing method includes steps S01 to S03.
  • in step S01, a main image captured by the main camera is acquired, and a sub image synchronously captured by each of the sub-cameras is respectively acquired.
  • the pins connected to the synchronization signal line of the main camera can be used as the synchronization signal output terminal, and the pins connected to the synchronization signal line of each sub camera are used as the synchronization signal input terminal, and the frame rate of each camera is the same.
  • when the main camera starts the exposure of a frame, it outputs a synchronization signal to trigger the exposure of each sub-camera at the same time, so that the frames output by all cameras are captured simultaneously, providing a basis for subsequent image synthesis.
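  • The following Python sketch is purely illustrative and is not part of the patent disclosure: it models the synchronization signal line as a shared object that the main camera pulses at the start of each frame exposure, while every sub-camera waits for that pulse before exposing, so all cameras capture the same frame index; the class and function names, frame rate, and camera count are assumptions.

```python
import threading
import time

class SyncSignalLine:
    """Toy model of the shared synchronization signal line (illustrative only)."""
    def __init__(self):
        self._cond = threading.Condition()
        self._frame_id = -1

    def emit(self, frame_id):             # main camera's sync-output pin
        with self._cond:
            self._frame_id = frame_id
            self._cond.notify_all()

    def wait_for(self, frame_id):         # each sub-camera's sync-input pin
        with self._cond:
            self._cond.wait_for(lambda: self._frame_id >= frame_id)

def main_camera(line, n_frames, frames):
    for i in range(n_frames):
        line.emit(i)                                  # start of exposure: pulse the line
        frames.setdefault(i, []).append("main")       # "expose" frame i
        time.sleep(1 / 30)                            # all cameras share the same frame rate

def sub_camera(name, line, n_frames, frames):
    for i in range(n_frames):
        line.wait_for(i)                              # expose only when the pulse arrives
        frames.setdefault(i, []).append(name)

frames, line = {}, SyncSignalLine()
subs = [threading.Thread(target=sub_camera, args=(f"sub{k}", line, 3, frames))
        for k in range(2)]
for t in subs:
    t.start()
main_camera(line, 3, frames)
for t in subs:
    t.join()
print(frames)   # each frame index holds one main image plus one image per sub-camera
```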
  • the type of each camera can be selected according to the requirements of the final output image; for example, if the final output image needs to be a more accurate depth map, each camera can be a camera capable of capturing depth information. The more requirements the output image has, the more cameras and camera types are needed.
  • in step S02, image synthesis is performed on each sub image and the main image in turn, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple one-time optimized images.
  • the reference image refers to the image, among the two images being synthesized, onto which the final optimized result is rendered. For example, when the first image and the second image are synthesized and the first image is selected as the reference image, the synthesized output is an optimized first image.
  • because the main camera and each sub-camera have a common shooting area, the main image and each sub image acquired synchronously share the same image area, so each sub image can be synthesized with the main image to optimize that shared image area and obtain a one-time optimized image.
  • images taken by different types of cameras have their own characteristics.
  • black and white cameras have the characteristics of low noise
  • combining images taken by different types of cameras also produces images with different characteristics; for example, a black-and-white image taken by a black-and-white camera and a color image taken by a color camera can be synthesized into a low-noise color image, reducing the noise in the image, and two images containing depth-of-field information can be combined to synthesize a depth-of-field image.
  • the first preset rule may be any one of the following rules:
  • a rule that preferentially selects the image with the larger or the smaller field of view; for example, if the field of view of the first image is larger than that of the second image, the first image is selected as the reference image, since the larger the field of view, the larger the image area.
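  • As an illustration only, the sketch below shows one way such a reference-selection rule could be expressed in code; the rule names, the use of a recorded field-of-view angle, and the variance-of-Laplacian sharpness measure for "image quality" are assumptions, not the patent's definitions.

```python
import cv2

def field_of_view(meta):
    # Illustrative: compare on a recorded diagonal field of view in degrees.
    return meta["fov_deg"]

def sharpness(img_bgr):
    # Variance of the Laplacian as a rough stand-in for "image quality".
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_reference(img_a, meta_a, img_b, meta_b, rule="largest_fov"):
    """Pick which of the two images to synthesize onto (the reference image)."""
    if rule == "largest_fov":
        return img_a if field_of_view(meta_a) >= field_of_view(meta_b) else img_b
    if rule == "smallest_fov":
        return img_a if field_of_view(meta_a) <= field_of_view(meta_b) else img_b
    if rule == "highest_quality":
        return img_a if sharpness(img_a) >= sharpness(img_b) else img_b
    raise ValueError(f"unknown rule: {rule}")
```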
  • in step S03, according to a second preset rule, image synthesis is performed on the plurality of one-time optimized images to synthesize a secondary optimized image.
  • the image synthesis in this step mainly integrates the characteristics of each one-time optimized image into a single secondary optimized image to obtain a better image effect.
  • in this embodiment, multiple sub-cameras are arranged and a common shooting area is provided between the main camera and each sub-camera, so that a main image and multiple sub images are acquired synchronously; each sub image is then synthesized with the main image to obtain multiple one-time optimized images.
  • each of these syntheses is a dual-shot synthesis, which can produce multiple depth-of-field images and/or at least one clear image and at least one noise-reduced image; the multiple one-time optimized images are then synthesized to output a secondary optimized image.
  • the synthesis of multiple depth-of-field images can calibrate the depth-of-field information, and the clear images and noise-reduced images can be superimposed and synthesized with each other. The image processing method, system, readable storage medium, and terminal can therefore output a more accurate depth-of-field map to avoid erroneous blurring, and at the same time can output low-noise, high-definition images, improving the overall quality of the captured pictures.
  • FIG. 2 shows an image processing method according to a second embodiment of the present invention, which is applied to a terminal.
  • the terminal is provided with a main camera and two sub-cameras, three cameras in total, and each camera is capable of capturing images that can be used to synthesize a depth-of-field image. All cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area.
  • the image processing method includes steps S11 to S15.
  • FIG. 3 shows the arrangement of the cameras on the terminal in this embodiment. Specifically, the connection lines between the main camera and the two sub-cameras are perpendicular to each other; this perpendicular arrangement makes the image information of the main camera and each sub-camera more complementary and gives a better-looking layout. At the same time, the three cameras and the flash are arranged in a rectangular arrangement and are located at the four corners of the rectangle.
  • Step S11 Acquire a main image taken by the main camera, and acquire a sub-image taken by each of the sub-cameras synchronously.
  • the pins connected to the synchronization signal line of the main camera can be used as the synchronization signal output terminal, and the pins connected to the synchronization signal line of each sub camera are used as the synchronization signal input terminal, and the frame rate of each camera can be the same.
  • since each camera is capable of capturing an image that can be used to synthesize a depth-of-field image, both the main image and the sub images obtained by synchronous shooting contain depth information.
  • in step S12, image synthesis is performed on each sub image and the main image in turn, and for each synthesis one of the two images is selected as the reference image according to the first preset rule, so as to synthesize multiple one-time optimized images.
  • the one-time optimized image is a depth image, which is synthesized according to the depth information of each image.
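  • The patent does not prescribe how the depth information is computed, so the sketch below is only one possible illustration: it derives a depth map for one synchronized main/sub pair from block-matching disparity with OpenCV, under the assumption that the pair is rectified and that the focal-length (in pixels) and baseline values are placeholder calibration numbers.

```python
import cv2
import numpy as np

def depth_from_pair(main_gray, sub_gray, focal_px=1400.0, baseline_m=0.012):
    """Illustrative disparity-based depth for one rectified main/sub pair.
    focal_px and baseline_m are made-up placeholder calibration values."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(main_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mark invalid matches
    return focal_px * baseline_m / disparity    # depth in metres

# usage (hypothetical file names): images must be single-channel and the same size
# main = cv2.imread("main.png", cv2.IMREAD_GRAYSCALE)
# sub = cv2.imread("sub0.png", cv2.IMREAD_GRAYSCALE)
# depth = depth_from_pair(main, sub)
```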
  • Step S13 According to the second preset rule, select one of the one-time optimized images as the first reference image.
  • the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras, for example, always selecting the one-time optimized image synthesized from the images captured by camera A and camera B as the first reference image; or a rule that preferentially selects the image with the highest image quality.
  • Step S14 Obtain a target image region with abnormal depth information in the first reference image.
  • the target image region with abnormal depth information in the first reference image can be obtained.
  • Step S15 Obtain target depth information of the same region as the target image region from the other one-time optimized images, and use the target depth information to correct the depth information of the target image region to obtain a secondary optimized image.
  • in this step, the depth information of the corresponding area of the reference image is corrected using the depth information of the same image area obtained from the other images, so that the reference image becomes a more accurate depth map before it is output.
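  • A minimal sketch of steps S13 to S15 under stated assumptions: the one-time optimized depth maps are pixel-aligned float arrays, "abnormal" depth is modelled as NaN or values outside a plausible range, and the second preset rule used here is the fewest-abnormal-pixels option; none of these conventions are mandated by the patent.

```python
import numpy as np

def abnormal_mask(depth, near=0.05, far=20.0):
    # Assumption: NaN or values outside a plausible range count as abnormal depth.
    return ~np.isfinite(depth) | (depth < near) | (depth > far)

def select_first_reference(depth_maps):
    # One option for the second preset rule: fewest abnormal pixels wins.
    return min(range(len(depth_maps)),
               key=lambda i: int(abnormal_mask(depth_maps[i]).sum()))

def calibrate_depth(depth_maps):
    """Steps S13-S15 (sketch): repair the reference map's abnormal regions
    using the same regions of the other one-time optimized depth maps."""
    ref_idx = select_first_reference(depth_maps)          # S13
    ref = depth_maps[ref_idx].copy()
    bad = abnormal_mask(ref)                              # S14: target image region
    for i, other in enumerate(depth_maps):                # S15: target depth information
        if i == ref_idx:
            continue
        usable = bad & ~abnormal_mask(other)
        ref[usable] = other[usable]
        bad &= ~usable
    return ref                                            # secondary optimized image
```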
  • the terminal may also be provided with a main camera and three sub-cameras, four cameras in total, with all cameras arranged in a rectangular arrangement and the flash arranged at the center of the area enclosed by the cameras.
  • the image processing method in the present invention is not limited to three-camera and four-camera configurations; in other embodiments, five or more cameras may be used, as determined by the requirements of the final output image.
  • the image processing method in the above embodiments of the present invention can realize self-calibration of the depth of field map during execution, and can output a more accurate depth of field map.
  • FIG. 5 shows an image processing method according to a third embodiment of the present invention, which is applied to a terminal.
  • the terminal is provided with a main camera and two sub cameras.
  • the main camera is a first color camera.
  • the sub-cameras are respectively a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera. All cameras are connected through a synchronization signal line, the main camera and each of the sub-cameras have a common shooting area, and the image processing method includes steps S21 to S29.
  • Step S21 Acquire a main image taken by the main camera, and acquire a sub-image taken by each of the sub-cameras synchronously.
  • the first color camera and the second color camera are both color imaging cameras, and the captured images are color images.
  • the color images generally have brightness information and chrominance information that can be split.
  • Black and white cameras are black and white imaging cameras.
  • the captured image is a black and white image, and the black and white image has only brightness information, low noise, and good image stability. Therefore, the main image and one of the sub-images are color images, and the other sub-image is a black and white image.
  • Step S22 According to a first preset rule, an image is selected as a reference image from a first color image captured by the first color camera and a second brightness signal image captured by the black and white camera.
  • Step S23 Split the first color image into a first luminance signal image and a first chrominance signal image.
  • Step S24 Based on the selected reference image, perform noise reduction synthesis on the first luminance signal image and the second luminance signal image to obtain a third luminance signal image after noise reduction.
  • one process for performing noise reduction synthesis on the first luminance signal image and the second luminance signal image captured by the black-and-white camera is to calculate, for each pixel, the average pixel value of the same pixel point in the first luminance signal image and the second luminance signal image, use that average as the final pixel value of the synthesized pixel, and render the final pixel values onto the reference image to obtain the third luminance signal image.
  • the second luminance signal image is a black-and-white image taken by the black-and-white camera; because the second luminance signal has low noise, the third luminance signal image obtained after the first luminance signal image and the second luminance signal image are synthesized also has the characteristic of low noise.
  • step S25 based on the selected reference image, the first chrominance signal image and the third luminance signal image are synthesized to obtain a one-time optimized image after noise reduction.
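  • A minimal sketch of steps S22 to S25, assuming the color and black-and-white images are already registered to the chosen reference image and using OpenCV's YCrCb conversion as the luminance/chrominance split; the per-pixel averaging simply mirrors the description above and is only one possible noise-reduction synthesis.

```python
import cv2
import numpy as np

def denoise_with_mono(first_color_bgr, second_luma_gray):
    """Steps S22-S25 (sketch): fuse a color image with a black-and-white image
    of the same scene to obtain a noise-reduced color image.
    Assumes both inputs are aligned to the selected reference image."""
    # S23: split the first color image into luminance + chrominance.
    ycrcb = cv2.cvtColor(first_color_bgr, cv2.COLOR_BGR2YCrCb)
    first_luma, cr, cb = cv2.split(ycrcb)

    # S24: noise-reduction synthesis as a per-pixel average of the two luminance images.
    third_luma = ((first_luma.astype(np.float32) +
                   second_luma_gray.astype(np.float32)) / 2.0).astype(np.uint8)

    # S25: recombine the original chrominance with the denoised luminance.
    fused = cv2.merge([third_luma, cr, cb])
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)   # noise-reduced one-time optimized image
```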
  • Step S26 According to the first preset rule, select one image as a reference image from the first color image and the second color image captured by the second color camera;
  • Step S27 Determine the subject and background relationship between the first color image and the second color image according to the focal lengths of the first color camera and the second color camera.
  • that is, the focal lengths of the lenses determine which of the first color image and the second color image serves as the subject image and which serves as the background image.
  • in step S28, based on the selected reference image and the determined subject-and-background relationship, the first color image and the second color image are selectively blurred and synthesized to obtain a clear one-time optimized image.
  • the selective blurring may be blurring of the background, blurring of the subject, or blurring of a certain image area, and may be preset or determined by the terminal processor.
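  • A rough sketch of steps S26 to S28 under stated assumptions: the image from the longer equivalent focal length is taken as the subject image and the wider one as the background image, a binary subject mask is assumed to be available (for example from the calibrated depth map), and the blurring is a plain Gaussian blur of the background; none of these specific choices are prescribed by the patent.

```python
import cv2
import numpy as np

def blur_synthesis(first_color, first_focal_mm, second_color, second_focal_mm,
                   subject_mask, blur_ksize=31):
    """Steps S26-S28 (sketch): keep the subject sharp and blur the background.
    Both color images and the mask must already be aligned to the reference."""
    # S27: the longer equivalent focal length frames the subject,
    # the shorter one frames the background.
    if first_focal_mm >= second_focal_mm:
        subject_img, background_img = first_color, second_color
    else:
        subject_img, background_img = second_color, first_color

    # S28: selective blurring synthesis of the background, subject kept sharp.
    blurred_bg = cv2.GaussianBlur(background_img, (blur_ksize, blur_ksize), 0)
    mask = (subject_mask > 0).astype(np.uint8)[..., None]     # HxWx1, values 0 or 1
    return subject_img * mask + blurred_bg * (1 - mask)       # clear one-time optimized image
```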
  • steps S22 to S25 mainly synthesize the images captured by the first color camera and the black-and-white camera into one one-time optimized image.
  • steps S26 to S28 mainly synthesize the images captured by the first color camera and the second color camera into another one-time optimized image.
  • in this embodiment, steps S22 to S25 are arranged and executed before steps S26 to S28, but the present invention is not limited thereto.
  • steps S26 to S28 can also be arranged and executed before steps S22 to S25, or the two groups of steps can be carried out synchronously.
  • Step S29 According to a second preset rule, image synthesis is performed on a plurality of the primary optimized images to synthesize a secondary optimized image.
  • the second preset rule may be: first selecting one of the one-time optimized images as a reference image, then performing pixel synthesis on the same image areas of the multiple one-time optimized images and rendering the result onto the corresponding areas of the reference image to obtain the final output image.
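  • As one possible reading of this secondary synthesis step (an assumption, since the patent leaves the pixel-level rule open), the sketch below chooses a reference among aligned one-time optimized images and averages all of them onto it to form the secondary optimized image.

```python
import numpy as np

def secondary_synthesis(primary_images, ref_index=0):
    """Illustrative secondary synthesis: average several aligned one-time
    optimized images and render the result onto the chosen reference image."""
    stack = np.stack([img.astype(np.float32) for img in primary_images])
    fused = stack.mean(axis=0)                 # per-pixel synthesis over the same areas
    out = primary_images[ref_index].copy()
    out[...] = fused.astype(out.dtype)         # render onto the reference image
    return out                                 # secondary optimized image
```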
  • the image processing method in the above embodiments of the present invention can support effects such as optical zoom and color + black and white noise reduction during execution, and can output low noise and high definition images.
  • the terminal may also be provided with a main camera and three sub-cameras, which are four cameras in total.
  • the arrangement of each camera may be arranged with reference to FIG. 4, and is not described herein again.
  • the image processing method in the present invention is not limited to three-camera and four-camera configurations; in other embodiments, five or more cameras may be used, as determined by the requirements of the final output image.
  • FIG. 6 shows an image processing system in a fourth embodiment of the present invention, which is applied to a terminal.
  • the terminal is provided with a main camera and at least two sub-cameras; all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area. The system includes:
  • An image acquisition module 11 configured to acquire a main image captured by the main camera, and respectively acquire a secondary image captured by each of the secondary cameras synchronously;
  • a one-time synthesis module 12, configured to sequentially perform image synthesis on each sub image and the main image, and, for each synthesis, select one of the two images as a reference image according to a first preset rule, so as to synthesize multiple one-time optimized images;
  • the secondary synthesis module 13 is configured to perform image synthesis on a plurality of the primary optimized images according to a second preset rule to synthesize a secondary optimized image.
  • the one-time optimized image is a depth image.
  • the secondary synthesis module 13 includes:
  • a reference selecting unit 131 configured to select one of the once optimized images as a first reference image according to the second preset rule
  • An abnormality obtaining unit 132 configured to obtain a target image region in which the depth information in the first reference image is abnormal
  • a depth calibration unit 133 is configured to obtain target depth information of the same area as the target image area from among the other optimized images, and use the target depth information to correct the depth information of the target image area.
  • the second preset rule includes any one of the following rules: a rule that preferentially selects the image with the least abnormal depth information; a rule that preferentially selects the image with the largest field of view; a rule that preferentially selects the image with the smallest field of view; a rule that preferentially selects the composite image of the images captured by two specific cameras; or a rule that preferentially selects the image with the highest image quality.
  • the main camera is a first color camera
  • all the sub-cameras include at least a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera.
  • the primary synthesis module 12 includes:
  • a first selecting unit 121 configured to select an image from a first color image captured by the first color camera and a second brightness signal image captured by the black and white camera according to the first preset rule As a reference image;
  • An image splitting unit 122 configured to split the first color image into a first luminance signal image and a first chrominance signal image
  • a first combining unit 123 configured to perform noise reduction synthesis on the first brightness signal image and the second brightness signal image to obtain a third brightness signal image after noise reduction;
  • the second synthesizing unit 124 is configured to synthesize the first chrominance signal image and the third luminance signal image to obtain the primary optimized image after noise reduction.
  • the one-time synthesis module 12 further includes:
  • a second selection unit 125, configured to select, according to the first preset rule, one image as the reference image from the first color image captured by the first color camera and a second color image captured by the second color camera;
  • a relationship determining unit 126 configured to determine a subject and background relationship between the first color image and the second color image according to a focal length of the first color camera and the second color camera;
  • an image blurring unit 127, configured to selectively perform blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, so as to obtain the clear one-time optimized image.
  • the arrangement manner of the cameras on the terminal is any one of the following situations:
  • when two sub-cameras are provided on the terminal, the connection lines between the main camera and the two sub-cameras are perpendicular to each other;
  • when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangular arrangement, and the four cameras are located at the four corner points of the rectangle.
  • a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal
  • a pin connected to each synchronization signal line of each of the sub cameras is used as a synchronization signal input terminal
  • the frame rate of each camera is the same.
  • the present invention also proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the image processing method as described above is implemented.
  • the present invention also provides a terminal including a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • the terminal is provided with a main camera and at least two sub-cameras, and all cameras are connected through a synchronization signal line.
  • the shooting area of the main camera is equal to or includes the shooting area of the sub camera, and the processor implements the method as described above when the processor executes the program.
  • the terminal includes, but is not limited to, a mobile phone, a computer, a tablet, a smart TV, a security device, a smart wearable device, and the like.
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • examples of computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • each part of the present invention may be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits, application-specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
  • An image processing method is applied to a terminal.
  • the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; the method includes:
  • acquiring a main image taken by the main camera, and synchronously acquiring a sub image taken by each of the sub-cameras; sequentially performing image synthesis on each sub image and the main image, and for each synthesis selecting one of the two images as a reference image according to a first preset rule, so as to synthesize multiple one-time optimized images; and, according to a second preset rule, performing image synthesis on a plurality of the one-time optimized images to synthesize a secondary optimized image.
  • A2 The image processing method according to A1, wherein the one-time optimized image is a depth image.
  • the image processing method according to A2, wherein the step of performing image synthesis on the plurality of one-time optimized images includes:
  • the second preset rule includes any one of the following rules:
  • the main camera is a first color camera
  • all the sub-cameras include at least a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera.
  • the step of synthesizing the images captured by the first color camera and the black-and-white camera into the one-time optimized image includes:
  • the step of synthesizing the images captured by the first color camera and the second color camera into the one-time optimized image includes:
  • the arrangement manner of the cameras on the terminal is any one of the following situations:
  • when two sub-cameras are provided on the terminal, the connection lines between the main camera and the two sub-cameras are perpendicular to each other;
  • when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangular arrangement, and the four cameras are located at the four corner points of the rectangle.
  • A9 The image processing method according to A1, wherein a pin connected to the synchronization signal line of the main camera is used as a synchronization signal output terminal, a pin connected to the synchronization signal line of each of the sub-cameras is used as a synchronization signal input terminal, and the frame rate of each camera is the same.
  • An image processing system applied to a terminal, wherein the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected by a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; the system includes:
  • An image acquisition module configured to acquire a main image captured by the main camera, and respectively acquire a secondary image captured by each of the secondary cameras synchronously;
  • a one-time synthesis module, configured to sequentially perform image synthesis on each sub image and the main image, and, for each synthesis, select one of the two images as a reference image according to a first preset rule, so as to synthesize multiple one-time optimized images;
  • a secondary synthesis module is configured to perform image synthesis on a plurality of the primary optimized images according to a second preset rule to synthesize one secondary optimized image.
  • a reference selecting unit configured to select one of the once optimized images as a first reference image according to the second preset rule
  • An abnormality obtaining unit configured to obtain a target image region in which depth information is abnormal in the first reference image
  • a depth calibration unit is configured to obtain target depth information of the same area as the target image area from among the other optimized images, and use the target depth information to correct the depth information of the target image area.
  • the main camera is a first color camera
  • all the sub-cameras include at least a black-and-white camera and a second color camera, and there is a difference between the equivalent focal lengths of the first color camera and the second color camera.
  • a first selection unit, configured to select, according to the first preset rule, one image as a reference image from a first color image captured by the first color camera and a second luminance signal image captured by the black-and-white camera;
  • An image splitting unit configured to split the first color image into a first luminance signal image and a first chrominance signal image
  • a first synthesis unit configured to perform noise reduction synthesis on the first brightness signal image and the second brightness signal image to obtain a third brightness signal image after noise reduction
  • a second synthesizing unit is configured to synthesize the first chrominance signal image and the third luminance signal image to obtain the primary optimized image after noise reduction.
  • the one-time synthesis module further includes:
  • a second selection unit, configured to select, according to the first preset rule, one image as a reference image from a first color image captured by the first color camera and a second color image captured by the second color camera;
  • a relationship determining unit configured to determine a subject and background relationship between the first color image and the second color image according to a focal length of the first color camera and the second color camera;
  • An image blurring unit configured to selectively perform blurring synthesis on the first color image and the second color image based on the selected reference image and the determined subject-and-background relationship, so as to obtain a clear one-time optimized image.
  • when two sub-cameras are provided on the terminal, the connection lines between the main camera and the two sub-cameras are perpendicular to each other;
  • when three sub-cameras are provided on the terminal, all cameras are arranged in a rectangular arrangement, and the four cameras are located at the four corner points of the rectangle.
  • a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of A1 to A9.
  • a terminal comprising a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • the terminal is provided with a main camera and at least two sub-cameras, all cameras are connected through a synchronization signal line, and the main camera and each of the sub-cameras have a common shooting area; when the processor executes the program, the method according to any one of A1 to A9 is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image processing method and system, a readable storage medium, and a terminal. The method is applied to a terminal on which a main camera and at least two sub-cameras are arranged, all of the cameras being connected via a synchronization signal line and all of the cameras having a common shooting area. The method comprises: acquiring images photographed synchronously by all cameras; performing image synthesis on each sub image and the main image in sequence, and selecting one of the images as a reference image according to a first preset rule during each synthesis process, so as to synthesize a plurality of primary optimized images; and performing image synthesis on the plurality of primary optimized images according to a second preset rule, so as to synthesize a secondary optimized image. By means of the image processing method and system, the readable storage medium, and the terminal of the present invention, a plurality of sub-cameras are arranged and image synthesis is performed repeatedly, so that a more accurate depth-of-field image can be output, erroneous blurring can be avoided, and a low-noise, high-definition image can also be output.
PCT/CN2019/094934 2018-07-09 2019-07-05 Procédé et système de traitement d'image, support de stockage lisible et terminal WO2020011112A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810747463.2 2018-07-09
CN201810747463.2A CN109064415A (zh) 2018-07-09 2018-07-09 图像处理方法、系统、可读存储介质及终端

Publications (1)

Publication Number Publication Date
WO2020011112A1 true WO2020011112A1 (fr) 2020-01-16

Family

ID=64819163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/094934 WO2020011112A1 (fr) 2018-07-09 2019-07-05 Procédé et système de traitement d'image, support de stockage lisible et terminal

Country Status (2)

Country Link
CN (1) CN109064415A (fr)
WO (1) WO2020011112A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051361A (zh) * 2022-06-30 2023-05-02 荣耀终端有限公司 图像维测数据的处理方法及装置

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064415A (zh) * 2018-07-09 2018-12-21 奇酷互联网络科技(深圳)有限公司 图像处理方法、系统、可读存储介质及终端
CN110312056B (zh) * 2019-06-10 2021-09-14 青岛小鸟看看科技有限公司 一种同步曝光方法和图像采集设备
CN110611765B (zh) * 2019-08-01 2021-10-15 深圳市道通智能航空技术股份有限公司 一种相机成像方法、相机系统及无人机
CN110620873B (zh) * 2019-08-06 2022-02-22 RealMe重庆移动通信有限公司 设备成像方法、装置、存储介质及电子设备
CN113518172B (zh) * 2020-03-26 2023-06-20 华为技术有限公司 图像处理方法和装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827754A (zh) * 2016-03-24 2016-08-03 维沃移动通信有限公司 一种高动态范围图像的生成方法及移动终端
CN108024054A (zh) * 2017-11-01 2018-05-11 广东欧珀移动通信有限公司 图像处理方法、装置及设备
CN109064415A (zh) * 2018-07-09 2018-12-21 奇酷互联网络科技(深圳)有限公司 图像处理方法、系统、可读存储介质及终端

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156987A (zh) * 2011-04-25 2011-08-17 深圳超多维光电子有限公司 获取场景深度信息的方法及装置
JP5843751B2 (ja) * 2012-12-27 2016-01-13 株式会社ソニー・コンピュータエンタテインメント 情報処理装置、情報処理システム、および情報処理方法
CN105160663A (zh) * 2015-08-24 2015-12-16 深圳奥比中光科技有限公司 获取深度图像的方法和系统
CN106210524B (zh) * 2016-07-29 2019-03-19 信利光电股份有限公司 一种摄像模组的拍摄方法及摄像模组
CN106993136B (zh) * 2017-04-12 2021-06-15 深圳市知赢科技有限公司 移动终端及其基于多摄像头的图像降噪方法和装置
CN107800827B (zh) * 2017-11-10 2024-05-07 信利光电股份有限公司 一种多摄像头模组的拍摄方法和多摄像头模组
CN107819992B (zh) * 2017-11-28 2020-10-02 信利光电股份有限公司 一种三摄像头模组及电子设备

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105827754A (zh) * 2016-03-24 2016-08-03 维沃移动通信有限公司 一种高动态范围图像的生成方法及移动终端
CN108024054A (zh) * 2017-11-01 2018-05-11 广东欧珀移动通信有限公司 图像处理方法、装置及设备
CN109064415A (zh) * 2018-07-09 2018-12-21 奇酷互联网络科技(深圳)有限公司 图像处理方法、系统、可读存储介质及终端

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051361A (zh) * 2022-06-30 2023-05-02 荣耀终端有限公司 图像维测数据的处理方法及装置
CN116051361B (zh) * 2022-06-30 2023-10-24 荣耀终端有限公司 图像维测数据的处理方法及装置

Also Published As

Publication number Publication date
CN109064415A (zh) 2018-12-21

Similar Documents

Publication Publication Date Title
WO2020011112A1 (fr) Procédé et système de traitement d'image, support de stockage lisible et terminal
CN107948519B (zh) 图像处理方法、装置及设备
US10897609B2 (en) Systems and methods for multiscopic noise reduction and high-dynamic range
US9288392B2 (en) Image capturing device capable of blending images and image processing method for blending images thereof
CN109598673A (zh) 图像拼接方法、装置、终端及计算机可读存储介质
CN107911682B (zh) 图像白平衡处理方法、装置、存储介质和电子设备
US10489885B2 (en) System and method for stitching images
CN111932587B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
US11184553B1 (en) Image signal processing in multi-camera system
EP3891974B1 (fr) Anti-dédoublement d'image et fusion à plage dynamique élevée
JP2019533957A (ja) 端末のための撮影方法及び端末
CN110958401A (zh) 一种超级夜景图像颜色校正方法、装置和电子设备
CN105120247A (zh) 一种白平衡调整方法及电子设备
WO2020029679A1 (fr) Procédé et appareil de commande, dispositif d'imagerie, dispositif électronique et support de stockage lisible
US11503223B2 (en) Method for image-processing and electronic device
WO2020215180A1 (fr) Procédé et appareil de traitement d'image, et dispositif électronique
KR101915036B1 (ko) 실시간 비디오 스티칭 방법, 시스템 및 컴퓨터 판독 가능 기록매체
CN112991245A (zh) 双摄虚化处理方法、装置、电子设备和可读存储介质
US11563898B2 (en) Apparatus and methods for generating high dynamic range media, based on multi-stage compensation of motion
US11032463B2 (en) Image capture apparatus and control method thereof
JP2019075716A (ja) 画像処理装置、画像処理方法、及びプログラム
CN112104796B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN109447925B (zh) 图像处理方法和装置、存储介质、电子设备
CN113014811A (zh) 图像处理装置及方法、设备、存储介质
WO2020244194A1 (fr) Procédé et système d'obtention d'une image de profondeur de champ peu profonde

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19834439

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19834439

Country of ref document: EP

Kind code of ref document: A1