WO2019109805A1 - Method and device for processing image - Google Patents

Method and device for processing image

Info

Publication number: WO2019109805A1 (PCT/CN2018/116752)
Authority: WIPO (PCT)
Prior art keywords: image, images, main, target, sub
Application number: PCT/CN2018/116752
Other languages: French (fr), Chinese (zh)
Inventors: 谭国辉, 姜小刚
Original Assignee: Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2019109805A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H04N 23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N 5/00 - Details of television systems
    • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 - Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N 5/265 - Mixing

Definitions

  • the present application relates to the field of image processing, and in particular, to an image processing method and apparatus.
  • At present, in order to meet users' needs in production and daily life, the functions of terminal devices have become increasingly diversified, and terminal devices provide image processing functions that meet various user needs, for example a photo beautification function that satisfies the user's need to retouch portraits in photos, or a filter-adding function that beautifies the user's photos.
  • In the related art, after receiving the image processing function selected by the user, the terminal device applies that function uniformly to the whole image. In practical applications, however, the user may want different image processing applied to different regions of the image, and the above image processing method can hardly meet such personalized image processing requirements.
  • the present application provides an image processing method and apparatus to solve the technical problem in the prior art that it is difficult to perform corresponding image processing on different areas in an image.
  • An embodiment of the present application provides an image processing method, including: controlling a main camera to capture multiple sets of main images while controlling a sub-camera to capture multiple sets of sub-images; acquiring a reference main image from the multiple sets of main images, and acquiring, from the multiple sets of sub-images, a reference sub-image captured in the same group as the reference main image; performing synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2; and acquiring corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performing corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
  • Another embodiment of the present application provides an image processing apparatus, including: a photographing module configured to control a main camera to capture multiple sets of main images while controlling a sub-camera to capture multiple sets of sub-images; a first acquiring module configured to acquire a reference main image from the multiple sets of main images and to acquire, from the multiple sets of sub-images, a reference sub-image captured in the same group as the reference main image; a first processing module configured to perform synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2; and a second processing module configured to acquire corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images and to perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
  • A further embodiment of the present application provides a computer device including a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the image processing method described in the above embodiments of the present application.
  • a further embodiment of the present application provides a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements an image processing method as described in the above embodiments of the present application.
  • The main camera is controlled to capture multiple sets of main images while the sub-camera captures multiple sets of sub-images; a reference main image is acquired from the multiple sets of main images, and a reference sub-image captured in the same group as the reference main image is acquired from the multiple sets of sub-images. A first thread performs synthesis noise reduction on the multiple sets of main images to generate a target main image, while (N-1) threads acquire, according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with those threads. Corresponding (N-1) target regions are then acquired in the target main image according to the depth-of-field information of the (N-1) target images, and corresponding image processing is performed on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Thereby, corresponding image processing is applied to different regions, and image processing efficiency is improved.
  • FIG. 1 is a flow chart of an image processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the principle of triangulation according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of acquiring depth information of a dual camera according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image processing scenario according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a scene of a tourist photograph according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application.
  • FIG. 9 is a schematic diagram of an image processing circuit in accordance with an embodiment of the present application.
  • the execution body of the image processing method and apparatus of the present application may be a terminal device, where the terminal device may be a hardware device with a dual camera, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
  • the wearable device can be a smart bracelet, a smart watch, smart glasses, and the like.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes:
  • Step 101: the main camera is controlled to capture multiple sets of main images, while the sub-camera is controlled to capture multiple sets of sub-images.
  • Specifically, the terminal device takes photos through a dual camera system, which calculates depth-of-field information from the main image captured by the main camera and the sub-image captured by the sub-camera. The dual camera system includes a main camera that acquires the main image of the photographed subject and a sub-camera that assists the main image in acquiring depth-of-field information; the main camera and the sub-camera may be arranged along the horizontal direction or along the vertical direction, among other layouts. In order to describe more clearly how the dual cameras acquire depth-of-field information, the principle is explained below with reference to the drawings.
  • In practice, the human eye resolves depth mainly through binocular vision, which is the same principle by which dual cameras resolve depth-of-field information, relying mainly on the triangulation (triangular ranging) principle shown in FIG. 2. In FIG. 2, the imaged object, the positions O_R and O_T of the two cameras, and the focal plane of the two cameras are drawn in real space; the distance between the focal plane and the plane where the two cameras are located is f, and the two cameras form images at the focal plane, thereby obtaining two captured images.
  • Here, P and P' are the positions of the same object in the two captured images. The distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
  • Based on the triangulation principle, the distance Z between the object in FIG. 2 and the plane where the two cameras are located satisfies (B - (X_R - X_T)) / B = (Z - f) / Z, from which it follows that Z = B * f / (X_R - X_T) = B * f / d, where d is the difference between the positions of the same object in the two captured images. Since B and f are constant values, the distance Z of the object can be determined according to d.
  • Of course, besides triangulation, other methods may be used to calculate the depth-of-field information of the main image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to the displacement difference, posture difference and the like between the images formed by the main camera and the sub-camera; therefore, in one embodiment of the present application, the above distance Z can be obtained from this proportional relationship.
  • For example, as shown in FIG. 3, a map of the differences between corresponding points is calculated from the main image acquired by the main camera and the sub-image acquired by the sub-camera, represented here by a disparity map, which records the displacement difference between the same points on the two images; since the displacement difference in triangulation is proportional to Z, the disparity map is often used directly as the depth-of-field map.
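
The relation Z = B * f / d above translates directly into code. The following sketch is illustrative only and is not part of the patent; it assumes a rectified main/sub image pair, assumed example values for the baseline and focal length, and uses OpenCV's semi-global block matching as one possible way to obtain the disparity map described above.

```python
# Illustrative sketch (not from the patent): depth map from a main/sub image pair
# via a disparity map and Z = B * f / d. Assumes rectified grayscale inputs.
import cv2
import numpy as np

def depth_from_stereo(main_gray, sub_gray, baseline_m=0.012, focal_px=1400.0):
    # Semi-global block matching is one common way to build the disparity map.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(main_gray, sub_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1                  # avoid division by zero for unmatched pixels
    depth_m = (baseline_m * focal_px) / disparity    # Z = B * f / d, per the triangulation relation
    return depth_m

# Usage: depth = depth_from_stereo(cv2.imread("main.png", 0), cv2.imread("sub.png", 0))
```
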
  • Specifically, in the embodiment of the present application, the depth-of-field information of the same object in the main image and the sub-image is calculated from the main image captured by the main camera and the sub-image captured by the sub-camera, and the main image serves as the base image for final imaging. To avoid inaccurate depth-of-field calculation caused by a large difference between the main image and the sub-image, and to avoid a poor final imaging effect caused by an unclear main image, the main camera is controlled to capture multiple sets of main images while the sub-camera is controlled to capture multiple sets of sub-images, so that an optimal selection can be made among the multiple sets of main images and sub-images, improving the accuracy of the depth-of-field calculation and the final imaging effect.
  • It should be noted that, when the lighting conditions are good, both the main camera and the sub-camera produce good images. Therefore, in such a scenario, in order to improve shooting efficiency, the number of sets of main images and sub-images to be captured can be reduced, for example only one set of main and sub-images is taken; when the light is sufficient, one set of main images and one set of sub-images already meets the requirements on the accuracy of the depth-of-field information and on image sharpness, and capturing only one set also improves image processing efficiency.
  • In one embodiment of the present application, the brightness of the shooting environment is detected, for example by a light sensor. If the detected brightness is less than a preset threshold, this indicates that the current ambient brightness may affect the imaging effect of the terminal device, and therefore the main camera and the sub-camera are controlled to simultaneously capture multiple sets of main images and multiple sets of sub-images. The preset threshold may be a reference brightness value, determined from a large amount of experimental data, used to judge whether the ambient brightness affects imaging; the preset threshold may also be related to the imaging hardware of the terminal device, and the better the sensitivity of the imaging hardware, the lower the preset threshold.
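
As an illustration of this brightness check (not taken from the patent; the threshold value and the frame count below are assumed example values), the decision could be sketched as:

```python
# Illustrative sketch: decide how many main/sub image sets to capture based on
# ambient brightness. The threshold and frame count are assumed example values.
def sets_to_capture(ambient_brightness, preset_threshold=50.0, low_light_sets=4):
    if ambient_brightness < preset_threshold:
        # Dim scene: capture several sets so a sharp reference frame can be chosen
        # and multi-frame noise reduction can be applied.
        return low_light_sets
    # Bright scene: one set already meets depth-accuracy and sharpness requirements.
    return 1
```
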
  • Step 102: acquire a reference main image from the multiple sets of main images, and acquire a reference sub-image taken in the same group as the reference main image from the multiple sets of sub-images.
  • Step 103: perform synthesis noise reduction on the multiple sets of main images by the first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the (N-1) target images in one-to-one correspondence with those threads.
  • It can be understood that the image information of a main image and a sub-image belonging to the same group, taken at the same point in time, is relatively close, so calculating the depth-of-field information from a main image and a sub-image of the same group ensures that the acquired depth information is more accurate.
  • Therefore, the reference main image is selected from the multiple sets of main images, and the reference sub-image taken in the same group as the reference main image is selected from the multiple sets of sub-images. It should be emphasized that, during actual shooting, the main camera and the sub-camera capture multiple sets of images at the same frequency, and a main image and a sub-image taken at the same moment belong to the same group. For example, in chronological order, the multiple sets of main images captured by the main camera include main image 11, main image 12, and so on, and the multiple sets of sub-images captured by the sub-camera include sub-image 21, sub-image 22, and so on; main image 11 and sub-image 21 form one group, main image 12 and sub-image 22 form another group, and so on. To further improve the efficiency and accuracy of depth information acquisition, a reference main image with higher sharpness may be selected from the multiple sets of main images; of course, when the number of image frames in the acquired image groups is large, in order to improve selection efficiency, several candidate main images and their corresponding sub-images may first be preliminarily selected according to image sharpness and the like, and the reference main image and the corresponding reference sub-image are then selected from these higher-definition candidates.
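
One common sharpness measure that could serve as the "higher sharpness" criterion mentioned above is the variance of the Laplacian; the following sketch is an assumption for illustration, not the patent's stated method, and selects the sharpest main image together with the sub-image captured in the same group.

```python
# Illustrative sketch: pick the reference main image (and its same-group sub-image)
# by a simple sharpness score. Variance of the Laplacian is an assumed criterion.
import cv2

def select_reference(main_images, sub_images):
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    best = max(range(len(main_images)), key=lambda i: sharpness(main_images[i]))
    return main_images[best], sub_images[best]   # same group = same capture index
```
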
  • Further, the first thread performs synthesis noise reduction on the multiple sets of main images to generate the target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the target image corresponding to each thread, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2. N may be set according to the amount of computation required for the depth-of-field information of each target image, so that the depth computation load of each thread is as balanced as possible.
  • Performing noise reduction synthesis on the multiple sets of main images makes the details of the target main image clear and the image quality higher, so the effect of subsequent processing on the image is better.
  • In addition, the calculation of the depth-of-field information is further divided among (N-1) parallel threads, each thread obtaining the depth-of-field information of its corresponding target image. This not only allows the depth-of-field information of each target object image to be acquired in a targeted manner, which facilitates further image processing of that target image, but also computes the depth-of-field information of multiple target objects simultaneously with multiple parallel threads, further hiding the depth-of-field computation time within the multi-frame noise reduction time and improving image processing efficiency.
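
The division of work described here, one thread for multi-frame noise reduction and (N-1) threads each computing the depth-of-field information of one target image, can be sketched with a standard thread pool. This is only a structural illustration; the helper functions merge_noise_reduce and depth_for_target are assumed placeholders, not the patent's implementation.

```python
# Illustrative sketch: run multi-frame noise reduction in one thread while (N-1)
# threads each compute depth-of-field information for one target image.
from concurrent.futures import ThreadPoolExecutor

def process(main_images, sub_images, ref_main, ref_sub, targets,
            merge_noise_reduce, depth_for_target):
    n = 1 + len(targets)                        # N = 1 noise-reduction thread + (N-1) depth threads
    with ThreadPoolExecutor(max_workers=n) as pool:
        main_future = pool.submit(merge_noise_reduce, main_images)       # first thread
        depth_futures = {t: pool.submit(depth_for_target, ref_main, ref_sub, t)
                         for t in targets}                               # (N-1) threads, one per target
        target_main = main_future.result()
        depth_info = {t: f.result() for t, f in depth_futures.items()}
    return target_main, depth_info
```
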
  • Multi-frame synthesis noise reduction of the main image is described below for a scene with poor lighting conditions. When ambient light is insufficient, imaging devices such as terminal devices generally shoot by automatically increasing the sensitivity, but this way of increasing the sensitivity results in more noise in the image.
  • Multi-frame synthesis noise reduction is used to reduce the noise points in the image and improve the quality of images captured at high sensitivity. Its principle relies on the prior knowledge that noise points are randomly arranged: after multiple frames are captured in succession, the noise appearing at the same position may be a red noise point, a green noise point, a white noise point, or no noise at all, which provides a basis for comparison and screening. Based on the value of each pixel at the same position in the multiple sets of captured images (the value of a pixel point here reflects the number of pixels it contains: the more pixels it contains, the higher its value and the clearer the corresponding image), the pixels belonging to noise can be filtered out, and the noise can further be estimated and replaced with suitable pixel values according to a noise-removal algorithm. Through such processing, a noise reduction effect with extremely low loss of image quality can be achieved.
  • For example, as a relatively simple multi-frame synthesis noise reduction method, the values of the pixels at the same position in the multiple sets of captured images may be read and a weighted average of these pixel values calculated to produce the value of the pixel at that position in the synthesized image. In this way, a clear image can be obtained.
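
The weighted-average synthesis described above could be sketched as follows; the equal default weights are an assumed simplification (a real pipeline would typically align the frames and weight them, for example by sharpness, first).

```python
# Illustrative sketch: simple multi-frame synthesis noise reduction by (weighted)
# averaging the pixel values at the same position across the captured frames.
import numpy as np

def multi_frame_denoise(frames, weights=None):
    stack = np.stack([f.astype(np.float32) for f in frames])     # shape: (num_frames, H, W, C)
    if weights is None:
        weights = np.ones(len(frames), dtype=np.float32)         # assumed: equal weights
    weights = weights / weights.sum()
    merged = np.tensordot(weights, stack, axes=(0, 0))           # weighted average per pixel
    return np.clip(merged, 0, 255).astype(np.uint8)
```
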
  • It should be noted that the target images whose depth-of-field information needs to be acquired for further image processing may be set in advance; a target image may be a face image, an image of a specific gesture (such as a scissors hand or a cheering gesture), an image containing a famous building (such as the Great Wall or Huangshan), or an image containing objects of certain shapes (such as circular objects or triangular objects).
  • In specific implementations, the position of a target image may be determined by techniques such as image recognition and contour recognition, and the corresponding depth-of-field information is then acquired. The object corresponding to a target image may also simply be the image of one of several sub-regions into which the main image is divided, in which case the depth-of-field information of the image of each sub-region may be acquired according to the pixel positions of the division.
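
Once a target (for example, a face located by a detector, or a preset sub-region) has been found in the reference main image, its depth-of-field information can be summarized from the depth map, for instance as in the sketch below, where the rectangular region format and the median statistic are assumptions for illustration.

```python
# Illustrative sketch: summarize the depth-of-field information of a located
# target region, given its bounding box in the reference main image.
import numpy as np

def region_depth(depth_map, box):
    x, y, w, h = box                            # assumed (x, y, width, height) rectangle
    patch = depth_map[y:y + h, x:x + w]
    return float(np.median(patch))              # median depth is robust to stray pixels
```
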
  • Step 104: acquire corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
  • Specifically, the corresponding (N-1) target regions are acquired in the target main image according to the depth-of-field information of the (N-1) target images, each target region containing its target object, and corresponding image processing is then performed on the (N-1) target regions according to the preset image processing parameters corresponding to the (N-1) target images, so that each target region receives its own image processing. The image processing parameters corresponding to the (N-1) target images may include beautification parameters, blurring parameters, filter-adding parameters and the like; the parameters corresponding to the (N-1) target images may be the same or different, which flexibly meets the user's personalized image processing needs. The image processing parameters corresponding to the (N-1) target images may be generated by the system according to the current shooting scene, or may be set by the user, that is, the personalized image processing parameters, set by the user, corresponding to each target image are acquired.
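
One way to represent the preset image processing parameters corresponding to each target image is a simple mapping from a target to an operation and its parameters, applied region by region. The sketch below is illustrative only; the operation names and parameter keys are assumptions rather than the patent's interface.

```python
# Illustrative sketch: apply per-target processing parameters to their target regions.
# Operation names and parameter keys are assumed examples, not the patent's API.
import cv2

def apply_region_params(target_main, regions, params):
    out = target_main.copy()
    for name, (x, y, w, h) in regions.items():
        p = params.get(name, {})
        roi = out[y:y + h, x:x + w]
        if p.get("op") == "blur":
            roi[:] = cv2.GaussianBlur(roi, (0, 0), sigmaX=p.get("sigma", 3))
        elif p.get("op") == "smooth_skin":           # crude stand-in for a beauty filter
            roi[:] = cv2.bilateralFilter(roi, 9, 75, 75)
    return out

# params example: {"face_A": {"op": "smooth_skin"}, "building_1": {"op": "blur", "sigma": 2}}
```
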
  • In one embodiment of the present application, the background area of each target region may be blurred according to the depth-of-field information of the corresponding target image, that is, the (N-1) background regions corresponding to the (N-1) target regions are respectively blurred according to the depth-of-field information of the (N-1) target images.
  • The manner of blurring the (N-1) background regions corresponding to the (N-1) target regions according to the depth-of-field information of the (N-1) target images includes, but is not limited to, the following:
  • As a possible implementation, a blurring degree is determined for each of the (N-1) background regions, and the corresponding background region is blurred accordingly; this can be realized in different ways. For example, a blurring coefficient may be obtained for each pixel: the blurring coefficient is related to the blurring intensity of the background region, and the larger the blurring coefficient, the stronger the blurring. The blurring coefficient of each pixel can be calculated from the blurring intensity of the background region corresponding to the target region and the depth-of-field information of that pixel in the background region, and the background region of the target region is then blurred according to the blurring coefficient of each pixel.
  • As another possible implementation, a correspondence between the blurring intensity and the difference between the depth-of-field information of the target region and that of the background region may be stored in advance, where the larger the difference between the depth-of-field information of the target region and of the background region, the larger the blurring intensity applied to the background region; the blurring intensity corresponding to the measured difference is then looked up, and the corresponding background region is blurred with that intensity.
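
The second manner above, deriving the blurring intensity from the difference between the depth-of-field information of the target region and of the background, could be sketched as follows; the normalization of the depth difference and the per-pixel blending are assumed details, not the patent's stated mapping.

```python
# Illustrative sketch: blur the background according to the depth difference between
# the target region and each background pixel. The intensity mapping is an assumption.
import cv2
import numpy as np

def blur_background(image, depth_map, target_depth, target_mask, max_sigma=8.0):
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=max_sigma)
    diff = np.abs(depth_map - target_depth)
    strength = np.clip(diff / (diff.max() + 1e-6), 0.0, 1.0)   # larger depth difference -> stronger blur
    strength[target_mask] = 0.0                                # keep the target region sharp
    strength = strength[..., None]                             # broadcast over color channels
    out = image.astype(np.float32) * (1 - strength) + blurred.astype(np.float32) * strength
    return out.astype(np.uint8)
```
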
  • It should be emphasized that the image processing method of the embodiments of the present application not only splits the depth-of-field calculation across multiple threads for parallel computation, but also runs the depth-of-field calculation in parallel with the generation of the target main image, thereby improving image processing efficiency. This is particularly significant when the captured scene contains multiple target objects, especially when different image processing operations are applied to multiple different target images, and flexibly meets the user's personalized image processing requirements.
  • the following is an example of a specific application scenario:
  • Scenario one: the target images are face images, and the current scene is a group photo scene containing the face images of three users A, B and C, where the operation corresponding to the image processing parameters is beautification. In this case, the depth-of-field information of the face images of users A, B and C is acquired by three separate threads.
  • The main camera and the sub-camera are controlled to capture simultaneously, and four frames of main images and four frames of sub-images are acquired, where the four main images are numbered 11, 12, 13 and 14 in shooting order and the four sub-images are numbered 21, 22, 23 and 24.
  • The reference main image selected from the multiple sets of main images is main image 12, and the sub-image captured in the same group as the reference main image, namely sub-image 22, is selected from the multiple sets of sub-images. The first thread then performs synthesis noise reduction on the multiple sets of main images to generate the target main image, while three threads respectively acquire, according to the reference main image and the reference sub-image, the depth-of-field information corresponding to the face images of users A, B and C.
  • Further, the three target regions containing the face images are acquired in the target main image according to the depth-of-field information of the three face images, and the three face images are beautified according to the preset beautification parameters corresponding to them. For example, if user A is a young female user, face-slimming, whitening, blemish-removal and eye-enlarging beautification is applied to user A; if user B is a young male user, whitening and blemish-removal beautification is applied to user B; and if user C is a male child, no beautification is applied to user C. Alternatively, a pig-nose effect may be added for user A, a beard effect for user B, and a frog-eyes effect for user C. In this way, not only does each of users A, B and C receive suitable beautification, but image processing efficiency is also improved.
  • Scenario two: the target images are a building and a portrait, and the current scene is a travel scene containing building 1 and a person image, where the operation corresponding to the image processing parameters is blurring. In this case, the depth-of-field information of the images corresponding to user A and building 1 is acquired by two separate threads.
  • The main camera and the sub-camera are controlled to capture simultaneously, and four frames of main images and four frames of sub-images are acquired, where the four main images are numbered 11, 12, 13 and 14 in shooting order and the four sub-images are numbered 21, 22, 23 and 24. The reference main image selected from the multiple sets of main images is main image 12, and the sub-image captured in the same group as the reference main image, namely sub-image 22, is selected from the multiple sets of sub-images. The first thread then performs synthesis noise reduction on the multiple sets of main images to generate the target main image, while two threads respectively acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the images corresponding to user A and building 1.
  • The corresponding target regions are acquired in the target main image according to the depth-of-field information of the images corresponding to user A and building 1. Further, in order to highlight the current photographing subject, the target region of user A is not blurred, a low-intensity blurring is applied to building 1, and a strong blurring is applied to the background regions corresponding to the target region of user A and the target region of building 1. This not only highlights the photographing subject but also satisfies the user's need to photograph both the person and the building. Moreover, image processing efficiency is improved, since computing the depth-of-field information of the images corresponding to user A and building 1 in parallel avoids a long image processing time, and applying appropriate blurring to user A and building 1 improves the visual effect of the image processing.
  • In conclusion, the image processing method of the embodiments of the present application controls the main camera to capture multiple sets of main images while controlling the sub-camera to capture multiple sets of sub-images, acquires a reference main image from the multiple sets of main images and a reference sub-image captured in the same group as the reference main image from the multiple sets of sub-images, and performs synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the (N-1) target images in one-to-one correspondence with those threads. It then acquires corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performs corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Thereby, corresponding image processing is applied to different regions, and image processing efficiency is improved.
  • In order to implement the above embodiments, the present application further proposes an image processing apparatus. FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 6, the image processing apparatus includes a photographing module 100, a first acquiring module 200, a first processing module 300 and a second processing module 400.
  • the photographing module 100 is configured to control the main camera to capture a plurality of sets of main images, and simultaneously control the sub-camera to take a plurality of sets of sub-images.
  • In one embodiment of the present application, the photographing module 100 includes a detecting unit 110 and a photographing unit 120, where the detecting unit 110 is configured to detect the brightness of the shooting environment, and the photographing unit 120 is configured to, when the detected brightness is less than the preset threshold, control the main camera to capture the multiple sets of main images while controlling the sub-camera to capture the multiple sets of sub-images.
  • the first obtaining module 200 is configured to acquire a reference main image from the plurality of sets of main images, and acquire a reference sub-image taken in the same group as the reference main image from the plurality of sets of sub-images.
  • The first processing module 300 is configured to perform synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the (N-1) target images in one-to-one correspondence with the (N-1) threads, where the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2. The second processing module 400 is configured to acquire corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and to perform corresponding image processing on the (N-1) target regions according to the preset image processing parameters corresponding to the (N-1) target images.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application. As shown in FIG. 8, the apparatus further includes a second acquiring module 500, where the second acquiring module 500 is configured to acquire the personalized image processing parameters, set by the user, corresponding to each target image.
  • It should be noted that the division of the above image processing apparatus into modules is for illustrative purposes only; in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of its functions.
  • In conclusion, the image processing apparatus of the embodiments of the present application controls the main camera to capture multiple sets of main images while controlling the sub-camera to capture multiple sets of sub-images, acquires a reference main image from the multiple sets of main images and a reference sub-image captured in the same group as the reference main image from the multiple sets of sub-images, generates a target main image by synthesis noise reduction in a first thread while (N-1) threads acquire the depth-of-field information of the (N-1) target images in one-to-one correspondence with those threads, then acquires corresponding (N-1) target regions in the target main image according to that depth-of-field information and performs corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Thereby, corresponding image processing is applied to different regions, and image processing efficiency is improved.
  • In order to implement the above embodiments, the present application further provides a computer device, which is any device including a memory storing a computer program and a processor running the computer program, for example a smartphone or a personal computer.
  • the computer device further includes an image processing circuit, and the image processing circuit may be implemented by using hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline.
  • Figure 9 is a schematic illustration of an image processing circuit in one embodiment. As shown in FIG. 9, for convenience of explanation, only various aspects of the image processing technique related to the embodiment of the present application are shown.
  • the image processing circuit includes an ISP processor 1040 and a control logic 1050.
  • the image data captured by imaging device 1010 is first processed by ISP processor 1040, which analyzes the image data to capture image statistical information that may be used to determine and/or control one or more control parameters of imaging device 1010.
  • The imaging device 1010 (camera) may include a camera having one or more lenses 1012 and an image sensor 1014; in order to implement the background blurring method of the present application, the imaging device 1010 includes two sets of cameras. With continued reference to FIG. 9, the imaging device 1010 can capture images of a scene simultaneously with the main camera and the sub-camera. The image sensor 1014 may include a color filter array (such as a Bayer filter); the image sensor 1014 may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1040, where the ISP processor 1040 can calculate depth-of-field information and the like based on the raw image data, provided via the sensor 1020, acquired by the image sensor 1014 of the main camera and the raw image data acquired by the image sensor 1014 of the sub-camera.
  • Sensor 1020 can provide raw image data to ISP processor 1040 based on sensor 1020 interface type.
  • the sensor 1020 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • the ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats.
  • For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the ISP processor 1040 may perform one or more image processing operations on the raw image data and collect statistical information about the image data, where the image processing operations may be performed with the same or different bit-depth precision.
  • ISP processor 1040 can also receive pixel data from image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing.
  • Image memory 1030 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • When receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 can perform one or more image processing operations, such as temporal filtering.
  • the processed image data can be sent to image memory 1030 for additional processing prior to being displayed.
  • the ISP processor 1040 receives the processed data from the image memory 1030 and performs image data processing in the original domain and in the RGB and YCbCr color spaces.
  • the processed image data may be output to display 1070 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of ISP processor 1040 can also be sent to image memory 1030, and display 1070 can read image data from image memory 1030.
  • image memory 1030 can be configured to implement one or more frame buffers.
  • The output of the ISP processor 1040 can also be sent to the encoder/decoder 1060 to encode/decode the image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 1070 device.
  • Encoder/decoder 1060 can be implemented by a CPU or GPU or coprocessor.
  • the statistics determined by the ISP processor 1040 can be sent to the control logic 1050 unit.
  • the statistical data may include image sensor 1014 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 1012 shading correction, and the like.
  • The control logic 1050 may include a processor and/or a microcontroller executing one or more routines (such as firmware), and the one or more routines may determine control parameters of the imaging device 1010 and control parameters of the ISP processor 1040 based on the received statistical data.
  • the control parameters may include sensor 1020 control parameters (eg, gain, integration time for exposure control), camera flash control parameters, lens 1012 control parameters (eg, focus or zoom focal length), or a combination of these parameters.
  • the ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (eg, during RGB processing), as well as lens 1012 shading correction parameters.
  • the present application also proposes a non-transitory computer readable storage medium that enables execution of an image processing method as in the above embodiment when instructions in the storage medium are executed by a processor.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • The computer readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, it can be implemented by any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk or the like. While the embodiments of the present application have been shown and described above, it is understood that the above-described embodiments are illustrative and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present application.

Abstract

The present application proposes a method and a device for processing an image, the method comprising: controlling a main camera to take multiple sets of main images, and meanwhile, controlling a secondary camera to take multiple sets of secondary images; obtaining a reference main image from the multiple sets of main images, and obtaining a reference secondary image taken in the same group as the reference main image from the multiple sets of secondary images; generating a target main image by performing a synthetic noise reduction process on the multiple sets of main images by a first thread and, at the same time, (N-1) multi-threads respectively obtaining, according to the reference main image and the reference secondary image, depth information of (N-1) target images corresponding one-to-one to the (N-1) multi-threads; and, according to preset image processing parameters corresponding to the (N-1) target images, performing corresponding image processing on (N-1) target areas. Thus, different image processing is performed on different areas at the same time, improving image processing efficiency, and satisfying various user processing requirements.

Description

Image processing method and device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201711276709.4, entitled "Image processing method and apparatus", filed on December 06, 2017 by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Technical field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
At present, in order to meet users' needs in production and daily life, the functions of terminal devices have become increasingly diversified, and terminal devices provide image processing functions that meet various user needs, for example a photo beautification function that satisfies the user's need to retouch portraits in photos, or a filter-adding function that beautifies the user's photos.
In the related art, after receiving the image processing function selected by the user, the terminal device applies that function uniformly to the image. In practical applications, however, the user may want different image processing applied to different regions of the image, and the above image processing method can hardly meet such personalized image processing requirements.
Summary of the invention
The present application provides an image processing method and apparatus to solve the technical problem in the prior art that it is difficult to perform corresponding image processing on different areas of an image.
An embodiment of the present application provides an image processing method, including: controlling a main camera to capture multiple sets of main images while controlling a sub-camera to capture multiple sets of sub-images; acquiring a reference main image from the multiple sets of main images, and acquiring, from the multiple sets of sub-images, a reference sub-image captured in the same group as the reference main image; performing synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2; and acquiring corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performing corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
Another embodiment of the present application provides an image processing apparatus, including: a photographing module configured to control a main camera to capture multiple sets of main images while controlling a sub-camera to capture multiple sets of sub-images; a first acquiring module configured to acquire a reference main image from the multiple sets of main images and to acquire, from the multiple sets of sub-images, a reference sub-image captured in the same group as the reference main image; a first processing module configured to perform synthesis noise reduction on the multiple sets of main images by a first thread to generate a target main image, while (N-1) threads respectively acquire, according to the reference main image and the reference sub-image, depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat and N is an integer greater than 2; and a second processing module configured to acquire corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images and to perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
A further embodiment of the present application provides a computer device including a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the image processing method described in the above embodiments of the present application.
A further embodiment of the present application provides a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the image processing method described in the above embodiments of the present application.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
The main camera is controlled to capture multiple sets of main images while the sub-camera captures multiple sets of sub-images; a reference main image is acquired from the multiple sets of main images, and a reference sub-image captured in the same group as the reference main image is acquired from the multiple sets of sub-images. A first thread performs synthesis noise reduction on the multiple sets of main images to generate a target main image, while (N-1) threads acquire, according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with those threads. Corresponding (N-1) target regions are then acquired in the target main image according to the depth-of-field information of the (N-1) target images, and corresponding image processing is performed on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Thereby, corresponding image processing is applied to different regions, and image processing efficiency is improved.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned by practice of the present application.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the principle of triangulation according to an embodiment of the present application;
FIG. 3 is a schematic diagram of depth-of-field information acquisition by dual cameras according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an image processing scenario according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a travel photographing scene according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to still another embodiment of the present application; and
FIG. 9 is a schematic diagram of an image processing circuit according to an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application are described in detail below, and examples of the embodiments are illustrated in the drawings, where the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present application, and are not to be construed as limiting the present application.
The image processing method and apparatus of the embodiments of the present application are described below with reference to the drawings.
The execution body of the image processing method and apparatus of the present application may be a terminal device, where the terminal device may be a hardware device with dual cameras, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device. The wearable device may be a smart bracelet, a smart watch, smart glasses, and the like.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the method includes:
Step 101: the main camera is controlled to capture multiple sets of main images, while the sub-camera is controlled to capture multiple sets of sub-images.
Specifically, the terminal device takes photos through a dual camera system, which calculates depth-of-field information from the main image captured by the main camera and the sub-image captured by the sub-camera. The dual camera system includes a main camera that acquires the main image of the photographed subject and a sub-camera that assists the main image in acquiring depth-of-field information; the main camera and the sub-camera may be arranged along the horizontal direction or along the vertical direction, among other layouts. In order to describe more clearly how the dual cameras acquire depth-of-field information, the principle is explained below with reference to the drawings.
In practice, the human eye resolves depth mainly through binocular vision, which is the same principle by which dual cameras resolve depth-of-field information, relying mainly on the triangulation (triangular ranging) principle shown in FIG. 2. In FIG. 2, the imaged object, the positions O_R and O_T of the two cameras, and the focal plane of the two cameras are drawn in real space; the distance between the focal plane and the plane where the two cameras are located is f, and the two cameras form images at the focal plane, thereby obtaining two captured images.
Here, P and P' are the positions of the same object in the two captured images. The distance from point P to the left boundary of its captured image is X_R, and the distance from point P' to the left boundary of its captured image is X_T. O_R and O_T are the two cameras, which lie in the same plane at a distance B from each other.
Based on the triangulation principle, the distance Z between the object in FIG. 2 and the plane where the two cameras are located satisfies the following relationship (the original equation images are not reproduced here; the formulas below are the standard triangulation relations implied by the surrounding definitions):

    (B - (X_R - X_T)) / B = (Z - f) / Z

From this, it can be derived that

    Z = (B · f) / (X_R - X_T) = (B · f) / d

where d is the difference between the positions of the same object in the two captured images. Since B and f are fixed values, the distance Z of the object can be determined from d.
Of course, besides triangulation, other methods may also be used to calculate the depth-of-field information of the main image. For example, when the main camera and the sub-camera photograph the same scene, the distance between an object in the scene and the cameras is proportional to the displacement difference, posture difference, and the like between the images formed by the main camera and by the sub-camera. Therefore, in one embodiment of the present application, the above distance Z may be obtained according to this proportional relationship.
For example, as shown in FIG. 3, a map of the point-by-point differences is calculated from the main image acquired by the main camera and the sub-image acquired by the sub-camera; this map is represented as a disparity map, which records the displacement differences of the same points in the two images. Since the displacement difference in triangulation is directly related to Z, the disparity map is often used directly as the depth-of-field map.
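As an illustration of this relation, the sketch below converts a disparity map into a depth map using Z = B·f/d, assuming the disparity values (in pixels), the camera baseline B, and the focal length f are already available; the function and parameter names are hypothetical, and the handling of invalid pixels is an assumption.

```python
import numpy as np

def disparity_to_depth(disparity, baseline_mm, focal_px, min_disparity=0.5):
    """Convert a disparity map (pixel displacement between main and sub image)
    into a depth map using Z = B * f / d.

    Pixels whose disparity falls below min_disparity are treated as invalid
    and mapped to infinite depth.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity >= min_disparity
    depth[valid] = baseline_mm * focal_px / disparity[valid]
    return depth

# Example: a 2x2 disparity map with baseline B = 20 mm and focal length f = 1400 px.
d = np.array([[7.0, 14.0],
              [0.0, 28.0]])
print(disparity_to_depth(d, baseline_mm=20.0, focal_px=1400.0))
```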
Specifically, in the embodiments of the present application, the depth-of-field information of the same object in the main image and the sub-image is calculated from the main image captured by the main camera and the sub-image captured by the sub-camera, and the main image serves as the base image of the actual image that is finally formed. To avoid an inaccurate depth-of-field calculation caused, for example, by a large difference between the main image and the sub-image, or a poor final imaging effect caused by an unclear main image, the main camera is controlled to capture multiple groups of main images while the sub-camera is controlled to capture multiple groups of sub-images, so that an optimal selection can be made among the multiple groups of main images and sub-images, improving both the accuracy of the depth-of-field calculation and the final imaging effect.
Of course, since a camera images poorly in dim light, both the main camera and the sub-camera image well when shooting in a well-lit environment. Therefore, in such a scenario, in order to improve shooting efficiency and thereby further improve image processing efficiency, in one embodiment of the present application the number of groups of main and sub-images captured may be reduced when the ambient light is sufficiently bright, for example to a single group of main image and sub-image. On the one hand, with sufficient light, one group of main image and one group of sub-image already satisfies the requirements on depth-of-field accuracy and imaging clarity; on the other hand, capturing only one group of main image and sub-image improves image processing efficiency.
In another embodiment of the present application, the brightness of the shooting environment is detected, for example by a light sensor. If the detected brightness is less than a preset threshold, this indicates that the current ambient brightness may affect the imaging effect of the terminal device, and therefore the main camera and the sub-camera are controlled to simultaneously capture multiple groups of main images and multiple groups of sub-images.
The preset threshold may be a reference brightness value, calibrated from a large amount of experimental data, that is used to judge whether the ambient brightness affects the imaging effect. The preset threshold may also be related to the imaging hardware of the terminal device: the better the light sensitivity of the imaging hardware, the lower the preset threshold.
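A minimal sketch of this brightness-gated capture decision is given below; the threshold value and the number of groups captured in each branch are assumptions chosen for illustration, not values disclosed by the application.

```python
LUMA_THRESHOLD = 60  # assumed reference brightness on a 0-255 scale

def groups_to_capture(ambient_luma: float) -> int:
    """Decide how many groups of main/sub images to capture.

    Below the preset threshold the scene is treated as dim, so multiple
    groups are captured for composite noise reduction and frame selection;
    otherwise a single group is considered sufficient.
    """
    return 4 if ambient_luma < LUMA_THRESHOLD else 1

print(groups_to_capture(25))   # dim scene    -> 4 groups
print(groups_to_capture(180))  # bright scene -> 1 group
```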
Step 102: acquire a reference main image from the multiple groups of main images, and acquire, from the multiple groups of sub-images, a reference sub-image captured in the same group as the reference main image.
Step 103: perform composite noise reduction on the multiple groups of main images through a first thread to generate a target main image, and at the same time acquire, through (N-1) further threads and according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, where the (N-1) target images corresponding to the (N-1) threads do not overlap and N is an integer greater than 2.
From the above analysis, when the dual cameras acquire depth-of-field information, the positions of the same object in the different captured images need to be obtained. Therefore, if the two images used for acquiring the depth-of-field information are close to each other, the efficiency and accuracy of the depth-of-field acquisition are improved.
It can be understood that, in the embodiments of the present application, since the main camera and the sub-camera capture multiple groups of main images and sub-images simultaneously, the image information of a main image and a sub-image that belong to the same group and were captured at the same point in time is close, and calculating the depth-of-field information from such a source main image and sub-image ensures that the acquired depth-of-field information is accurate.
Specifically, the reference main image is selected from the multiple groups of main images, and the reference sub-image captured in the same group as the reference main image is selected from the multiple groups of sub-images. It should be emphasized that, during actual shooting, the main images and the sub-images are captured in groups at the same frequency, and a main image and a sub-image captured at the same moment belong to the same group. For example, in chronological order, the multiple groups of main images captured by the main camera include main image 11, main image 12, ..., and the multiple groups of sub-images captured by the sub-camera include sub-image 21, sub-image 22, ...; main image 11 and sub-image 21 then form one group, main image 12 and sub-image 22 form another group, and so on. To further improve the efficiency and accuracy of the depth-of-field acquisition, a reference main image with higher sharpness may be selected from the multiple groups of main images. Of course, when the acquired image groups contain many frames, several main image frames and the corresponding sub-image frames may first be preliminarily selected according to image sharpness or the like in order to improve selection efficiency, and the reference main image and the corresponding reference sub-image are then selected from those sharper main image frames and their corresponding sub-image frames.
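The sketch below illustrates one way to pick the reference pair by sharpness, assuming the main and sub frames of each group are stored in two aligned lists; the gradient-energy sharpness score is an assumption, and any other sharpness measure (e.g. Laplacian variance) could be substituted.

```python
import numpy as np

def sharpness(image) -> float:
    """Gradient-energy sharpness score: variance of the horizontal and
    vertical gradients of the (grayscale) image."""
    gray = np.asarray(image, dtype=np.float64)
    gy, gx = np.gradient(gray)
    return float(np.var(gx) + np.var(gy))

def pick_reference_pair(main_frames, sub_frames):
    """Select the sharpest main frame and the sub frame captured in the same group."""
    idx = max(range(len(main_frames)), key=lambda i: sharpness(main_frames[i]))
    return main_frames[idx], sub_frames[idx], idx

# Example with random frames standing in for the captured groups.
rng = np.random.default_rng(0)
mains = [rng.integers(0, 256, (120, 160)) for _ in range(4)]
subs  = [rng.integers(0, 256, (120, 160)) for _ in range(4)]
ref_main, ref_sub, group_index = pick_reference_pair(mains, subs)
print("reference group:", group_index)
```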
Further, since the calculation of depth-of-field information is time-consuming, the multiple groups of main images are subjected to composite noise reduction through the first thread to generate the target main image, while the (N-1) further threads each acquire, according to the reference main image and the reference sub-image, the depth-of-field information of the target image corresponding to that thread. The (N-1) target images corresponding to the (N-1) threads do not overlap, N is an integer greater than 2, and N may be set according to the computational cost of the depth-of-field information of each target image, so that the computational load of the threads is as balanced as possible.
In this way, on the one hand, composite noise reduction is performed on the multiple groups of main images while the depth-of-field information is being calculated, so that the target main image is already available and, once the depth-of-field information has been acquired, the corresponding image processing can be performed directly according to the depth-of-field information and the target main image; compared with first acquiring the depth-of-field information and only then performing noise reduction on the main image, the image processing efficiency is improved. On the other hand, performing noise-reduction synthesis on multiple groups of main images makes the details of the target main image clear and the image quality high, so the processed image looks better. In yet another aspect, the calculation of the depth-of-field information is further subdivided into (N-1) parallel threads, each acquiring the depth-of-field information of its corresponding target image. Not only can the depth-of-field information of each target object image be acquired in a targeted manner, so that separate image processing can subsequently be applied to that target image, but multiple parallel threads also calculate the depth-of-field information of multiple target objects simultaneously, which further shortens the gap between the depth-of-field calculation time and the multi-frame noise reduction time and improves image processing efficiency.
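A minimal threading sketch of this arrangement is shown below: one worker performs the composite noise reduction while (N-1) workers each estimate the depth of one target region from the reference pair. The two worker functions are simplified stand-ins (a plain average and a toy per-region statistic), and all names and the box-based target description are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def composite_denoise(main_frames):
    """Stand-in for multi-frame composite noise reduction (plain average)."""
    return np.mean(np.stack([np.asarray(f, dtype=np.float64) for f in main_frames]), axis=0)

def depth_for_target(ref_main, ref_sub, box):
    """Stand-in for per-target depth estimation from the reference pair."""
    y0, y1, x0, x1 = box
    return float(np.mean(np.asarray(ref_main, dtype=np.float64)[y0:y1, x0:x1])
                 - np.mean(np.asarray(ref_sub, dtype=np.float64)[y0:y1, x0:x1]))

def process(main_frames, sub_frames, ref_idx, target_boxes):
    ref_main, ref_sub = main_frames[ref_idx], sub_frames[ref_idx]
    with ThreadPoolExecutor(max_workers=1 + len(target_boxes)) as pool:
        denoise_job = pool.submit(composite_denoise, main_frames)           # first thread
        depth_jobs = [pool.submit(depth_for_target, ref_main, ref_sub, b)   # (N-1) threads
                      for b in target_boxes]
        target_main = denoise_job.result()
        depths = [job.result() for job in depth_jobs]
    return target_main, depths

rng = np.random.default_rng(1)
mains = [rng.integers(0, 256, (64, 64)) for _ in range(4)]
subs  = [rng.integers(0, 256, (64, 64)) for _ in range(4)]
target_main, depths = process(mains, subs, ref_idx=1,
                              target_boxes=[(0, 32, 0, 32), (32, 64, 32, 64)])
print(target_main.shape, depths)
```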
To give a clear understanding of the multi-frame composite noise reduction process, the multi-frame composite noise reduction of the main images is described below for a scene with poor lighting conditions.
When the ambient light is insufficient, imaging devices such as terminal devices generally shoot by automatically increasing the sensitivity. However, raising the sensitivity in this way introduces more noise into the image. Multi-frame composite noise reduction is intended to reduce the noise points in the image and to improve the quality of images captured at high sensitivity. It relies on the prior knowledge that noise is randomly distributed: after several groups of images have been captured in succession, the noise appearing at a given position may be a red noise point, a green noise point, a white noise point, or no noise at all, which provides a basis for comparison and screening. The pixels that belong to noise (i.e., the noise points) can be screened out according to the values of the pixels at the same position in the multiple groups of captured images (the value of a pixel reflects how much signal it contains; the more it contains, the higher the value of the pixel and the clearer the corresponding image).
Further, after the noise points have been screened out, color estimation and pixel replacement may additionally be performed on the noise points according to a further algorithm, so as to remove them. Through such a process, a noise reduction effect with extremely low loss of image quality can be achieved.
For example, as a relatively simple multi-frame composite noise reduction method, after the multiple groups of captured images have been acquired, the values of the pixels at the same position in the multiple groups of captured images may be read, and a weighted average of these pixel values may be calculated to generate the value of the pixel at that position in the composite image. In this way, a clear image can be obtained.
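The weighted-average variant can be sketched as below, assuming the frames are already aligned and of identical shape; the per-frame weights are an assumption supplied by the caller (for example, a higher weight for the sharpest frame).

```python
import numpy as np

def multiframe_denoise(frames, weights=None):
    """Fuse aligned frames by taking, at every pixel position, the weighted
    average of the values of that pixel across all frames."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    if weights is None:
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    fused = np.tensordot(weights, stack, axes=1)   # weighted average per pixel
    return np.clip(fused, 0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
clean = rng.integers(0, 256, (48, 48)).astype(np.float64)
noisy_frames = [clean + rng.normal(0, 20, clean.shape) for _ in range(4)]
print(multiframe_denoise(noisy_frames, weights=[1, 2, 1, 1]).shape)
```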
In addition, the target images for which depth-of-field information needs to be acquired, so that further image processing can be performed according to the depth-of-field information, are set in advance. A target image may be a face image, an image of a specific gesture (such as a scissors-hand or a cheering gesture), an image of a famous building (such as the Great Wall or Mount Huangshan), or an image of an object of a particular shape (such as a circular or triangular object).
When the object corresponding to a target image is a specific object as in the above examples, the location of the target image can be determined through techniques such as image recognition and contour recognition, and the corresponding depth-of-field information can then be acquired. When the objects corresponding to the target images are simply the images of a plurality of sub-regions into which the main image has been divided, the depth-of-field information of the images corresponding to the plurality of sub-regions can be acquired according to the divided pixel positions.
Step 104: acquire the corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
Specifically, the corresponding (N-1) target regions, each containing a target object, are acquired in the target main image according to the depth-of-field information of the (N-1) target images, and corresponding image processing is then performed on the (N-1) target regions according to the preset image processing parameters corresponding to the (N-1) target images, so that each target region receives its own image processing. The image processing parameters corresponding to the (N-1) target images may include beautification parameters, blurring parameters, filter parameters, and the like; these parameters may be the same or different across the (N-1) target images, flexibly satisfying the user's various image processing needs. In one embodiment of the present application, the image processing parameters corresponding to the (N-1) target images may be generated by the system through learning from the current shooting scene, or may be set by the user, i.e., personalized image processing parameters set by the user for each target image are acquired.
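A minimal sketch of applying a distinct parameter set to each target region is given below; the parameter keys ("brighten", "smooth") and their effects are crude stand-ins for the beautification, blurring, and filter parameters mentioned above, and the box-based region description is an assumption.

```python
import numpy as np

def apply_region_params(target_main, regions):
    """Apply per-region processing parameters to the target main image.

    `regions` maps a (y0, y1, x0, x1) box to its parameter dict; each region
    receives its own treatment independently of the others.
    """
    out = np.asarray(target_main, dtype=np.float64).copy()
    for (y0, y1, x0, x1), params in regions.items():
        patch = out[y0:y1, x0:x1]
        patch += params.get("brighten", 0.0)        # crude "beautification"
        s = params.get("smooth", 0.0)               # blend toward the patch mean
        patch[:] = (1.0 - s) * patch + s * patch.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((100, 100), 128, dtype=np.uint8)
result = apply_region_params(img, {(10, 40, 10, 40): {"brighten": 25},
                                   (50, 90, 50, 90): {"smooth": 0.8}})
print(result[20, 20], result[70, 70])
```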
Further, in one embodiment of the present application, to further improve the image processing effect, the background region of each target region may also be blurred according to the depth-of-field information of the target image, i.e., the (N-1) background regions corresponding to the (N-1) target regions are blurred according to the depth-of-field information of the (N-1) target images, respectively.
Specifically, the ways of blurring the (N-1) background regions corresponding to the (N-1) target regions according to the depth-of-field information of the (N-1) target images include, but are not limited to, the following.
As one possible implementation:
The blurring strengths of the (N-1) background regions corresponding to the (N-1) target regions are determined according to the depth-of-field information of the (N-1) target images, and the corresponding background regions are then blurred according to the blurring strengths of the (N-1) background regions. Different degrees of blurring are thus applied according to different depth-of-field information, making the blurred image look more natural and layered.
It should be noted that, depending on the application scenario, different implementations may be used to determine the blurring strengths of the (N-1) background regions corresponding to the (N-1) target regions according to the depth-of-field information of the (N-1) target images. As one possible implementation, the more accurate the depth-of-field information of a target image, the clearer the contour of the target object; in that case, blurring the background region within the target region is less likely to blur the target image by mistake, so the blurring strength corresponding to the background region may be larger. A correspondence between the calculation accuracy of the depth-of-field information of the target image and the blurring strength of the background region may therefore be established in advance, and the blurring strength of the background region is then obtained according to this correspondence.
The corresponding background regions may be blurred according to the blurring strengths of the (N-1) background regions in different ways.
Example 1:
The blurring coefficient of each pixel is obtained according to the blurring strength of the (N-1) background regions and the depth-of-field information of each pixel in the background region corresponding to the target region, where the blurring coefficient is related to the blurring strength of the background region: the larger the blurring coefficient, the higher the blurring strength. For example, the blurring coefficient of each pixel may be obtained by calculating the product of the blurring strength of the background region and the depth-of-field information of that pixel in the background region corresponding to the target region, and the background region of the target region is then blurred according to the blurring coefficient of each pixel.
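The sketch below follows Example 1 under simplifying assumptions: the blur coefficient of each background pixel is the product of the region's blurring strength and that pixel's depth value, and the "blur" itself is approximated by blending the pixel toward a flat, fully blurred copy of the image in proportion to its normalized coefficient. The function names and the blending stand-in are illustrative only.

```python
import numpy as np

def per_pixel_blur_coefficients(blur_strength, background_depth):
    """Blur coefficient = blurring strength x per-pixel depth-of-field value."""
    return blur_strength * np.asarray(background_depth, dtype=np.float64)

def blur_background(image, background_mask, coefficients):
    """Blur background pixels in proportion to their blur coefficients."""
    img = np.asarray(image, dtype=np.float64)
    blurred = np.full_like(img, img.mean())          # crude "fully blurred" image
    coeff = np.asarray(coefficients, dtype=np.float64)
    max_coeff = coeff.max() if coeff.max() > 0 else 1.0
    alpha = np.clip(coeff / max_coeff, 0.0, 1.0)
    out = img.copy()
    mixed = (1.0 - alpha) * img + alpha * blurred
    out[background_mask] = mixed[background_mask]
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64))
depth = rng.uniform(1.0, 5.0, (64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32:] = True                                   # right half is background
coeff = per_pixel_blur_coefficients(blur_strength=0.6, background_depth=depth)
print(blur_background(img, mask, coeff).shape)
```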
Example 2:
Since a larger difference between the depth-of-field information of the background region and that of the target region means that the background region is farther from, and less related to, the target region, the corresponding blurring strength should be larger. In this example, a correspondence between the difference between the depth-of-field information of the target region and that of the background region, on the one hand, and the blurring strength, on the other hand, may be stored in advance; in this correspondence, the larger the difference between the depth-of-field information of the target region and the background region, the larger the blurring strength of the background region. The difference between the target region and the background region is thus obtained, the above correspondence is queried according to the difference to obtain the blurring strengths of the corresponding (N-1) background regions, and the corresponding background regions are blurred according to the blurring strengths of the (N-1) background regions.
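Example 2 can be sketched as a simple pre-stored lookup from the depth difference to a blurring strength, as below; the breakpoints and strength values are assumptions chosen for illustration.

```python
# (depth-difference upper bound, blurring strength); values are illustrative only.
DIFF_TO_STRENGTH = [(0.5, 0.1), (1.5, 0.4), (3.0, 0.7), (float("inf"), 1.0)]

def blur_strength_for(target_depth: float, background_depth: float) -> float:
    """The larger the depth difference between background and target,
    the stronger the blurring applied to that background region."""
    diff = abs(background_depth - target_depth)
    for upper, strength in DIFF_TO_STRENGTH:
        if diff <= upper:
            return strength
    return DIFF_TO_STRENGTH[-1][1]

print(blur_strength_for(1.2, 1.4))   # small difference -> light blur
print(blur_strength_for(1.2, 6.0))   # large difference -> strong blur
```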
Thus, the image processing method of the embodiments of the present application not only splits the calculation of the depth-of-field information into multiple threads running in parallel, but also performs the depth-of-field calculation in parallel with the acquisition of the target main image, which improves image processing efficiency. It is particularly significant for scenes that contain multiple target objects, especially when several different target objects are present and different image processing operations are performed on the respective target images, flexibly satisfying the user's personalized image processing needs. To illustrate the effect of the image processing of the embodiments of the present application more clearly, examples with specific application scenarios are given below.
First scenario:
In this scenario, the target images are face images, and the current shooting scene is a group photo containing the face images of three users A, B, and C; the operation corresponding to the image processing parameters is beautification. In this example, three threads are used to acquire the depth-of-field information of the face images of users A, B, and C, respectively.
As shown in FIG. 4, when a photographing instruction is received, the main camera and the sub-camera are controlled to shoot simultaneously, acquiring 4 frames of main images and 4 frames of sub-images, where, in shooting order, the 4 frames of main images are numbered 11, 12, 13, and 14, and the 4 frames of sub-images are numbered 21, 22, 23, and 24.
The reference main image 12 is selected from the multiple groups of main images, and the sub-image 22, captured in the same group as the reference main image, is selected from the multiple groups of sub-images. Composite noise reduction is then performed on the multiple groups of main images through the first thread to generate the target main image, while three further threads acquire the depth-of-field information corresponding to the face images of users A, B, and C, respectively, according to the reference main image and the reference sub-image.
Then, the three corresponding target regions containing the face images are acquired in the target main image according to the depth-of-field information of the three face images, and beautification is applied to the three face images according to the preset beautification parameters corresponding to the three face images. For example, if user A is a young female user, face slimming, skin whitening, blemish removal, and eye brightening are applied to user A; if user B is a young male user, skin whitening and blemish removal are applied to user B; and if user C is a male child, no beautification is applied to user C. Alternatively, a pig-nose effect may be added for user A, a beard effect for user B, and a frog-eye effect for user C. In this way, users A, B, and C each receive suitable beautification, and the image processing efficiency is improved.
Second scenario: in this scenario, the target images are a building and a portrait, and the current scene is a travel scene containing building 1 and person image 2; the operation corresponding to the image processing parameters is blurring. In this example, two threads are used to acquire the depth-of-field information of the images corresponding to user A and building 1, respectively.
As shown in FIG. 5, when a photographing instruction is received, the main camera and the sub-camera are controlled to shoot simultaneously, acquiring 4 frames of main images and 4 frames of sub-images, where, in shooting order, the 4 frames of main images are numbered 11, 12, 13, and 14, and the 4 frames of sub-images are numbered 21, 22, 23, and 24.
The reference main image 12 is selected from the multiple groups of main images, and the sub-image 22, captured in the same group as the reference main image, is selected from the multiple groups of sub-images. Composite noise reduction is then performed on the multiple groups of main images through the first thread to generate the target main image, while two further threads acquire the depth-of-field information of the images corresponding to user A and building 1, respectively, according to the reference main image and the reference sub-image.
Then, the corresponding target regions are acquired in the target main image according to the depth-of-field information of the images corresponding to user A and building 1. To highlight the current shooting subject, no blurring is applied to the target region where user A is located, lower-strength blurring is applied to building 1, and stronger blurring is applied to the background regions corresponding to the target region of user A and to the target region of building 1. This not only highlights the shooting subject and satisfies the need for a photo of the user together with the building, but also improves image processing efficiency, avoids the lengthy processing that would be caused by computing the depth-of-field information of the images corresponding to user A and building 1 within a single thread, and gives both user A and building 1 suitable blurring, improving the visual effect of the image processing.
In summary, the image processing method of the embodiments of the present application controls the main camera to capture multiple groups of main images while controlling the sub-camera to capture multiple groups of sub-images, acquires a reference main image from the multiple groups of main images and, from the multiple groups of sub-images, a reference sub-image captured in the same group as the reference main image, performs composite noise reduction on the multiple groups of main images through a first thread to generate a target main image while (N-1) further threads acquire, according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, then acquires the corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performs corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Image processing adapted to different regions is thereby achieved, and the image processing efficiency is improved.
To implement the above embodiments, the present application further provides an image processing apparatus. FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 6, the image processing apparatus includes a photographing module 100, a first acquiring module 200, a first processing module 300, and a second processing module 400. The photographing module 100 is configured to control the main camera to capture multiple groups of main images while controlling the sub-camera to capture multiple groups of sub-images.
In one embodiment of the present application, as shown in FIG. 7, the photographing module 100 includes a detecting unit 110 and a photographing unit 120, where the detecting unit 110 is configured to detect the brightness of the shooting environment.
The photographing unit 120 is configured to, when it is detected that the brightness is less than a preset threshold, control the main camera to capture multiple groups of main images while controlling the sub-camera to capture multiple groups of sub-images.
The first acquiring module 200 is configured to acquire a reference main image from the multiple groups of main images, and to acquire, from the multiple groups of sub-images, a reference sub-image captured in the same group as the reference main image.
The first processing module 300 is configured to perform composite noise reduction on the multiple groups of main images through a first thread to generate a target main image, and at the same time to acquire, through (N-1) further threads and according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, where the (N-1) target images corresponding to the (N-1) threads do not overlap and N is an integer greater than 2.
The second processing module 400 is configured to acquire the corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and to perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
In one embodiment of the present application, FIG. 8 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present application. As shown in FIG. 8, the apparatus further includes a second acquiring module 500, configured to acquire personalized image processing parameters set by the user for each target image.
It should be noted that the foregoing description of the method embodiments also applies to the apparatus of the embodiments of the present application; the implementation principles are similar and are not repeated here.
The division of the modules in the above image processing apparatus is only for illustration; in other embodiments, the image processing apparatus may be divided into different modules as needed to perform all or part of the functions of the image processing apparatus.
In summary, the image processing apparatus of the embodiments of the present application controls the main camera to capture multiple groups of main images while controlling the sub-camera to capture multiple groups of sub-images, acquires a reference main image from the multiple groups of main images and, from the multiple groups of sub-images, a reference sub-image captured in the same group as the reference main image, performs composite noise reduction on the multiple groups of main images through a first thread to generate a target main image while (N-1) further threads acquire, according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, then acquires the corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performs corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images. Image processing adapted to different regions is thereby achieved, and the image processing efficiency is improved.
To implement the above embodiments, the present application further provides a computer device, where the computer device is any device that includes a memory storing a computer program and a processor running the computer program, for example a smart phone or a personal computer. The computer device further includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 9, for ease of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in FIG. 9, the image processing circuit includes an ISP processor 1040 and control logic 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 1010. The imaging device 1010 (camera) may include a camera with one or more lenses 1012 and an image sensor 1014; to implement the background blurring method of the present application, the imaging device 1010 includes two sets of cameras, and, with continued reference to FIG. 9, the imaging device 1010 can capture images of a scene simultaneously with the main camera and the sub-camera. The image sensor 1014 may include a color filter array (such as a Bayer filter); the image sensor 1014 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1040. The ISP processor 1040 can calculate the depth-of-field information and the like based on the raw image data acquired by the image sensor 1014 of the main camera and the raw image data acquired by the image sensor 1014 of the sub-camera, both provided via the sensor 1020. The sensor 1020 can provide the raw image data to the ISP processor 1040 according to the sensor 1020 interface type; the sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
The ISP processor 1040 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1040 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 1040 may also receive pixel data from the image memory 1030. For example, raw pixel data is sent from the sensor 1020 interface to the image memory 1030, and the raw pixel data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.
When receiving raw image data from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for further processing before being displayed. The ISP processor 1040 receives the processed data from the image memory 1030 and processes that data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1070 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1070 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. Moreover, the output of the ISP processor 1040 may be sent to an encoder/decoder 1060 to encode or decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 1070. The encoder/decoder 1060 may be implemented by a CPU, a GPU, or a coprocessor.
The statistics determined by the ISP processor 1040 may be sent to the control logic 1050. For example, the statistics may include image sensor 1014 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 1012 shading correction. The control logic 1050 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines may determine, according to the received statistics, the control parameters of the imaging device 1010 and the control parameters of the ISP processor 1040. For example, the control parameters may include sensor 1020 control parameters (such as gain and integration time for exposure control), camera flash control parameters, lens 1012 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 1012 shading correction parameters.
The steps of implementing the image processing method using the image processing technique of FIG. 9 are as follows:
controlling the main camera to capture multiple groups of main images while controlling the sub-camera to capture multiple groups of sub-images;
acquiring a reference main image from the multiple groups of main images, and acquiring, from the multiple groups of sub-images, a reference sub-image captured in the same group as the reference main image;
performing composite noise reduction on the multiple groups of main images through a first thread to generate a target main image, and at the same time acquiring, through (N-1) further threads and according to the reference main image and the reference sub-image, the depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, where the (N-1) target images corresponding to the (N-1) threads do not overlap and N is an integer greater than 2; and
acquiring the corresponding (N-1) target regions in the target main image according to the depth-of-field information of the (N-1) target images, and performing corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
To implement the above embodiments, the present application further provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor, the image processing method of the above embodiments can be performed.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, a person skilled in the art may combine the different embodiments or examples described in this specification, as well as the features of the different embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
A person of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the program, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present application; a person of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (14)

  1. 一种图像处理方法,其特征在于,包括:An image processing method, comprising:
    控制主摄像头拍摄多组主图像,同时控制副摄像头拍摄多组副图像;Controlling the main camera to take multiple sets of main images, and controlling the sub-camera to take multiple sets of sub-images;
    从所述多组主图像中获取参考主图像,并从所述多组副图像中获取与所述参考主图像同组拍摄的参考副图像;Acquiring a reference main image from the plurality of sets of main images, and acquiring a reference sub-image taken in the same group as the reference main image from the plurality of sets of sub-images;
    通过第一线程对所述多组主图像进行合成降噪处理生成目标主图像,同时通过(N-1)个多线程分别根据所述参考主图像和所述参考副图像,获取与所述(N-1)个多线程一一对应的(N-1)个目标图像的景深信息,其中,所述(N-1)个多线程对应的(N-1)个目标图像不重复,N为大于2的整数;Performing a composite noise reduction process on the plurality of sets of main images by a first thread to generate a target main image, and acquiring (N-1) multi-threads according to the reference main image and the reference sub-image, respectively N-1) depth-of-depth information of (N-1) target images corresponding to a multi-thread one-to-one correspondence, wherein (N-1) target images corresponding to the (N-1) multi-threads are not repeated, and N is An integer greater than 2;
    根据所述(N-1)个目标图像的景深信息在所述目标主图像中获取对应的(N-1)个目标区域,根据预设的与所述(N-1)个目标图像对应的图像处理参数对所述(N-1)个目标区域进行相应的图像处理。Obtaining corresponding (N-1) target regions in the target main image according to the depth information of the (N-1) target images, according to preset presets corresponding to the (N-1) target images The image processing parameters perform corresponding image processing on the (N-1) target areas.
  2. 如权利要求1所述的方法,其特征在于,在所述根据预设的与所述(N-1)个目标图像对应的图像处理参数对所述(N-1)个目标区域进行相应的图像处理之前,还包括:The method according to claim 1, wherein said (N-1) target regions are corresponding to said image processing parameters corresponding to said (N-1) target images according to a preset Before image processing, it also includes:
    获取用户设置的与每个目标图像对应的所述图像处理参数。The image processing parameters corresponding to each target image set by the user are acquired.
  3. 如权利要求1或2所述的方法,其特征在于,所述控制主摄像头拍摄多组主图像,同时控制副摄像头拍摄多组副图像包括:The method according to claim 1 or 2, wherein the controlling the main camera to capture the plurality of sets of main images while controlling the sub-camera to take the plurality of sets of sub-images comprises:
    检测拍摄环境的亮度;Detecting the brightness of the shooting environment;
    若检测获知所述亮度小于预设阈值,则控制主摄像头拍摄多组主图像,同时控制副摄像头拍摄多组副图像。If the detection is that the brightness is less than a preset threshold, the main camera is controlled to take a plurality of sets of main images, and the sub-camera is controlled to take a plurality of sets of sub-images.
  4. 如权利要求1-3任一所述的方法,其特征在于,还包括:The method of any of claims 1-3, further comprising:
    根据所述(N-1)个目标图像的景深信息分别对与所述(N-1)个目标区域对应的(N-1)个背景区域进行虚化处理。The (N-1) background regions corresponding to the (N-1) target regions are respectively subjected to blurring processing based on the depth information of the (N-1) target images.
  5. The method according to claim 4, wherein the performing blurring processing on the (N-1) background regions corresponding to the (N-1) target regions respectively according to the depth-of-field information of the (N-1) target images comprises:
    determining blurring intensities of the (N-1) background regions corresponding to the (N-1) target regions according to the depth-of-field information of the (N-1) target images;
    performing blurring processing on each corresponding background region according to the blurring intensities of the (N-1) background regions.
  6. The method according to any one of claims 1-5, wherein
    the main camera and the sub-camera are arranged horizontally, or
    the main camera and the sub-camera are arranged vertically.
  7. A multi-person group-photographing processing apparatus, comprising:
    a capturing module, configured to control a main camera to capture a plurality of sets of main images, and simultaneously control a sub-camera to capture a plurality of sets of sub-images;
    a first acquiring module, configured to acquire a reference main image from the plurality of sets of main images, and acquire, from the plurality of sets of sub-images, a reference sub-image captured in the same set as the reference main image;
    a first processing module, configured to perform, by a first thread, composite noise reduction processing on the plurality of sets of main images to generate a target main image, and simultaneously acquire, by (N-1) threads and respectively according to the reference main image and the reference sub-image, depth-of-field information of (N-1) target images in one-to-one correspondence with the (N-1) threads, wherein the (N-1) target images corresponding to the (N-1) threads do not repeat, and N is an integer greater than 2;
    a second processing module, configured to acquire (N-1) corresponding target regions in the target main image according to the depth-of-field information of the (N-1) target images, and perform corresponding image processing on the (N-1) target regions according to preset image processing parameters corresponding to the (N-1) target images.
  8. The apparatus according to claim 7, further comprising:
    a second acquiring module, configured to acquire the image processing parameters, set by a user, corresponding to each of the target images.
  9. The apparatus according to claim 7 or 8, wherein the capturing module comprises:
    a detecting unit, configured to detect a brightness of a shooting environment;
    a capturing unit, configured to, if it is detected that the brightness is less than a preset threshold, control the main camera to capture the plurality of sets of main images, and simultaneously control the sub-camera to capture the plurality of sets of sub-images.
  10. The apparatus according to any one of claims 7-9, further comprising:
    a blurring processing module, configured to perform blurring processing on (N-1) background regions corresponding to the (N-1) target regions respectively according to the depth-of-field information of the (N-1) target images.
  11. The apparatus according to claim 10, wherein the blurring processing module is specifically configured to:
    determine blurring intensities of the (N-1) background regions corresponding to the (N-1) target regions according to the depth-of-field information of the (N-1) target images;
    perform blurring processing on each corresponding background region according to the blurring intensities of the (N-1) background regions.
  12. The apparatus according to any one of claims 7-11, wherein
    the main camera and the sub-camera are arranged horizontally, or
    the main camera and the sub-camera are arranged vertically.
  13. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the program, the image processing method according to any one of claims 1-6 is implemented.
  14. A computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the image processing method according to any one of claims 1-6 is implemented.
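
For illustration only, not as part of the claim language: a minimal Python sketch of the parallel pipeline recited in claim 1, in which a first thread performs composite (multi-frame) noise reduction on the main images while (N-1) further threads each derive depth-of-field information for one target image from the reference main/sub-image pair. The frame averaging, the dummy depth computation, and all function names below are assumed stand-ins; the claim does not prescribe concrete algorithms.

    import threading
    import numpy as np

    def composite_noise_reduction(frames):
        # Toy multi-frame noise reduction: average the burst of main images
        # to suppress random sensor noise.
        return np.mean(np.stack(frames).astype(np.float32), axis=0)

    def depth_of_field(ref_main, ref_sub, target_index):
        # Placeholder for the per-target depth computation from the reference
        # main/sub pair (e.g. stereo disparity); returns a dummy depth map here.
        return np.zeros(ref_main.shape[:2], dtype=np.float32)

    def run_pipeline(main_frames, sub_frames, num_targets):
        # Assume the first captured set is used as the reference pair.
        ref_main, ref_sub = main_frames[0], sub_frames[0]

        out = {}
        threads = [threading.Thread(  # first thread: composite noise reduction
            target=lambda: out.__setitem__("main", composite_noise_reduction(main_frames)))]
        for i in range(num_targets):  # (N-1) threads: one depth map per target image
            threads.append(threading.Thread(
                target=lambda i=i: out.__setitem__(i, depth_of_field(ref_main, ref_sub, i))))
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return out["main"], [out[i] for i in range(num_targets)]

Because the noise reduction and the depth computations run concurrently, the per-region processing of the final claim step can begin as soon as both the target main image and the depth maps are available.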
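
Claim 3 gates the multi-frame capture on ambient brightness. The following sketch of that check assumes an 8-bit preview frame and an arbitrary threshold value chosen only for illustration:

    import numpy as np

    def should_capture_multi_frame(preview_frame, threshold=50.0):
        # Mean luma of an 8-bit preview frame as a crude brightness estimate;
        # the threshold value is an assumption, not taken from the claims.
        return float(np.mean(preview_frame)) < threshold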
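
Claims 4 and 5 derive a blurring intensity for each background region from the depth-of-field information and then blur that region accordingly. The sketch below assumes a simple linear mapping from relative depth to a Gaussian kernel size and uses OpenCV for the blur; the mapping, the mask handling, and the kernel bound are illustrative choices, not the claimed implementation.

    import cv2
    import numpy as np

    def blur_background(target_main, depth_map, target_mask, max_kernel=31):
        # target_mask: boolean map of one target region; everything outside it
        # is treated as that target's background region.
        target_depth = float(np.median(depth_map[target_mask]))
        # Relative depth offset of each pixel from the target, normalised to [0, 1].
        rel = np.clip(np.abs(depth_map - target_depth) / (depth_map.max() + 1e-6), 0.0, 1.0)
        # Assumed linear mapping from the mean background offset to an odd kernel size.
        ksize = max(1, int(round(rel[~target_mask].mean() * max_kernel)) | 1)
        blurred = cv2.GaussianBlur(target_main, (ksize, ksize), 0)
        out = target_main.copy()
        out[~target_mask] = blurred[~target_mask]  # blur only the background region
        return out

A larger depth offset between a background pixel and its target region yields a larger kernel, so more distant backgrounds are blurred more strongly, which is the intent of the blurring-intensity step in claim 5.
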
PCT/CN2018/116752 2017-12-06 2018-11-21 Method and device for processing image WO2019109805A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711276709.4 2017-12-06
CN201711276709.4A CN108111749B (en) 2017-12-06 2017-12-06 Image processing method and device

Publications (1)

Publication Number Publication Date
WO2019109805A1 true WO2019109805A1 (en) 2019-06-13

Family

ID=62209103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116752 WO2019109805A1 (en) 2017-12-06 2018-11-21 Method and device for processing image

Country Status (2)

Country Link
CN (1) CN108111749B (en)
WO (1) WO2019109805A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108111749B (en) * 2017-12-06 2020-02-14 Oppo广东移动通信有限公司 Image processing method and device
CN110956577A (en) * 2018-09-27 2020-04-03 Oppo广东移动通信有限公司 Control method of electronic device, and computer-readable storage medium
CN110298826A (en) * 2019-06-18 2019-10-01 合肥联宝信息技术有限公司 A kind of image processing method and device
CN110781759B (en) * 2019-09-29 2022-08-09 浙江大华技术股份有限公司 Information binding method and device for vehicle and driver and computer storage medium
CN111860530A (en) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 Electronic equipment, data processing method and related device
CN112148124A (en) * 2020-09-10 2020-12-29 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112819683B (en) * 2021-01-19 2023-05-26 北京格视科技有限公司 Image processing method, device, computer equipment and storage medium
CN115334235A (en) * 2022-07-01 2022-11-11 西安诺瓦星云科技股份有限公司 Video processing method, device, terminal equipment and storage medium
CN115937836A (en) * 2023-02-08 2023-04-07 江阴嘉欧新材料有限公司 Cable laying depth identification device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973978B (en) * 2014-04-17 2018-06-26 华为技术有限公司 It is a kind of to realize the method focused again and electronic equipment
US9426450B1 (en) * 2015-08-18 2016-08-23 Intel Corporation Depth sensing auto focus multiple camera system
CN105763813A (en) * 2016-04-05 2016-07-13 广东欧珀移动通信有限公司 Photographing method, device and intelligent terminal

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801910A (en) * 2011-05-27 2012-11-28 三洋电机株式会社 Image sensing device
US20140232900A1 (en) * 2011-10-11 2014-08-21 Sony Ericsson Mobile Communications Ab Light sensitive, low height, and high dynamic range camera
CN103108122A (en) * 2011-11-14 2013-05-15 卡西欧计算机株式会社 Image synthesizing apparatus and image recording method
CN105100615A (en) * 2015-07-24 2015-11-25 青岛海信移动通信技术股份有限公司 Image preview method, apparatus and terminal
CN106550184A (en) * 2015-09-18 2017-03-29 中兴通讯股份有限公司 Photo processing method and device
CN108111749A (en) * 2017-12-06 2018-06-01 广东欧珀移动通信有限公司 Image processing method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112104796A (en) * 2019-06-18 2020-12-18 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112104796B (en) * 2019-06-18 2023-10-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112785488A (en) * 2019-11-11 2021-05-11 宇龙计算机通信科技(深圳)有限公司 Image processing method and device, storage medium and terminal
CN117576247A (en) * 2024-01-17 2024-02-20 江西拓世智能科技股份有限公司 Picture generation method and system based on artificial intelligence
CN117576247B (en) * 2024-01-17 2024-03-29 江西拓世智能科技股份有限公司 Picture generation method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN108111749A (en) 2018-06-01
CN108111749B (en) 2020-02-14

Similar Documents

Publication Publication Date Title
WO2019109805A1 (en) Method and device for processing image
JP7003238B2 (en) Image processing methods, devices, and devices
CN108055452B (en) Image processing method, device and equipment
KR102306304B1 (en) Dual camera-based imaging method and device and storage medium
WO2019105262A1 (en) Background blur processing method, apparatus, and device
CN108154514B (en) Image processing method, device and equipment
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108024054B (en) Image processing method, device, equipment and storage medium
CN107945105B (en) Background blurring processing method, device and equipment
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
KR20200031168A (en) Image processing method and mobile terminal using dual cameras
WO2019105297A1 (en) Image blurring method and apparatus, mobile device, and storage medium
WO2019105261A1 (en) Background blurring method and apparatus, and device
CN108156369B (en) Image processing method and device
CN108024057B (en) Background blurring processing method, device and equipment
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
WO2019105298A1 (en) Image blurring processing method, device, mobile device and storage medium
WO2019105260A1 (en) Depth of field obtaining method, apparatus and device
CN108052883B (en) User photographing method, device and equipment
CN107911609B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18885015

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18885015

Country of ref document: EP

Kind code of ref document: A1