CN106899781B - Image processing method and electronic equipment - Google Patents


Info

Publication number
CN106899781B
CN106899781B (application CN201710128541.6A)
Authority
CN
China
Prior art keywords
image
camera
background
depth
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710128541.6A
Other languages
Chinese (zh)
Other versions
CN106899781A (en)
Inventor
Yan Ming (闫明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201710128541.6A
Publication of CN106899781A
Application granted
Publication of CN106899781B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an image processing method and an electronic device. The method is used in an electronic device provided with a first camera and a second camera that shoot the same scene, and comprises the following steps: controlling the first camera to shoot a first image and the second camera to shoot a second image; synthesizing the first image and the second image into a third image; calculating depth information of the pixels on the third image; determining a foreground image and a background image on the third image according to the depth information of the pixels; and replacing the background image of the third image with a preset background image to obtain a third image with the background replaced. With the invention, the extraction of the foreground image and the background image is more accurate, and because the captured background is replaced directly with the preset background image once the image has been shot, the replacement is achieved quickly without post-processing.

Description

Image processing method and electronic equipment
Technical Field
The invention relates to the technical field of image processing, in particular to an image processing method and electronic equipment.
Background
Photographing has become a habit for many people, and electronic devices with a photographing function, such as cell phones, cameras, digital cameras, and video cameras, are increasingly popular. To meet users' varying requirements on photographing effects, many electronic devices with dual cameras, such as mobile phones with front or rear dual cameras, are currently on the market.
When a user takes a picture with such an electronic device, he or she usually wants to highlight the important content of the image, for example to highlight the portrait during a self-timer shot, to weaken the background image, or even to replace it. However, existing electronic devices with a photographing function typically take the picture first and then rely on other software to cut the foreground image (such as a portrait) out of the photograph before the background image can be replaced. The matting is done either by manually selecting regions or automatically, by distinguishing the contour of the foreground image from the pixels of the background image. The prior art therefore has the following defects:
1. the background image cannot be replaced during the photographing process, and post-processing of the image must be performed with other software;
2. the extraction of the foreground image and the background image is error-prone, so the extracted foreground image easily contains content of the background image or shows jagged edges (burrs).
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method and an electronic device, so as to solve the problem that, in the prior art, the electronic device cannot replace a background image during a photographing process, and needs to implement post-processing of an image by means of other software.
The invention provides an image processing method, which is used for an electronic device provided with a first camera and a second camera, wherein the first camera and the second camera are used for shooting the same scene, and the image processing method comprises the following steps: controlling the first camera to shoot a first image and controlling the second camera to shoot a second image; synthesizing the first image and the second image into a third image; calculating depth information for pixels on the third image; determining a foreground image and a background image on the third image according to the depth information of the pixel; and replacing the background image of the third image by using a preset background image to obtain a third image with the background replaced.
Because the images are captured with two cameras and then stitched, the depth information of each pixel on the stitched image can be calculated and used to extract the foreground image and the background image. Compared with the prior-art approach of automatic matting based on the pixel difference along the contour between the foreground image and the background image, the extraction of the foreground and background images in this embodiment is more accurate; moreover, after the image has been captured, its background is directly replaced with the preset background image, so the replacement is achieved quickly without post-processing.
With reference to the first aspect of the present invention, in a first implementation manner of the first aspect of the present invention, determining a foreground image and a background image on the third image according to the depth information of the pixel includes: comparing the depth of each pixel on the third image with a depth threshold value in sequence; determining an image composed of pixels with depths greater than the depth threshold as the background image; determining an image composed of pixels having a depth less than the depth threshold as the foreground image.
By adopting the depth threshold as the judgment condition for segmenting the foreground image and the background image, the foreground image and the background image can be segmented quickly and accurately.
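As a minimal sketch of this thresholding step (the depth map, array values, and threshold below are illustrative and not taken from the patent), the per-pixel comparison can be vectorised with NumPy:

```python
import numpy as np

# Synthetic 4x4 depth map in metres: small values are near the camera.
depth = np.array([[0.8, 0.9, 5.0, 6.0],
                  [0.7, 0.8, 5.5, 6.2],
                  [0.9, 1.0, 5.1, 6.1],
                  [4.8, 5.0, 5.2, 6.3]])

DEPTH_THRESHOLD = 2.0  # illustrative value; the patent leaves it configurable

foreground_mask = depth < DEPTH_THRESHOLD   # pixels of the foreground image
background_mask = depth > DEPTH_THRESHOLD   # pixels of the background image
```

The two boolean masks then delimit the foreground and background images on the third image.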
With reference to the first aspect of the present invention, in a second implementation manner of the first aspect of the present invention, the following steps are adopted to calculate the depth of each pixel: recording a first focal length f1 of the first camera shooting the first image and recording a second focal length f2 of the second camera shooting the second image; determining the position of an imaging point corresponding to a target pixel on the first image, and determining the position of the imaging point corresponding to the target pixel on the second image; determining a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image; the depth Z of the target pixel is calculated according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein T represents a distance between a center point of the first camera and a center point of the second camera.
The depth information of the pixels is calculated from specific parameters recorded when the cameras capture the images and is then used as the basic data for segmenting the foreground image and the background image. Because it accurately represents the position in the scene of every pixel point on the image, the foreground image and the background image can be segmented accurately.
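The formula above can be written as a small helper; the function name and the numeric values are illustrative, and with f1 = f2 it reduces to the classic stereo relation Z = T*f/(X1 + X2):

```python
def pixel_depth(T, f1, f2, X1, X2):
    """Depth Z of a target pixel per the patent's formula:
    Z = T*f1*f2 / (X1*f2 + X2*f1).

    T  -- baseline between the two camera centre points
    f1 -- focal length of the first camera
    f2 -- focal length of the second camera
    X1 -- distance of the imaging point from the first image's centre
    X2 -- distance of the imaging point from the second image's centre
    """
    return T * f1 * f2 / (X1 * f2 + X2 * f1)

# Illustrative numbers: 5 cm baseline, 4 mm focal lengths,
# 0.2 mm offsets from each image centre.
z = pixel_depth(T=0.05, f1=0.004, f2=0.004, X1=0.0002, X2=0.0002)
```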
With reference to the first aspect of the present invention, in a third implementation manner of the first aspect of the present invention, the synthesizing the first image and the second image into a third image includes: overlapping the first image and the second image, and gradually moving the second image according to a preset rule; calculating the similarity of the overlapping area between the first image and the second image; judging whether the similarity is greater than a preset threshold value or not; and when the similarity is larger than the preset threshold value, splicing the corresponding overlapped area serving as the common area of the first image and the second image to obtain the third image.
The splicing position of the images is found by calculating the similarity of the candidate overlap areas, so the images are spliced accurately.
With reference to the third implementation manner of the first aspect of the present invention, in a fourth implementation manner of the first aspect of the present invention, the calculating the similarity of the overlapping area between the first image and the second image includes: acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area; and calculating the Euclidean distance between the gray level histogram of the first image in the overlapping area and the gray level histogram of the second image in the overlapping area, and taking the Euclidean distance as the similarity.
Using the Euclidean distance as the similarity measure improves the accuracy of the image-similarity determination.
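A sketch of this similarity measure, assuming 8-bit grey-level overlap regions held as NumPy arrays (the function name and the normalisation step are illustrative choices, not specified by the patent):

```python
import numpy as np

def overlap_similarity(region_a, region_b, bins=256):
    """Euclidean distance between the grey-level histograms of the two
    overlap regions; a lower distance means a closer match."""
    h_a, _ = np.histogram(region_a, bins=bins, range=(0, 256))
    h_b, _ = np.histogram(region_b, bins=bins, range=(0, 256))
    # Normalising makes regions of different sizes comparable
    # (an implementation choice, not spelled out in the patent).
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return float(np.linalg.norm(h_a - h_b))

identical = overlap_similarity(np.full((4, 4), 128), np.full((4, 4), 128))
different = overlap_similarity(np.full((4, 4), 0), np.full((4, 4), 255))
```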
With reference to the first aspect of the present invention, in a fifth embodiment of the first aspect of the present invention, before replacing the background image of the third image with a preset background image and obtaining the third image after replacing the background, the method further includes: and carrying out median filtering processing on the pixel points on the preset background image to obtain a blurred background image.
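A naive sketch of this median-filtering step (the kernel size and the handling of border pixels are illustrative choices; a real implementation would use an optimised library routine):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; border pixels keep their original
    value.  Used here to pre-blur the preset background image before
    it is substituted into the photograph."""
    out = img.copy()
    r = k // 2
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            out[y, x] = np.median(img[y - r:y + r + 1, x - r:x + r + 1])
    return out

# A single bright speck is removed by the median of its neighbourhood.
speckled = np.array([[1, 1, 1], [1, 9, 1], [1, 1, 1]])
blurred = median_filter(speckled)
```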
A second aspect of the present invention provides an electronic device, where the electronic device is provided with a first camera and a second camera, and the first camera and the second camera are used for shooting a same scene, and the electronic device includes: the control unit is used for controlling the first camera to shoot a first image and controlling the second camera to shoot a second image; a synthesizing unit configured to synthesize the first image and the second image into a third image; a calculation unit configured to calculate depth information of pixels on the third image; a determining unit, configured to determine a foreground image and a background image on the third image according to the depth information of the pixel; and the replacing unit is used for replacing the background image of the third image by using a preset background image to obtain the third image after replacing the background.
Because the images are captured with two cameras and then stitched, the depth information of each pixel on the stitched image can be calculated and used to extract the foreground image and the background image. Compared with the prior-art approach of automatic matting based on the pixel difference along the contour between the foreground image and the background image, the extraction of the foreground and background images in this embodiment is more accurate; moreover, after the image has been captured, its background is directly replaced with the preset background image, so the replacement is achieved quickly without post-processing.
With reference to the second aspect of the present invention, in a first embodiment of the second aspect of the present invention, the determining unit includes: a comparison module, configured to compare the depth of each pixel on the third image with a depth threshold in sequence; and a first determining module, configured to determine an image composed of pixels with depths greater than the depth threshold as the background image, and to determine an image composed of pixels with depths less than the depth threshold as the foreground image.
By adopting the depth threshold as the judgment condition for segmenting the foreground image and the background image, the foreground image and the background image can be segmented quickly and accurately.
With reference to the second aspect of the present invention, in a second embodiment of the second aspect of the present invention, the calculation unit includes: the recording module is used for recording a first focal length f1 of the first image shot by the first camera and recording a second focal length f2 of the second image shot by the second camera; a second determining module, configured to determine a position of an imaging point corresponding to a target pixel on the first image, and determine a position of an imaging point corresponding to the target pixel on the second image; a third determining module, configured to determine a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image, and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image; a first calculation module for calculating the depth Z of the target pixel according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein T represents a distance between a center point of the first camera and a center point of the second camera.
The depth information of the pixels is calculated from specific parameters recorded when the cameras capture the images and is then used as the basic data for segmenting the foreground image and the background image. Because it accurately represents the position in the scene of every pixel point on the image, the foreground image and the background image can be segmented accurately.
In a third embodiment of the second aspect of the present invention in combination with the second aspect of the present invention, the synthesis unit comprises: the moving module is used for overlapping the first image and the second image and gradually moving the second image according to a preset rule; the second calculation module is used for calculating the similarity of a superposition area between the first image and the second image; the judging module is used for judging whether the similarity is greater than a preset threshold value or not; and the splicing module is used for splicing the corresponding overlapped area as the shared area of the first image and the second image to obtain the third image when the similarity is greater than the preset threshold.
The splicing position of the images is found by calculating the similarity of the candidate overlap areas, so the images are spliced accurately.
With reference to the third embodiment of the second aspect of the present invention, in a fourth embodiment of the second aspect of the present invention, the second calculation module includes: the acquisition submodule is used for acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area; and the calculation submodule is used for calculating the Euclidean distance between the gray level histogram of the first image in the overlapping area and the gray level histogram of the second image in the overlapping area, and taking the Euclidean distance as the similarity.
Using the Euclidean distance as the similarity measure improves the accuracy of the image-similarity determination.
With reference to the second aspect of the present invention, in a fifth embodiment of the second aspect of the present invention, the apparatus further includes: and the blurring unit is used for performing median filtering processing on pixel points on the preset background image to obtain a blurred background image before replacing the background image of the third image with the preset background image to obtain the background-replaced third image.
A third aspect of the present invention provides an electronic device comprising: the system comprises a first camera, a second camera, a memory and a processor, wherein the first camera, the second camera, the memory and the processor are connected with each other through a bus, computer instructions are stored in the memory, and the processor executes the computer instructions, so that the following method is realized: controlling the first camera to shoot a first image and controlling the second camera to shoot a second image; synthesizing the first image and the second image into a third image; calculating depth information for pixels on the third image; determining a foreground image and a background image on the third image according to the depth information of the pixel; and replacing the background image of the third image by using a preset background image to obtain a third image with the background replaced.
Because the images are captured with two cameras and then stitched, the depth information of each pixel on the stitched image can be calculated and used to extract the foreground image and the background image. Compared with the prior-art approach of automatic matting based on the pixel difference along the contour between the foreground image and the background image, the extraction of the foreground and background images in this embodiment is more accurate; moreover, after the image has been captured, its background is directly replaced with the preset background image, so the replacement is achieved quickly without post-processing.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
fig. 1 is a schematic diagram of a hardware structure of an electronic device of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of an exemplary first image according to an embodiment of the invention;
FIG. 4 is a schematic view of a second image corresponding to FIG. 3;
FIG. 5 is a schematic illustration of the synthesized third image of FIGS. 3 and 4;
FIG. 6 is a flow diagram of another alternative image processing method according to an embodiment of the invention;
FIG. 7 is a flow diagram of a pixel depth calculation process according to an embodiment of the present invention;
FIG. 8 shows a schematic diagram of the depth calculation shown in FIG. 7;
FIG. 9 is a flow diagram of yet another alternative image processing method according to an embodiment of the present invention;
FIG. 10 is a schematic view of an electronic device according to an embodiment of the invention;
FIG. 11 is a schematic view of an alternative electronic device in accordance with embodiments of the invention;
FIG. 12 is a schematic diagram of yet another alternative electronic device in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of a hardware structure of an electronic device of an image processing method according to an embodiment of the present invention, and as shown in fig. 1, the electronic device includes one or more processors 101 and a memory 102, and further includes a first camera 103, a second camera 104, a display 105, and an interface 106 connected to the processors. The first camera 103 and the second camera 104 are used to photograph the same scene, for example as front dual cameras or rear dual cameras.
An embodiment of the present invention provides an image processing method, which may be used in the electronic device in the foregoing embodiments of the present invention. As shown in fig. 2, the image processing method includes:
step S201, controlling the first camera to shoot the first image, and controlling the second camera to shoot the second image.
In this embodiment, the first image shot by the first camera and the second image shot by the second camera are both initial images, and both are images shot in the same scene. For example, when a user takes a self-timer, a first camera takes a first image containing a portrait of the user in a current scene, and a second camera takes a second image containing the portrait of the user in the same scene. Because the first camera and the second camera are at different positions and have different shooting angles, the backgrounds of the first image and the second image have some differences.
Step S202, the first image and the second image are synthesized into a third image.
After the two cameras have each captured their respective image, the two images are combined into one; that is, the first image and the second image are stitched to obtain the third image.
Take two horizontally placed cameras as an example: the image shot by the left camera (fig. 3) is called the first image, and the image shot by the right camera (fig. 4) is called the second image. Because the background range covered by a single camera (left or right) is small, this embodiment stitches the images shot by the two cameras into a third image, as shown in fig. 5, so that the captured image covers a larger background range and a wider field of view.
In step S203, depth information of the pixel on the third image is calculated.
After the third image is obtained by synthesis, the depth information of each pixel on the image may be calculated. Here, the depth information refers to the distance between the camera and the point in the scene that the pixel represents, and it can be calculated from the relevant camera parameters recorded when the image was captured.
And step S204, determining a foreground image and a background image on the third image according to the depth information of the pixels.
In this embodiment, the real object corresponding to the foreground image (e.g., a portrait) and the real object corresponding to the background image are at distinctly different distances from the cameras. Based on this principle, the embodiment uses the depth information of the pixels to distinguish the foreground image from the background image on the third image, so the boundary between them can be determined accurately and both can be extracted precisely.
Step S205, replacing the background image of the third image with a preset background image to obtain a third image after replacing the background.
After the foreground image and the background image are determined, the determined background image can be replaced by a preset background image before shooting, a third image after replacing the background is obtained, and the third image is output.
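The replacement step can be sketched as a masked copy, assuming a foreground mask has already been obtained from the depth segmentation (all names and toy values below are illustrative):

```python
import numpy as np

def swap_background(third_image, foreground_mask, preset_background):
    """Step S205 sketch: keep the foreground pixels of the captured
    image and take every other pixel from the preset background
    (assumes all three arrays share the same height and width)."""
    result = preset_background.copy()
    result[foreground_mask] = third_image[foreground_mask]
    return result

photo = np.array([[10, 10], [10, 10]])           # captured third image
mask = np.array([[True, False], [False, False]])  # one foreground pixel
beach = np.array([[200, 200], [200, 200]])        # preset background
composited = swap_background(photo, mask, beach)
```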
According to the embodiment of the invention, the images are captured with two cameras and stitched, the depth information of the pixels on the stitched image is calculated, and the foreground image and the background image are extracted using that depth information. Compared with the prior-art approach of automatic matting based on the pixel difference along the contour between the foreground image and the background image, this extraction is more accurate; moreover, after the image has been captured, its background is directly replaced with the preset background image, so the replacement is achieved quickly without post-processing.
Fig. 6 is a flow chart of another alternative image processing method according to an embodiment of the present invention. The image processing method of this embodiment can be applied to the electronic apparatus in the above-described embodiment of the present invention. As shown in fig. 6, the image processing method includes:
step S601, controlling the first camera to shoot the first image, and controlling the second camera to shoot the second image.
Step S602 synthesizes the first image and the second image into a third image.
In step S603, depth information of the pixel on the third image is calculated.
In this embodiment, steps S601 to S603 are the same as steps S201 to S203 shown in fig. 2, and refer to the above description specifically, which is not repeated here.
Step S604, comparing the depth of each pixel on the third image with the depth threshold in turn.
Step S605, determining an image composed of pixels with the depth larger than a depth threshold value as a background image; and determining an image composed of pixels with the depth smaller than the depth threshold value as a foreground image.
In this embodiment, the pixels belonging to the foreground image and those belonging to the background image on the third image are determined by the set depth threshold. When a picture is taken, the foreground object (such as a person) is generally close to the camera while the background is far away, so the two kinds of depth information differ greatly. This embodiment therefore sets a depth threshold, treats the image area whose depth is greater than the threshold as the background image, i.e., the background scene image, and treats the area whose depth is smaller than the threshold as the foreground image. Specifically, it may be determined pixel by pixel whether the depth is greater than the depth threshold: if so, the pixel is taken as a pixel of the background image; if not, as a pixel of the foreground image. After all pixels have been judged, the foreground pixels form the foreground image and the background pixels form the background image, achieving an accurate segmentation of the two.
It should be noted that, in the embodiment of the present invention, the depth threshold may be set to a fixed value according to statistical data, or may be adjusted automatically according to the photographed scene; the present invention is not limited in this respect.
Step S606, replacing the background image of the third image with a preset background image to obtain a third image after replacing the background.
According to the embodiment of the invention, the depth threshold is used as the judgment condition for segmenting the foreground image and the background image, so that the foreground image and the background image can be segmented quickly and accurately.
FIG. 7 is a flow chart of a pixel depth calculation process according to an embodiment of the present invention. The pixel depth calculating method described in this embodiment may be implemented as an alternative to step S203 shown in fig. 2 and step S603 shown in fig. 6, as shown in fig. 7 and 8, and the following steps are adopted to calculate the depth of each pixel:
in step S701, a first focal length f1 of the first camera shooting the first image is recorded, and a second focal length f2 of the second camera shooting the second image is recorded.
In step S702, the position of the imaging point P corresponding to the target pixel is determined on the first image, and the position of the imaging point Q corresponding to the target pixel is determined on the second image.
In step S703, a first distance X1 between the imaging point P corresponding to the target pixel on the first image and the center point P 'of the first image and a second distance X2 between the imaging point Q corresponding to the target pixel on the second image and the center point Q' of the second image are determined.
In step S704, the depth Z of the target pixel is calculated according to Z = T*f1*f2/(X1*f2 + X2*f1), where T represents the distance between the center point of the first camera and the center point of the second camera.
With this formula, the depth information of the pixels on the third image can be obtained.
According to the embodiment of the invention, the depth information of the pixels is calculated from specific parameters recorded when the cameras capture the images and is then used as the basic data for segmenting the foreground image and the background image; it accurately represents the position in the scene of every pixel point on the image, so the foreground image and the background image can be segmented accurately.
It should be noted that the method for calculating pixel depth information described in this embodiment is only applicable to the overlapping region of the first image and the second image on the third image, that is, the region including the foreground image, and for the images other than the overlapping region, the images can be directly recognized as a part of the background image, and the calculation of the pixel depth is not needed, so that the calculation amount can be reduced, and the rapid segmentation of the image can be realized.
FIG. 9 is a flow chart of yet another alternative image processing method according to an embodiment of the present invention. The image processing method of this embodiment can be applied to the electronic apparatus in the above-described embodiment of the present invention. As shown in fig. 9, the image processing method includes:
step S901, controlling a first camera to capture a first image, and controlling a second camera to capture a second image.
This step is the same as step S201 shown in fig. 2, and specific reference is made to the above description, which is not repeated here.
Step S902, overlapping the first image and the second image, and gradually moving the second image according to a preset rule.
In step S903, the similarity of the overlapping region between the first image and the second image is calculated.
Step S904, determining whether the similarity is greater than a preset threshold.
And step S905, when the similarity is greater than a preset threshold value, splicing the corresponding overlapped area serving as a shared area of the first image and the second image to obtain a third image.
Specifically, the image taken by the first camera (as shown in fig. 3) is panoramically stitched with the image taken by the second camera (as shown in fig. 4). The images shot by the two cameras have an overlapping area. To determine the size of the overlapping area, one rule for moving the images is as follows: first, the image of fig. 4 is aligned with the leftmost position of the image of fig. 3; the image of fig. 4 is then translated to the right, and the similarity of the overlapping area is calculated during the translation and compared with the preset threshold; if the similarity is not greater than the preset threshold, the image of fig. 4 continues to be translated until the similarity of the overlapping area is greater than or equal to the preset threshold, at which point the movement stops. In the stitching process, the part of fig. 4 outside the overlapping area is stitched onto fig. 3 to obtain the final image, as shown in fig. 5.
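The translation-and-stitch rule above may be sketched as follows, assuming two grayscale images of equal height held as NumPy arrays. The similarity measure is passed in as a callback; the one-pixel step and the left-to-right scan are illustrative assumptions:

```python
import numpy as np

def stitch_horizontal(img1, img2, similarity, threshold):
    """Slide img2 over img1 from full overlap toward the right and
    stitch at the first overlap whose similarity reaches the threshold.
    Returns the stitched image, or None if no overlap qualifies."""
    h, w1 = img1.shape
    w2 = img2.shape[1]
    for shift in range(0, w1):
        overlap = w1 - shift  # width of the current overlapping region
        if overlap <= 0 or overlap > w2:
            continue
        if similarity(img1[:, shift:], img2[:, :overlap]) >= threshold:
            # keep img1, append the part of img2 outside the overlap
            return np.hstack([img1, img2[:, overlap:]])
    return None
```

With a similarity that peaks at 1.0 for identical regions, the loop stops exactly where the two shots of the same scene line up, reproducing the stop condition of step S905.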
In step S906, depth information of the pixel on the third image is calculated.
In step S907, a foreground image and a background image on the third image are determined according to the depth information of the pixels.
Step S908 is to replace the background image of the third image with a preset background image, so as to obtain a third image with the replaced background.
Steps S906 to S908 are the same as steps S203 to S205 shown in fig. 2, and refer to the above description specifically, which is not described herein again.
According to the embodiment of the invention, the image splicing position is obtained by calculating the similarity of the overlapped areas, so that the accurate splicing of the images is realized.
As an optional implementation manner of the foregoing embodiment, in this embodiment, calculating the similarity of the overlapping area between the first image and the second image includes: acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area; and calculating the Euclidean distance between the gray level histogram of the first image in the overlapping region and the gray level histogram of the second image in the overlapping region, and taking the Euclidean distance as the similarity.
Specifically, the similarity of the overlapping regions is calculated through the similarity of their gray level histograms: first, the gray level histograms of the two images in the overlapping area are calculated, yielding a distribution histogram over the gray values 0-255; then the Euclidean distance between the two histograms is calculated with the Euclidean distance formula, and this distance, taken as the similarity, is compared with the preset threshold to determine the overlapping area.
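A minimal sketch of the histogram comparison, assuming 8-bit grayscale regions as NumPy arrays. Note that a smaller Euclidean distance means more similar regions, so a practical comparison would test the distance against the threshold accordingly; the normalization by region size is an assumption the patent leaves open:

```python
import numpy as np

def histogram_distance(region1, region2):
    """Euclidean distance between the 256-bin gray level histograms
    of two overlapping regions (uint8 grayscale arrays)."""
    h1, _ = np.histogram(region1, bins=256, range=(0, 256))
    h2, _ = np.histogram(region2, bins=256, range=(0, 256))
    # normalize so the measure is independent of the region size
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.linalg.norm(h1 - h2))
```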
According to the embodiment of the invention, the Euclidean distance is used as the similarity, so that the accuracy of judging the image similarity can be improved.
As another optional implementation manner of the foregoing embodiment, since the stitched image is prone to inconsistent brightness, in this embodiment histogram equalization or contrast stretching may be adopted to process the image, so as to obtain a third image with balanced color and brightness.
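A sketch of the histogram equalization option, assuming an 8-bit grayscale image with at least two distinct gray levels; the patent does not prescribe a specific implementation:

```python
import numpy as np

def equalize_histogram(gray):
    """Classical histogram equalization for a uint8 grayscale image:
    map each gray level through the normalized cumulative distribution
    so that the output levels spread over the full 0-255 range."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # build a 256-entry lookup table from the cumulative distribution
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```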
As another optional implementation manner, in this embodiment, before replacing the background image of the third image with a preset background image to obtain the third image after replacing the background, the method further includes: and carrying out median filtering processing on the pixel points on the preset background image to obtain a blurred background image.
The median filtering process in this embodiment may be: for the pixel matrix of the image, a sub-matrix window centered on a target pixel is taken (the window may be 3 x 3, 5 x 5, and the like), the pixels in the window are sorted by gray level, and the middle value is taken as the new gray value of the target pixel.
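The filtering process just described may be sketched as follows (a straightforward, unoptimized version; border replication is an assumption, since the embodiment leaves edge handling open):

```python
import numpy as np

def median_filter(gray, k=3):
    """k x k median filter: for each pixel, sort the gray values in the
    window centered on it and take the middle value as the new value."""
    pad = k // 2
    padded = np.pad(gray, pad, mode='edge')  # replicate the border
    out = np.empty_like(gray)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k]
            out[i, j] = np.median(window)
    return out
```

Applied only to the background pixels, this suppresses isolated detail and yields the blurred background image of this embodiment.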
In this embodiment, only the pixels belonging to the background image are subjected to median filtering; the foreground image is not. Because the median filtering blurs the edge of the junction area between the foreground and the background, the edge of the foreground image can additionally be sharpened to avoid this problem.
An embodiment of the present invention provides an electronic device, as shown in fig. 10, the electronic device includes: a control unit 100, a synthesis unit 200, a calculation unit 300, a determination unit 400, and a replacement unit 500.
The control unit 100 is used for controlling the first camera to take the first image and controlling the second camera to take the second image.
In this embodiment, the first image shot by the first camera and the second image shot by the second camera are both initial images, and both are images shot in the same scene. For example, when a user takes a self-timer, a first camera takes a first image containing a portrait of the user in a current scene, and a second camera takes a second image containing the portrait of the user in the same scene. Because the first camera and the second camera are at different positions and have different shooting angles, the backgrounds of the first image and the second image have some differences.
The synthesizing unit 200 is configured to synthesize the first image and the second image into a third image.
After the two cameras respectively shoot the corresponding images, the two images are combined into one image; that is, the first image and the second image are stitched to obtain the third image.
Taking two horizontally placed cameras as an example, the image shot by the left camera is shown in fig. 3 and is called the first image, and the image shot by the right camera is shown in fig. 4 and is called the second image. Because the background range of an image shot by a single camera (the left or the right camera) is small, in this embodiment the images shot by the two cameras are stitched and synthesized to obtain a stitched third image, as shown in fig. 5, so that the background range of the shot image is larger and a wider field of view is obtained.
The calculation unit 300 is configured to calculate depth information of pixels on the third image.
After the third image is obtained by synthesis, the depth information of each pixel on the image may be calculated. The depth information may refer to the distance between the camera and the position in the scene of the pixel point, and may be calculated from the relevant parameters of the cameras when the images are captured.
The determining unit 400 is configured to determine a foreground image and a background image on the third image according to the depth information of the pixels.
In this embodiment, the real object corresponding to the foreground image (e.g., a portrait) and the real object corresponding to the background image are at distinctly different distances from the cameras. Based on this principle, the embodiment distinguishes the foreground image from the background image on the third image by using the depth information of the pixels, so that the boundary between the foreground image and the background image can be accurately determined and the two can be accurately extracted.
The replacing unit 500 is configured to replace the background image of the third image with a preset background image, so as to obtain the third image after replacing the background.
After the foreground image and the background image are determined, the determined background image can be replaced with the background image preset before shooting, the third image after replacing the background is obtained, and the third image is output.
According to the embodiment of the invention, the images are shot with two cameras and stitched, the depth information of the pixels on the stitched image is then calculated, and the foreground image and the background image are extracted using that depth information. Compared with the prior-art mode of automatic matting according to the pixel difference between the outline of the foreground image and the background image, the extraction of the foreground image and the background image is more accurate. Moreover, after the image is shot, the shot background image is directly replaced with the preset background image, so no post-processing is needed and the replacement of the background image is realized quickly.
FIG. 11 is a schematic diagram of another alternative electronic device according to an embodiment of the invention. As shown in fig. 11, the electronic apparatus includes: a control unit 100, a synthesis unit 200, a calculation unit 300, a determination unit 400, and a replacement unit 500. Wherein the determining unit 400 includes: a comparison module 401 and a first determination module 402. The control unit 100, the synthesizing unit 200, the calculating unit 300, and the replacing unit 500 are the same as the respective units shown in fig. 10, specifically referring to the above description.
The comparing module 401 is configured to sequentially compare the depth of each pixel on the third image with a depth threshold.
The first determining module 402 is configured to determine an image composed of pixels with depths greater than a depth threshold as a background image; and determining an image composed of pixels with the depth smaller than the depth threshold value as a foreground image.
In this embodiment, the pixels of the foreground image and of the background image on the third image may be distinguished by a set depth threshold. When taking a picture, the foreground object (such as a person) is generally close to the camera, while the background is far away, so the two kinds of depth information differ greatly. In this embodiment a depth threshold is set: an image area whose depth is greater than the threshold is treated as the background image, that is, the image of the background scene, and an image area whose depth is smaller than the threshold is treated as the foreground image. Specifically, whether the depth of each pixel is greater than the depth threshold may be judged in sequence; if so, the pixel is taken as a pixel of the background image, and if not, as a pixel of the foreground image. After all the pixels are judged, the pixels of the foreground image form the foreground image and the pixels of the background image form the background image, realizing accurate segmentation of the foreground image and the background image.
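The comparison rule may be sketched as follows, assuming the depth information is held as a per-pixel NumPy array. How pixels whose depth exactly equals the threshold are classified is not specified by the embodiment; here they fall to the foreground:

```python
import numpy as np

def segment_by_depth(depth_map, depth_threshold):
    """Split a per-pixel depth map into foreground and background masks:
    pixels deeper than the threshold belong to the background,
    the remaining pixels to the foreground."""
    background_mask = depth_map > depth_threshold
    foreground_mask = ~background_mask
    return foreground_mask, background_mask
```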
It should be noted that, in the embodiment of the present invention, the depth threshold may be set to a fixed value according to statistical data, or may be automatically adjusted according to the shot scene; the present invention is not limited in this respect.
According to the embodiment of the invention, the depth threshold is used as the judgment condition for segmenting the foreground image and the background image, so that the foreground image and the background image can be segmented quickly and accurately.
FIG. 12 is a schematic diagram of yet another alternative electronic device in accordance with an embodiment of the present invention. As shown in fig. 12, the electronic apparatus includes: a control unit 100, a synthesis unit 200, a calculation unit 300, a determination unit 400, and a replacement unit 500. Wherein, the calculating unit 300 comprises: a recording module 301, a second determining module 302, a third determining module 303 and a first calculating module 304. The control unit 100, the synthesizing unit 200, the determining unit 400, and the replacing unit 500 are the same as the respective units shown in fig. 10, and refer to the above description in detail.
The recording module 301 is configured to record a first focal length f1 of the first camera capturing the first image and record a second focal length f2 of the second camera capturing the second image.
The second determining module 302 is configured to determine a position of an imaging point corresponding to the target pixel on the first image, and determine a position of an imaging point corresponding to the target pixel on the second image.
The third determining module 303 is configured to determine a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image, and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image.
The first calculation module 304 is used to calculate the depth Z of the target pixel according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein, T represents the distance between the central point of the first camera and the central point of the second camera.
With the above formula, the depth information of each pixel on the third image can be obtained.
According to the embodiment of the invention, the depth information of the pixels is obtained by calculation from specific parameters recorded when the cameras shoot the images, and is then used as the basic data for segmenting the foreground image and the background image. The position in the scene of each pixel point on the image can thus be accurately represented, and accurate segmentation of the foreground image and the background image is realized.
It should be noted that the method for calculating pixel depth information described in this embodiment is only applicable to the region of the third image where the first image and the second image overlap, that is, the region containing the foreground image. The portion of the third image outside the overlapping region can be directly recognized as part of the background image, so no pixel depth needs to be calculated for it; the calculation amount is thereby reduced, and rapid segmentation of the image is realized.
As an optional implementation manner, the synthesis unit in this embodiment includes: the moving module is used for overlapping the first image and the second image and gradually moving the second image according to a preset rule; the second calculation module is used for calculating the similarity of the overlapping area between the first image and the second image; the judging module is used for judging whether the similarity is greater than a preset threshold value or not; and the splicing module is used for splicing the corresponding overlapped area as a common area of the first image and the second image to obtain a third image when the similarity is greater than a preset threshold value.
Specifically, the image taken by the first camera (as shown in fig. 3) is panoramically stitched with the image taken by the second camera (as shown in fig. 4). The images shot by the two cameras have an overlapping area. To determine the size of the overlapping area, one rule for moving the images is as follows: first, the image of fig. 4 is aligned with the leftmost position of the image of fig. 3; the image of fig. 4 is then translated to the right, and the similarity of the overlapping area is calculated during the translation and compared with the preset threshold; if the similarity is not greater than the preset threshold, the image of fig. 4 continues to be translated until the similarity of the overlapping area is greater than or equal to the preset threshold, at which point the movement stops. In the stitching process, the part of fig. 4 outside the overlapping area is stitched onto fig. 3 to obtain the final image, as shown in fig. 5.
According to the embodiment of the invention, the image splicing position is obtained by calculating the similarity of the overlapped areas, so that the accurate splicing of the images is realized.
As an optional implementation manner of the foregoing embodiment, the second calculating module in this embodiment includes: the acquisition submodule is used for acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area; and the calculation submodule is used for calculating the Euclidean distance between the gray level histogram of the first image in the overlapping area and the gray level histogram of the second image in the overlapping area, and the Euclidean distance is used as the similarity.
Specifically, the similarity of the overlapping regions is calculated through the similarity of their gray level histograms: first, the gray level histograms of the two images in the overlapping area are calculated, yielding a distribution histogram over the gray values 0-255; then the Euclidean distance between the two histograms is calculated with the Euclidean distance formula, and this distance, taken as the similarity, is compared with the preset threshold to determine the overlapping area.
According to the embodiment of the invention, the Euclidean distance is used as the similarity, so that the accuracy of judging the image similarity can be improved.
As another optional implementation manner of the foregoing embodiment, since the stitched image is prone to inconsistent brightness, in this embodiment histogram equalization or contrast stretching may be adopted to process the image, so as to obtain a third image with balanced color and brightness.
As another optional implementation, in this embodiment, the electronic device further includes: and the blurring unit is used for performing median filtering processing on pixel points on the preset background image to obtain a blurred background image before replacing the background image of the third image with the preset background image to obtain the background-replaced third image.
The median filtering process in this embodiment may be: for the pixel matrix of the image, a sub-matrix window centered on a target pixel is taken (the window may be 3 x 3, 5 x 5, and the like), the pixels in the window are sorted by gray level, and the middle value is taken as the new gray value of the target pixel.
In this embodiment, only the pixels belonging to the background image are subjected to median filtering; the foreground image is not. Because the median filtering blurs the edge of the junction area between the foreground and the background, the edge of the foreground image can additionally be sharpened to avoid this problem.
An embodiment of the present invention further provides an electronic device. As shown in fig. 1, the electronic device includes one or more processors 101 and a memory 102, and further includes a first camera 103 and a second camera 104. The first camera 103, the second camera 104, the memory 102 and the processor 101 may be connected to each other through a bus. The first camera 103 and the second camera 104 are used to shoot a same scene, for example as a front-facing dual camera or a rear-facing dual camera: the first camera 103 may shoot a first image of the scene at its angle, and the second camera 104 may shoot a second image of the same scene at its angle.
The interface 106 may enable data communication between the first camera 103 and the second camera 104, and components such as the display 105 and the processor 101, for example, transmission of image data, transmission of control commands, and the like. The memory 102 may be used to store data used or generated in the above-described image processing method, for example, captured first image data and second image data, synthesized third image data, calculated depth information, background image data, foreground image data, and the like. The display 105 may then display the image after the composite or replacement background under the control of the processor 101.
The memory 102 further stores computer instructions, and the processor 101 executes the computer instructions to implement the following method:
controlling the first camera to shoot a first image and controlling the second camera to shoot a second image;
synthesizing the first image and the second image into a third image;
calculating depth information for pixels on the third image;
determining a foreground image and a background image on the third image according to the depth information of the pixel;
and replacing the background image of the third image by using a preset background image to obtain a third image with the background replaced.
By shooting images with two cameras and stitching them, then calculating the depth information of the pixels on the stitched image and using that depth information to extract the foreground image and the background image, the extraction of the foreground image and the background image in this embodiment is more accurate than the prior-art mode of automatic matting according to the pixel difference between the outline of the foreground image and the background image. After the image is shot, the shot background image is directly replaced with the preset background image, so no post-processing is needed and the replacement of the background image is realized quickly.
Optionally, in some embodiments of the present invention, the processor 101, when executing the computer instructions, may further implement the following method:
comparing the depth of each pixel on the third image with a depth threshold value in sequence;
determining an image composed of pixels with depths greater than the depth threshold as the background image; determining an image composed of pixels having a depth less than the depth threshold as the foreground image.
Optionally, in some embodiments of the present invention, the processor 101, when executing the computer instructions, may further implement the following method:
recording a first focal length f1 of the first camera shooting the first image and recording a second focal length f2 of the second camera shooting the second image;
determining the position of an imaging point corresponding to a target pixel on the first image, and determining the position of the imaging point corresponding to the target pixel on the second image;
determining a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image;
the depth Z of the target pixel is calculated according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein T represents a distance between a center point of the first camera and a center point of the second camera.
Optionally, in some embodiments of the present invention, the processor 101, when executing the computer instructions, may further implement the following method:
overlapping the first image and the second image, and gradually moving the second image according to a preset rule;
calculating the similarity of the overlapping area between the first image and the second image;
judging whether the similarity is greater than a preset threshold value or not;
and when the similarity is larger than the preset threshold value, splicing the corresponding overlapped area serving as the common area of the first image and the second image to obtain the third image.
Optionally, in some embodiments of the present invention, the processor 101, when executing the computer instructions, may further implement the following method:
acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area;
and calculating the Euclidean distance between the gray level histogram of the first image in the overlapping area and the gray level histogram of the second image in the overlapping area, and taking the Euclidean distance as the similarity.
Optionally, in some embodiments of the present invention, the processor 101, when executing the computer instructions, may further implement the following method:
and carrying out median filtering processing on the pixel points on the preset background image to obtain a blurred background image.
For the above method, reference is specifically made to the embodiment of the image processing method provided in the above embodiment of the present invention, and details are not described here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (6)

1. An image processing method, used for an electronic device provided with a first camera and a second camera, the first camera and the second camera being used for shooting a same scene, the image processing method comprising:
controlling the first camera to shoot a first image and controlling the second camera to shoot a second image;
synthesizing the first image and the second image into a third image;
calculating depth information for pixels on the third image;
determining a foreground image and a background image on the third image according to the depth information of the pixel;
replacing the background image of the third image by using a preset background image to obtain a third image after replacing the background;
the depth of each pixel is calculated using the following steps:
recording a first focal length f1 of the first camera shooting the first image and recording a second focal length f2 of the second camera shooting the second image;
determining the position of an imaging point corresponding to a target pixel on the first image, and determining the position of the imaging point corresponding to the target pixel on the second image;
determining a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image;
the depth Z of the target pixel is calculated according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein T represents the distance between the center point of the first camera and the center point of the second camera;
synthesizing the first image and the second image into a third image includes:
overlapping the first image and the second image, and gradually moving the second image according to a preset rule;
calculating the similarity of the overlapping area between the first image and the second image;
judging whether the similarity is greater than a preset threshold value or not;
when the similarity is larger than the preset threshold value, splicing a corresponding overlapped area serving as a common area of the first image and the second image to obtain a third image;
calculating the similarity of the coincident region between the first image and the second image comprises:
acquiring a gray level histogram of the first image in the overlapping area and a gray level histogram of the second image in the overlapping area;
and calculating the Euclidean distance between the gray level histogram of the first image in the overlapping area and the gray level histogram of the second image in the overlapping area, and taking the Euclidean distance as the similarity.
2. The image processing method of claim 1, wherein determining the foreground image and the background image on the third image according to the depth information of the pixel comprises:
comparing the depth of each pixel on the third image with a depth threshold value in sequence;
determining an image composed of pixels with depths greater than the depth threshold as the background image; determining an image composed of pixels having a depth less than the depth threshold as the foreground image.
3. The image processing method according to claim 1, wherein before replacing the background image of the third image with a preset background image to obtain the third image after replacing the background, the method further comprises:
and carrying out median filtering processing on the pixel points on the preset background image to obtain a blurred background image.
4. An electronic device, wherein a first camera and a second camera are provided on the electronic device, the first camera and the second camera being used for shooting a same scene, characterized in that the electronic device comprises:
the control unit is used for controlling the first camera to shoot a first image and controlling the second camera to shoot a second image;
a synthesizing unit configured to synthesize the first image and the second image into a third image;
a calculation unit configured to calculate depth information of pixels on the third image;
a determining unit, configured to determine a foreground image and a background image on the third image according to the depth information of the pixel;
the replacing unit is used for replacing the background image of the third image by using a preset background image to obtain a third image after replacing the background;
the calculation unit includes:
the recording module is used for recording a first focal length f1 of the first image shot by the first camera and recording a second focal length f2 of the second image shot by the second camera;
a second determining module, configured to determine a position of an imaging point corresponding to a target pixel on the first image, and determine a position of an imaging point corresponding to the target pixel on the second image;
a third determining module, configured to determine a first distance X1 between the imaging point corresponding to the target pixel on the first image and the center point of the first image, and a second distance X2 between the imaging point corresponding to the target pixel on the second image and the center point of the second image;
a first calculation module for calculating the depth Z of the target pixel according to the following formula:
Z=T*f1*f2/(X1*f2+X2*f1)
wherein T represents the distance between the center point of the first camera and the center point of the second camera;
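Under the two-camera triangulation setup the claim describes, the depth formula Z = T·f1·f2/(X1·f2 + X2·f1) can be checked numerically. The values below are made-up example numbers, not from the patent:

```python
def pixel_depth(T, f1, f2, X1, X2):
    """Depth of a target pixel from two-camera measurements, per the claim:
    Z = T * f1 * f2 / (X1 * f2 + X2 * f1).
    T: baseline between the camera center points; f1, f2: focal lengths;
    X1, X2: distances of the imaging points from each image's center."""
    return T * f1 * f2 / (X1 * f2 + X2 * f1)

# With equal focal lengths f1 = f2 = f, the formula reduces to the
# classic stereo relation Z = T * f / (X1 + X2).
# Example (all units in meters): 60 mm baseline, 4 mm focal lengths,
# imaging points 0.1 mm from each image center -> Z = 1.2 m
Z = pixel_depth(T=0.06, f1=0.004, f2=0.004, X1=0.0001, X2=0.0001)
```

Note that depth is inversely related to the combined offset X1·f2 + X2·f1: nearer objects produce larger disparity and hence smaller Z.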
the synthesis unit includes:
a moving module configured to overlap the first image and the second image and gradually move the second image according to a preset rule;
a second calculation module configured to calculate the similarity of an overlapping region between the first image and the second image;
a judging module configured to judge whether the similarity is greater than a preset threshold; and
a splicing module configured to, when the similarity is greater than the preset threshold, splice the corresponding overlapping region as a common region of the first image and the second image to obtain the third image;
the second calculation module includes:
an acquisition submodule configured to acquire a gray-level histogram of the first image in the overlapping region and a gray-level histogram of the second image in the overlapping region; and
a calculation submodule configured to calculate the Euclidean distance between the gray-level histogram of the first image in the overlapping region and the gray-level histogram of the second image in the overlapping region, and to take the Euclidean distance as the similarity.
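The similarity measure of the second calculation module (gray-level histograms compared by Euclidean distance) can be sketched as follows. The bin count and the normalization step are assumptions for illustration; the claim itself only specifies gray-level histograms and Euclidean distance:

```python
import numpy as np

def overlap_similarity(region1, region2, bins=16):
    """Euclidean distance between normalized gray-level histograms of two
    overlapping regions. Smaller distance means more similar regions."""
    h1, _ = np.histogram(region1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(region2, bins=bins, range=(0, 256))
    # Normalize so regions of different sizes are comparable
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.linalg.norm(h1 - h2))

# Identical overlapping regions yield distance 0
a = np.arange(64, dtype=np.uint8).reshape(8, 8)
d = overlap_similarity(a, a)
```

Because the claim treats the raw distance as the similarity score, "similarity greater than a preset threshold" in the judging module would in practice mean the distance falling on the matching side of that threshold.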
5. The electronic device according to claim 4, wherein the determination unit includes:
a comparison module, configured to compare the depth of each pixel on the third image with a depth threshold in sequence;
a first determining module, configured to determine an image composed of pixels with depths greater than the depth threshold as the background image; determining an image composed of pixels having a depth less than the depth threshold as the foreground image.
6. The electronic device of claim 4, further comprising:
a blurring unit configured to perform median filtering on the pixel points of the preset background image to obtain a blurred background image, before the background image of the third image is replaced with the preset background image to obtain the background-replaced third image.
CN201710128541.6A 2017-03-06 2017-03-06 Image processing method and electronic equipment Expired - Fee Related CN106899781B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710128541.6A CN106899781B (en) 2017-03-06 2017-03-06 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN106899781A CN106899781A (en) 2017-06-27
CN106899781B true CN106899781B (en) 2020-11-10

Family

ID=59185503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710128541.6A Expired - Fee Related CN106899781B (en) 2017-03-06 2017-03-06 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN106899781B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273402A (en) * 2017-04-24 2017-10-20 广东小天才科技有限公司 A kind of method and device that examination question is searched for dual camera
CN107507239B (en) * 2017-08-23 2019-08-20 维沃移动通信有限公司 A kind of image partition method and mobile terminal
CN107564020B (en) * 2017-08-31 2020-06-12 北京奇艺世纪科技有限公司 Image area determination method and device
CN109146767A (en) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 Image weakening method and device based on depth map
CN107613161A (en) * 2017-10-12 2018-01-19 北京奇虎科技有限公司 Video data handling procedure and device, computing device based on virtual world
CN107948519B (en) * 2017-11-30 2020-03-27 Oppo广东移动通信有限公司 Image processing method, device and equipment
CN110009555B (en) * 2018-01-05 2020-08-14 Oppo广东移动通信有限公司 Image blurring method and device, storage medium and electronic equipment
CN110191332A (en) * 2018-02-23 2019-08-30 中兴通讯股份有限公司 The generation method and device of grating picture
CN109348114A (en) * 2018-11-26 2019-02-15 Oppo广东移动通信有限公司 Imaging device and electronic equipment
CN111246010B (en) * 2019-04-24 2020-11-10 吕衍荣 Environment monitoring method based on signal analysis
CN110166680B (en) * 2019-06-28 2021-08-24 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN110286092B (en) * 2019-07-03 2021-12-31 德丰电创科技股份有限公司 Crop growth trend analysis system
CN113379595B (en) * 2020-03-09 2024-04-09 北京沃东天骏信息技术有限公司 Page picture synthesis method and device
CN112422825A (en) * 2020-11-16 2021-02-26 珠海格力电器股份有限公司 Intelligent photographing method, device, equipment and computer readable medium
CN117455823A (en) * 2023-11-23 2024-01-26 镁佳(北京)科技有限公司 Image adjusting method, device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102547104A (en) * 2010-09-30 2012-07-04 卡西欧计算机株式会社 Image processing apparatus capable of generating wide angle image
CN106131434A (en) * 2016-08-18 2016-11-16 深圳市金立通信设备有限公司 A kind of image pickup method based on multi-camera system and terminal
CN205721063U (en) * 2016-04-08 2016-11-23 凯美斯三维立体影像(惠州)有限公司 A kind of mobile phone with 3-D view shoot function

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542287B2 (en) * 2009-03-19 2013-09-24 Digitaloptics Corporation Dual sensor camera
CN103344213A (en) * 2013-06-28 2013-10-09 三星电子(中国)研发中心 Method and device for measuring distance of double-camera
CN105282421B (en) * 2014-07-16 2018-08-24 宇龙计算机通信科技(深圳)有限公司 A kind of mist elimination image acquisition methods, device and terminal
CN106303231A (en) * 2016-08-05 2017-01-04 深圳市金立通信设备有限公司 A kind of image processing method and terminal

Similar Documents

Publication Publication Date Title
CN106899781B (en) Image processing method and electronic equipment
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
US10389948B2 (en) Depth-based zoom function using multiple cameras
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN108335279B (en) Image fusion and HDR imaging
CN106210501B (en) Image synthesizing method and image processing apparatus
US9307134B2 (en) Automatic setting of zoom, aperture and shutter speed based on scene depth map
US9591237B2 (en) Automated generation of panning shots
WO2019105214A1 (en) Image blurring method and apparatus, mobile terminal and storage medium
JP4772839B2 (en) Image identification method and imaging apparatus
EP3480784B1 (en) Image processing method, and device
CN110493527B (en) Body focusing method and device, electronic equipment and storage medium
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
KR100953076B1 (en) Multi-view matching method and device using foreground/background separation
WO2019105151A1 (en) Method and device for image white balance, storage medium and electronic equipment
CN113129241B (en) Image processing method and device, computer readable medium and electronic equipment
CN110611768B (en) Multiple exposure photographic method and device
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
CN112261292B (en) Image acquisition method, terminal, chip and storage medium
US11756221B2 (en) Image fusion for scenes with objects at multiple depths
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN114339042A (en) Image processing method and device based on multiple cameras and computer readable storage medium
CN111105370A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN116456191A (en) Image generation method, device, equipment and computer readable storage medium
CN105467741A (en) Panoramic shooting method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201110