CN105791659B - Image processing method and electronic device - Google Patents
- Publication number
- CN105791659B (application CN201410804495.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- acquisition unit
- image acquisition
- pixel point
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention provides an image processing method and an electronic device. The method is applied to an electronic device provided with at least two image acquisition units, and comprises the following steps: acquiring a first image of a subject by a first image acquisition unit having first setting parameters, such that the first image has a first level of color components and a first level of image detail; acquiring a second image of the same subject by a second image acquisition unit having second setting parameters, different from the first setting parameters, such that the second image has a second level of color components and a second level of image detail; and fusing the first image and the second image according to a predetermined algorithm to generate a third image of the subject, such that the third image has the first level of color components and the second level of image detail.
Description
Technical Field
The present invention relates to the field of electronic devices, and more particularly, to an image processing method and an electronic device.
Background
At present, cameras for capturing images have become increasingly popular, and various electronic devices with camera modules have appeared, such as mobile phones and tablet computers.
However, pictures taken with an ordinary camera or a mobile phone camera are often unsatisfactory.
In the related art, one way to improve image quality when shooting in a dark environment is to set the exposure time longer or the ISO (sensitivity) higher to obtain an image with normal color. However, a long exposure time makes object motion or camera shake likely to blur the image, and a high ISO easily introduces excessive noise. Conversely, if the exposure time is set short or the ISO low, an image with normal color cannot be obtained.
Another way is to remove blur or noise through image processing algorithms. However, deblurring and denoising algorithms are not robust and tend to introduce other artifacts.
For this reason, it is desirable to provide an image processing method and an electronic device capable of obtaining high-quality images under various environments.
Disclosure of Invention
According to an embodiment of the present invention, there is provided an image processing method applied to an electronic device having at least two image capturing units, the method including:
acquiring a first image of a subject by a first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image detail;
acquiring a second image of the same subject by a second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image detail, the second setting parameters being different from the first setting parameters; and
fusing the first image and the second image according to a predetermined algorithm, generating a third image of the subject such that the third image has a first level of color components and a second level of image detail.
Preferably, the method further comprises:
before starting shooting, acquiring calibration parameters of the positions between the first image acquisition unit and the second image acquisition unit, so that any pixel point of the first image shot by the first image acquisition unit corresponds to one horizontal line in the second image shot by the second image acquisition unit.
Preferably, the method further comprises:
detecting illuminance of ambient light to obtain a first illuminance value;
setting the first setting parameter of the first image acquisition unit according to the first illuminance value.
Preferably, the second setting parameter is preset to be smaller than a predetermined threshold.
Preferably, the first setting parameter and the second setting parameter further include an exposure time parameter and/or a sensitivity parameter, and
the exposure time of the first image acquisition unit is set according to the first illuminance value: it is set short when the first illuminance value is large and long when the first illuminance value is small, while
the exposure time of the second image acquisition unit is set to be less than a predetermined threshold.
Preferably, fusing the first image and the second image according to a predetermined algorithm, generating the third image of the subject further comprises:
acquiring a color value of each pixel point of the second image, and calculating an appropriate color value from the row of pixels of the first image corresponding to each pixel point, taking the horizontal constraint into account, as a replacement color value of each pixel point;
and replacing the color value of each pixel point of the second image with the replacement color value of each pixel point, thereby generating a third image of the subject.
Preferably, the horizontal constraint is expressed by the following formula:
E(e) = Σ_i ( Σ_j z_ij · ||e_i - g_j||^2 + λ · ||e_i - e_{i+1}||^2 )
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is a weight based on their distance in feature space, and λ is a predetermined coefficient.
Preferably, z_ij is determined by the following formula:
z_ij = exp(-||f_ei - f_gj||^2 / σ_a) · exp(-||i - j||^2 / σ_s)
where f_ei and f_gj are the feature vectors of the two points e_i and g_j, respectively, typically first-order gradients, i.e., f_ei = ∇e_i and f_gj = ∇g_j.
According to another embodiment of the present invention, there is provided an electronic apparatus including:
a first image acquisition unit configured to acquire a first image of a subject, the first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image details;
a second image acquisition unit configured to acquire a second image of the same subject, the second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image details, the second setting parameters being different from the first setting parameters; and
an image processing unit configured to fuse the first image and the second image according to a predetermined algorithm, and generate a third image of the subject such that the third image has a first level of color components and a second level of image details.
Preferably, the electronic device further includes:
a calibration unit configured to acquire calibration parameters of the positions between the first image acquisition unit and the second image acquisition unit before shooting starts, so that any pixel point of the first image shot by the first image acquisition unit corresponds to one horizontal line in the second image shot by the second image acquisition unit.
Preferably, the electronic device further includes:
a setting unit configured to detect illuminance of ambient light to acquire a first illuminance value, and to set the first setting parameter of the first image acquisition unit according to the first illuminance value.
Preferably, the setting unit sets the second setting parameter to be smaller than a predetermined threshold value in advance.
Preferably, the first setting parameter and the second setting parameter further include an exposure time parameter and/or a sensitivity parameter, and
the exposure time of the first image acquisition unit is set according to the first illuminance value: it is set short when the first illuminance value is large and long when the first illuminance value is small, while
the exposure time of the second image acquisition unit is set to be less than a predetermined threshold.
Preferably, the image processing unit is further configured to:
acquiring a color value of each pixel point of the second image, and calculating an appropriate color value from the row of pixels of the first image corresponding to each pixel point, taking the horizontal constraint into account, as a replacement color value of each pixel point;
and replacing the color value of each pixel point of the second image with the replacement color value of each pixel point, thereby generating a third image of the subject.
Preferably, the horizontal constraint is expressed by the following formula:
E(e) = Σ_i ( Σ_j z_ij · ||e_i - g_j||^2 + λ · ||e_i - e_{i+1}||^2 )
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is a weight based on their distance in feature space, and λ is a predetermined coefficient.
Preferably, z_ij is determined by the following formula:
z_ij = exp(-||f_ei - f_gj||^2 / σ_a) · exp(-||i - j||^2 / σ_s)
where f_ei and f_gj are the feature vectors of the two points e_i and g_j, respectively, typically first-order gradients, i.e., f_ei = ∇e_i and f_gj = ∇g_j.
Therefore, according to the image processing method and the electronic device of the embodiment of the invention, a high-quality image can be obtained in a dark light environment.
Drawings
Fig. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present invention;
FIGS. 2a-2c are effect diagrams illustrating an image processing method according to a first embodiment of the present invention;
fig. 3 is an explanatory diagram illustrating a color-detail fusion process according to the first embodiment of the present invention;
fig. 4 is a flowchart illustrating an image processing method according to a second embodiment of the present invention; and
fig. 5 is a functional configuration block diagram illustrating an electronic apparatus according to a third embodiment of the present invention.
Detailed Description
An image processing method according to an embodiment of the present invention will be described in detail below with reference to the accompanying drawings.
The image processing method according to the embodiment of the invention is applied to the electronic equipment comprising at least two image acquisition units. Such an electronic device is for example a camera or a mobile phone comprising two or more cameras.
< first embodiment >
Fig. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present invention. As shown in fig. 1, the method 100 includes:
step S101: acquiring a first image of a subject by a first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image detail;
step S102: acquiring a second image of the same subject by a second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image detail, the second setting parameters being different from the first setting parameters; and
step S103: fusing the first image and the second image according to a predetermined algorithm, generating a third image of the subject such that the third image has a first level of color components and a second level of image detail.
Specifically, as is known to those skilled in the art, a good image needs both correct color components and genuinely sharp detail.
To obtain correct color components, the exposure time must be set long or the ISO (sensitivity) high. Conversely, to obtain genuinely sharp detail, the exposure time must be set short or the ISO low, because a long exposure time makes object motion or camera shake likely to blur the image, and a high ISO easily introduces excessive noise. Consequently, a conventional single-lens electronic apparatus cannot obtain an image that has both correct color and genuinely sharp detail.
On the other hand, especially in a dark light environment, in order to make the quality of a photographed image better, it is necessary to set the exposure time longer or the ISO (sensitivity) higher to obtain sufficient exposure, which inevitably causes image blur due to object motion or camera shake.
To this end, according to the image processing method of the first embodiment of the present invention, providing two image capturing units having different exposure parameters on an electronic device makes it possible to obtain an image having both a correct color portion and a true sharp detail portion.
Specifically, in step S101, a first image of a subject is acquired by a first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image detail.
For example, the first setting parameter includes an exposure time parameter and/or a sensitivity (ISO) parameter. The first image capturing unit may be set to have a normal exposure time parameter and/or sensitivity parameter.
As shown in fig. 2a, the first image is acquired by the normally exposed first image acquisition unit. Because the exposure is normal, the exposure time is relatively long and/or the sensitivity is set relatively high. As a result, the first image is easily blurred by camera shake or the like, that is, its level of image detail is relatively low. On the other hand, because the exposure is sufficient, the color components of the acquired first image are correct.
Then, in step S102, a second image of the same subject is acquired by a second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image details, the second setting parameters being different from the first setting parameters.
Similarly, for example, the second setting parameter includes an exposure time parameter and/or a sensitivity (ISO) parameter. A second image capturing unit may be set to have a short exposure time parameter and/or a low sensitivity parameter.
As shown in fig. 2b, the second image is acquired by the underexposed second image acquisition unit. Because of the underexposure, the exposure time is relatively short and/or the sensitivity is set relatively low. Since the exposure time is short, the detail of the acquired second image is genuinely sharp, and the low ISO keeps it free of noise. On the other hand, the insufficient exposure leaves the image colors incorrect, as shown in fig. 2b.
It should be noted that step S101 and step S102 may be executed sequentially or simultaneously. Preferably, they are performed simultaneously, so that both images capture the same subject at the same moment.
Then, in step S103, the first image and the second image may be fused according to a predetermined algorithm, and a third image of the subject may be generated such that the third image has a first level of color components and a second level of image details.
Specifically, as shown in fig. 2c, the generated third image has a first level of color components and a second level of image detail.
That is, in the image processing method according to the embodiment of the present invention, by fusing the first image having the correct image color and the second image having the true sharp image details, the third image having both the correct color component and the true sharp image details can be obtained.
The process of image color-detail fusion in the image processing method according to the embodiment of the present invention will be described in detail below.
First, a calibration process (camera calibration process) of a position between two cameras is performed for two image acquisition units of an electronic apparatus.
For example, multiple images of a checkerboard-like pattern may be taken for position calibration. The result of the calibration is a series of parameters, including the camera intrinsic parameters (e.g., focal length f and camera center position Cx, Cy) and the camera extrinsic parameters between the two cameras (a 3x3 rotation matrix R and a translation vector T, where T contains three parameters (tx, ty, tz) representing the translations in the three directions), etc.
It is to be noted that the camera calibration process is well known to those skilled in the art, and a detailed description thereof is omitted here.
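The patent gives no code for this step. As an illustrative sketch (all numeric values are hypothetical, and the parallel-camera geometry is an assumption), the calibration parameters above determine a fundamental matrix F = K2^{-T} [t]_x R K1^{-1} that maps any pixel of the first image to its epipolar line in the second image — the "one horizontal line" referred to in the text when the baseline is purely horizontal:

```python
import numpy as np

def skew(t):
    """Skew-symmetric cross-product matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    """F maps a homogeneous pixel in camera 1 to its epipolar line in camera 2."""
    E = skew(t) @ R                      # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

# Hypothetical intrinsics (focal length f, center Cx, Cy), same for both cameras
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # cameras parallel (rectified setup)
t = np.array([60.0, 0.0, 0.0])           # pure horizontal baseline (tx, ty, tz)

F = fundamental_matrix(K, K, R, t)
p1 = np.array([400.0, 300.0, 1.0])       # homogeneous pixel in the first image
line = F @ p1                            # epipolar line (a, b, c): ax + by + c = 0
# With a purely horizontal baseline, a ≈ 0, so the line is horizontal: y = -c/b
```

With a nonzero rotation or a vertical baseline component the epipolar line would be tilted, which is why the text performs the calibration before assuming a purely horizontal search.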
The image color-detail fusion process will now be described with reference to fig. 3. As shown in fig. 3, the left image is the second image with clear detail, and the right image is the first image with correct color.
Through the camera calibration process, any point in the image obtained by one camera can be made to correspond to one horizontal line in the image obtained by the other camera. Therefore, only constraints in the horizontal direction need to be considered when performing color-detail fusion.
To achieve color-detail fusion, that is, to give the generated third image both correct color components and genuinely sharp detail, the third image must take its color components from the first image and its image details from the second image.
That is, the color value of each pixel in the second image (which has genuinely sharp detail) is replaced with the color value of the corresponding pixel of the first image. Because camera calibration cannot establish an exact one-to-one correspondence between the pixels of the two images, but can only map each pixel of the second image to a row of pixels in the first image, an appropriate color value is computed from that row, taking the horizontal constraint into account, and used as the replacement color value of the pixel.
Then, the color value of each pixel of the second image may be replaced with the replacement color value of each pixel, thereby generating a third image of the subject.
Preferably, the horizontal constraint is expressed by the following equation (1):
E(e) = Σ_i ( Σ_j z_ij · ||e_i - g_j||^2 + λ · ||e_i - e_{i+1}||^2 )    (1)
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is a weight based on their distance in feature space, and λ is a predetermined coefficient.
As shown in fig. 3, a color value of each pixel point of the second image may be obtained, and an appropriate color value is calculated from the row of pixels of the first image corresponding to each pixel point, taking the horizontal constraint into account, as the replacement color value of that pixel point.
Specifically, assuming e_i is the color of the i-th point in the second image, its color can be taken from the points g_0, g_1, g_2, …, g_j, …, g_{N-1} of the first image, and the following data constraint is defined:
E_d(e_i) = Σ_j z_ij · ||e_i - g_j||^2
where z_ij is a function of the distance between positions i and j, which can be defined in terms of the distance between two feature vectors:
z_ij = exp(-||f_ei - f_gj||^2 / σ_a) · exp(-||i - j||^2 / σ_s)
where exp is the exponential function (base e), and f_ei and f_gj are the feature vectors of the two points e_i and g_j, typically first-order gradients, that is, f_ei = ∇e_i and f_gj = ∇g_j.
Thus, for the point e_i, z_ij is used to weight the colors of the pixels in the corresponding row.
The second term of the energy equation is a spatial term: it assumes that the colors of two adjacent points do not differ much, so e_i and e_{i+1} cannot differ too much.
Thus, the total energy equation is obtained as:
E(e) = Σ_i ( Σ_j z_ij · ||e_i - g_j||^2 + λ · ||e_i - e_{i+1}||^2 )
the solution of this energy equation is as follows.
Because E(e) is a quadratic function, its derivative is linear, so the optimal solution can be obtained with a standard linear-equation solver.
Therefore, by minimizing this energy function, the color value of each pixel in the second image (which has genuinely sharp detail) is replaced with a color value derived from the corresponding pixels of the first image, generating a third image with both correct color components and genuinely sharp detail.
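A toy sketch of this solution step (the matrix sizes and values are hypothetical): since E(e) is quadratic, setting ∂E/∂e_i = 0 yields a tridiagonal linear system, which a standard solver handles directly:

```python
import numpy as np

def solve_color_fusion(Z, g, lam=0.5):
    """Minimize E(e) = Σ_i Σ_j Z[i,j]*(e_i - g_j)^2 + lam * Σ_i (e_i - e_{i+1})^2.

    E is quadratic, so dE/de = 0 gives a linear system A e = b with A tridiagonal:
    (Σ_j z_ij + lam*#neighbors) e_i - lam*e_{i-1} - lam*e_{i+1} = Σ_j z_ij g_j.
    """
    n = Z.shape[0]
    A = np.zeros((n, n))
    b = Z @ g                          # right-hand side: Σ_j z_ij * g_j per row
    for i in range(n):
        A[i, i] = Z[i].sum()
        if i > 0:                      # left neighbor term of the spatial penalty
            A[i, i] += lam
            A[i, i - 1] -= lam
        if i < n - 1:                  # right neighbor term
            A[i, i] += lam
            A[i, i + 1] -= lam
    return np.linalg.solve(A, b)

# Toy example: each e_i is tied almost entirely to g_i (Z near the identity)
g = np.array([10.0, 20.0, 30.0, 40.0])
Z = np.eye(4) + 0.01
e = solve_color_fusion(Z, g, lam=0.1)
# e stays close to g but is slightly smoothed by the spatial term
```

For real image rows the same structure applies per scanline; only the size of the tridiagonal system grows.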
Therefore, according to the image processing method of the embodiment of the present invention, a high-quality image can be acquired under various environments.
< second embodiment >
In the first embodiment described above, the user can set the exposure parameters of the first image capturing unit and the second image capturing unit.
In the second embodiment, the exposure parameters of the first image capturing unit and the second image capturing unit can also be set by detecting ambient light.
As shown in fig. 4, an image processing method according to a second embodiment of the present invention includes:
step S401: detecting illuminance of ambient light to acquire a first illuminance value, setting the first setting parameter of the first image acquisition unit according to the first illuminance value, and setting a second setting parameter of the second image acquisition unit to be smaller than a predetermined threshold value.
Step S402: acquiring a first image of a subject by a first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image detail;
step S403: acquiring a second image of the same subject by a second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image detail, the second setting parameters being different from the first setting parameters; and
step S404: fusing the first image and the second image according to a predetermined algorithm, generating a third image of the subject such that the third image has a first level of color components and a second level of image detail.
In step S401, before starting shooting, illuminance of ambient light is first detected to acquire a first illuminance value, the first setting parameter of the first image acquisition unit is set according to the first illuminance value, and the second setting parameter of the second image acquisition unit is set to be smaller than a predetermined threshold value.
Specifically, for example, the first setting parameter and the second setting parameter further include an exposure time parameter and/or a sensitivity parameter.
The exposure time of the first image acquisition unit is set according to the first illuminance value: short when the first illuminance value is large, and long when it is small. The exposure time of the second image acquisition unit is then set below a predetermined threshold.
That is, the exposure parameters of the first image acquisition unit are set automatically according to the ambient illuminance, while the exposure time of the second image acquisition unit is capped by default below a threshold (for example, 1/30 s) to prevent blur.
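A hedged sketch of this parameter-setting logic (the illuminance thresholds and exposure values below are invented for illustration; only the 1/30 s cap on the second unit comes from the text):

```python
BLUR_THRESHOLD_S = 1.0 / 30.0   # max exposure of the second unit (from the text)

def set_exposures(lux):
    """Illustrative mapping: darker scenes get a longer first-unit exposure.

    The lux breakpoints and exposure times are hypothetical examples only.
    """
    if lux >= 1000:      # bright daylight
        first = 1.0 / 500.0
    elif lux >= 100:     # indoor lighting
        first = 1.0 / 60.0
    else:                # dark scene: long exposure for correct color
        first = 1.0 / 8.0
    # The second unit is always kept short to avoid motion/shake blur.
    second = min(first, BLUR_THRESHOLD_S)
    return first, second
```

In bright light both units can use the same short exposure; in dim light the first unit is allowed to exceed the cap while the second unit stays clamped at it.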
After the setting of the exposure parameters of the first image acquisition unit and the second image acquisition unit is completed, the photographing using the first image acquisition unit and the second image acquisition unit may be started.
Then, subsequent steps S402 to S404 are substantially the same as steps S101 to S103 in the first embodiment, and detailed description thereof is omitted here.
According to the second embodiment of the present invention, even in a dark environment, correct color components can be captured by setting the exposure time of the first image acquisition unit long and/or its sensitivity high. On the other hand, capping the exposure time of the second image acquisition unit below the threshold by default prevents blur, so genuinely sharp image detail can be obtained.
Therefore, according to the image processing method of the embodiment of the present invention, a high-quality image can be acquired under various environments.
< third embodiment >
Next, an electronic apparatus according to an embodiment of the present invention is described with reference to fig. 5.
As shown in fig. 5, an electronic device 500 according to an embodiment of the present invention includes:
a first image acquisition unit 501 configured to acquire a first image of a subject, the first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image details;
a second image acquisition unit 502 configured to acquire a second image of the same subject, the second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image details, the second setting parameters being different from the first setting parameters; and
an image processing unit 503 configured to fuse the first image and the second image according to a predetermined algorithm, and generate a third image of the subject such that the third image has a first level of color components and a second level of image details.
Preferably, the electronic device further includes:
a calibration unit 504 configured to acquire calibration parameters of the positions between the first image acquisition unit and the second image acquisition unit before shooting starts, so that any pixel point of the first image shot by the first image acquisition unit corresponds to one horizontal line in the second image shot by the second image acquisition unit.
Preferably, the electronic device further includes:
a setting unit 505 configured to detect illuminance of ambient light to acquire a first illuminance value, and to set the first setting parameter of the first image acquisition unit according to the first illuminance value.
Preferably, the setting unit sets the second setting parameter to be smaller than a predetermined threshold value in advance.
Preferably, the first setting parameter and the second setting parameter further include an exposure time parameter and/or a sensitivity parameter, and
the exposure time of the first image acquisition unit is set according to the first illuminance value: it is set short when the first illuminance value is large and long when the first illuminance value is small, while
the exposure time of the second image acquisition unit 502 is set to be less than a predetermined threshold.
Preferably, the image processing unit is further configured to:
acquiring a color value of each pixel point of the second image, and calculating an appropriate color value from the row of pixels of the first image corresponding to each pixel point, taking the horizontal constraint into account, as a replacement color value of each pixel point;
and replacing the color value of each pixel point of the second image with the replacement color value of each pixel point, thereby generating a third image of the subject.
Preferably, the horizontal constraint is expressed by the following formula:
E(e) = Σ_i ( Σ_j z_ij · ||e_i - g_j||^2 + λ · ||e_i - e_{i+1}||^2 )
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is a weight based on their distance in feature space, and λ is a predetermined coefficient.
Preferably, z_ij is determined by the following formula:
z_ij = exp(-||f_ei - f_gj||^2 / σ_a) · exp(-||i - j||^2 / σ_s)
where f_ei and f_gj are the feature vectors of the two points e_i and g_j, respectively, typically first-order gradients, i.e., f_ei = ∇e_i and f_gj = ∇g_j.
Therefore, according to the electronic device of the embodiment of the invention, the high-quality image can be obtained under various environments.
It is to be noted that the electronic device according to the embodiments is illustrated only in terms of its functional units; the connections between them are not described in detail, since those skilled in the art will understand that the functional units may be connected as appropriate by a bus, internal wiring, or the like, such connections being well known in the art.
It should be noted that, in the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Finally, it should be noted that the series of processes described above includes not only processes performed in time series in the order described herein, but also processes performed in parallel or individually, rather than in time series.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, and may also be implemented by hardware entirely. With this understanding in mind, all or part of the technical solutions of the present invention that contribute to the background can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which can be a personal computer, a server, or a network device, etc.) to execute the methods according to the embodiments or some parts of the embodiments of the present invention.
The present invention has been described in detail, and the principle and embodiments of the present invention are explained herein by using specific examples, which are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (12)
1. An image processing method is applied to an electronic device, the electronic device is provided with at least two image acquisition units, and the method comprises the following steps:
acquiring a first image of a subject by a first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image detail;
acquiring a second image of the same subject by a second image acquisition unit, the second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image detail, the first and second setting parameters including an exposure time parameter and/or a sensitivity parameter, the second setting parameters being different from the first setting parameters, the exposure time of the first image acquisition unit being longer than that of the second image acquisition unit and/or the sensitivity of the first image acquisition unit being higher than that of the second image acquisition unit; and
acquiring a color value of each pixel point of the second image, and calculating, with a horizontal constraint taken into account, an appropriate color value from the row of pixels of the first image corresponding to each pixel point as a replacement color value of that pixel point;
replacing the color value of each pixel point of the second image with the replacement color value of that pixel point, thereby generating a third image of the subject, such that the third image has the first level of color components and the second level of image detail, wherein the replacement color value of each pixel point is determined according to the feature vectors of the corresponding pixel points in the first image and the second image; and
before starting shooting, acquiring calibration parameters of the relative position between the first image acquisition unit and the second image acquisition unit, so that any pixel point of a first image captured by the first image acquisition unit corresponds to one horizontal line in a second image captured by the second image acquisition unit.
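The fusion step of claim 1 can be sketched in code. This is a minimal illustration under stated assumptions, not the patented implementation: the function name, the per-pixel feature vectors, and the nearest-feature matching along the corresponding row are assumptions consistent with the claim language (the images are taken as already rectified so that epipolar lines are horizontal, per the calibration step).

```python
import numpy as np

def fuse_rows(first_img, second_img, features_first, features_second):
    """For each pixel of the detailed second image, pick a replacement color
    from the corresponding row of the color-rich first image by choosing the
    candidate whose feature vector is nearest in feature space (z_ij).

    Assumes both images are rectified so that any pixel of the second image
    corresponds to one horizontal line of the first image."""
    h, w, _ = second_img.shape
    third_img = np.empty_like(second_img)   # third image: geometry of the second
    for y in range(h):
        row_feats = features_first[y]       # (w, d) feature vectors of one row
        for x in range(w):
            # z_ij: feature-space distances from pixel (y, x) to each candidate j
            d = np.linalg.norm(row_feats - features_second[y, x], axis=1)
            j = int(np.argmin(d))           # best-matching column in the row
            third_img[y, x] = first_img[y, j]  # take the first image's color
    return third_img
```

A real implementation would also apply the horizontal smoothness term (the λ-weighted constraint of claim 5) rather than matching each pixel independently.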
2. The method of claim 1, further comprising:
detecting illuminance of ambient light to obtain a first illuminance value;
setting the first setting parameter of the first image acquisition unit according to the first illuminance value.
3. The method of claim 2, wherein the second setting parameter is preset to be less than a predetermined threshold.
4. The method of claim 3, wherein,
the exposure time of the first image acquisition unit is set according to the first illuminance value, being set shorter when the first illuminance value is large and longer when the first illuminance value is small, and
the exposure time of the second image acquisition unit is set to be less than a predetermined threshold.
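The exposure policy of claims 2 to 4 can be illustrated as follows. The function name, the specific mapping from illuminance to exposure, and all numeric values are assumptions for illustration only; the claims specify only that the first unit's exposure varies inversely with ambient illuminance while the second unit's exposure stays below a fixed threshold.

```python
def set_exposures(illuminance_lux, max_exposure_s=1 / 15, min_exposure_s=1 / 1000,
                  second_threshold_s=1 / 250):
    """Illustrative mapping (values are assumptions, not from the patent):
    the first unit's exposure shrinks as ambient light grows, preserving color
    in dim scenes; the second unit's exposure is pinned below a threshold so
    its image keeps a high level of detail (little motion blur)."""
    # Longer exposure when the scene is dark, shorter when it is bright.
    first_exposure = max(min_exposure_s,
                         min(max_exposure_s, 10.0 / max(illuminance_lux, 1e-6)))
    # Second unit: always at most the predetermined threshold.
    second_exposure = min(second_threshold_s, first_exposure)
    return first_exposure, second_exposure
```

With this sketch, a dim scene (e.g. 50 lux) yields a long first exposure and a short second one, matching the claimed asymmetry between the two units.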
5. The method of claim 1, wherein the horizontal constraint is applied according to the following equation:
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is the distance between their feature vectors in feature space, and λ is a predetermined coefficient.
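The equation itself appears only as an image in the original patent and did not survive extraction. A plausible reconstruction, consistent with the term definitions above but an assumption rather than the patent's verbatim formula, is a row-wise minimization combining the feature-space distance with a λ-weighted horizontal smoothness term:

```latex
j^{*}(i) = \arg\min_{j}\Bigl( z_{ij} + \lambda \,\bigl|\, j - j^{*}(i-1) \,\bigr| \Bigr),
\qquad e_i \leftarrow g_{\,j^{*}(i)}
```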
7. An electronic device, comprising:
a first image acquisition unit configured to acquire a first image of a subject, the first image acquisition unit having first setting parameters such that the first image has a first level of color components and a first level of image details;
a second image acquisition unit configured to acquire a second image of the same subject, the second image acquisition unit having second setting parameters such that the second image has a second level of color components and a second level of image detail, the first and second setting parameters including an exposure time parameter and/or a sensitivity parameter, the second setting parameters being different from the first setting parameters, the exposure time of the first image acquisition unit being longer than that of the second image acquisition unit and/or the sensitivity of the first image acquisition unit being higher than that of the second image acquisition unit; and
an image processing unit configured to acquire a color value of each pixel point of the second image, calculate, with a horizontal constraint taken into account, an appropriate color value from the row of pixels of the first image corresponding to each pixel point as a replacement color value of that pixel point, and replace the color value of each pixel point of the second image with the replacement color value of that pixel point, thereby generating a third image of the subject, such that the third image has the first level of color components and the second level of image detail, wherein the replacement color value of each pixel point is determined according to the feature vectors of the corresponding pixel points in the first image and the second image; and
a calibration unit configured to acquire, before starting shooting, calibration parameters of the relative position between the first image acquisition unit and the second image acquisition unit, so that any pixel point of the first image captured by the first image acquisition unit corresponds to one horizontal line in the second image captured by the second image acquisition unit.
8. The electronic device of claim 7, further comprising:
a setting unit configured to detect illuminance of ambient light to acquire a first illuminance value, and to set the first setting parameter of the first image acquisition unit according to the first illuminance value.
9. The electronic device of claim 8, wherein the setting unit presets the second setting parameter to be less than a predetermined threshold.
10. The electronic device of claim 9, wherein
the exposure time of the first image acquisition unit is set according to the first illuminance value, being set shorter when the first illuminance value is large and longer when the first illuminance value is small, and
the exposure time of the second image acquisition unit is set to be less than a predetermined threshold.
11. The electronic device of claim 7, wherein the horizontal constraint is imposed according to the following formula:
where e_i is the color value of the i-th pixel position in the second image, g_j is the color value of the j-th pixel position in the first image, z_ij is the distance between their feature vectors in feature space, and λ is a predetermined coefficient.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410804495.3A CN105791659B (en) | 2014-12-19 | 2014-12-19 | Image processing method and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410804495.3A CN105791659B (en) | 2014-12-19 | 2014-12-19 | Image processing method and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105791659A CN105791659A (en) | 2016-07-20 |
CN105791659B true CN105791659B (en) | 2020-10-27 |
Family
ID=56385266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410804495.3A Active CN105791659B (en) | 2014-12-19 | 2014-12-19 | Image processing method and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105791659B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106851115B (en) * | 2017-03-31 | 2020-05-26 | 联想(北京)有限公司 | Image processing method and device |
CN110876014B (en) * | 2018-08-31 | 2022-04-08 | 北京小米移动软件有限公司 | Image processing method and device, electronic device and storage medium |
CN113758560B (en) * | 2020-06-04 | 2024-02-06 | 北京小米移动软件有限公司 | Parameter configuration method and device of photosensitive element, intelligent equipment and medium |
CN114691252B (en) * | 2020-12-28 | 2023-05-30 | 中国联合网络通信集团有限公司 | Screen display method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102090068A (en) * | 2008-08-01 | 2011-06-08 | 伊斯曼柯达公司 | Improved image formation using different resolution images |
CN102420944A (en) * | 2011-04-25 | 2012-04-18 | 展讯通信(上海)有限公司 | High dynamic-range image synthesis method and device |
CN104077759A (en) * | 2014-02-28 | 2014-10-01 | 西安电子科技大学 | Multi-exposure image fusion method based on color perception and local quality factors |
CN104104886A (en) * | 2014-07-24 | 2014-10-15 | 深圳市中兴移动通信有限公司 | Overexposure shooting method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060017597A1 (en) * | 2002-09-09 | 2006-01-26 | Koninklijke Philips Electronics N.V. | Method of signal reconstruction, imaging device and computer program product |
2014-12-19 CN CN201410804495.3A patent/CN105791659B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN105791659A (en) | 2016-07-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200374461A1 (en) | Still image stabilization/optical image stabilization synchronization in multi-camera image capture | |
WO2018228467A1 (en) | Image exposure method and device, photographing device, and storage medium | |
CN106899781B (en) | Image processing method and electronic equipment | |
CN104349066B (en) | A kind of method, apparatus for generating high dynamic range images | |
US9870602B2 (en) | Method and apparatus for fusing a first image and a second image | |
CN113992861B (en) | Image processing method and image processing device | |
WO2018176925A1 (en) | Hdr image generation method and apparatus | |
US20130124471A1 (en) | Metadata-Driven Method and Apparatus for Multi-Image Processing | |
WO2017096866A1 (en) | Method and apparatus for generating high dynamic range image | |
US20130089262A1 (en) | Metadata-Driven Method and Apparatus for Constraining Solution Space in Image Processing Techniques | |
US20130121525A1 (en) | Method and Apparatus for Determining Sensor Format Factors from Image Metadata | |
CN110675458B (en) | Method and device for calibrating camera and storage medium | |
CN105791659B (en) | Image processing method and electronic device | |
CN107864340B (en) | A kind of method of adjustment and photographic equipment of photographic parameter | |
CN107704798A (en) | Image weakening method, device, computer-readable recording medium and computer equipment | |
US10972676B2 (en) | Image processing method and electronic device capable of optimizing hdr image by using depth information | |
CN105391940B (en) | A kind of image recommendation method and device | |
CN105578020B (en) | Self-timer system and method | |
CN113793257B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
WO2016029380A1 (en) | Image processing method, computer storage medium, device, and terminal | |
CN113159229A (en) | Image fusion method, electronic equipment and related product | |
CN112106352A (en) | Image processing method and device | |
WO2016184060A1 (en) | Photographing method and device for terminal | |
CN110689502B (en) | Image processing method and related device | |
CN104125385B (en) | Image editing method and image processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |