CN108989699B - Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium - Google Patents


Info

Publication number
CN108989699B
CN108989699B (application CN201810886741.2A)
Authority
CN
China
Prior art keywords
image · frame · reference frame · exposure · adjusting
Prior art date
Legal status
Active
Application number
CN201810886741.2A
Other languages
Chinese (zh)
Other versions
CN108989699A (en
Inventor
张弓
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810886741.2A priority Critical patent/CN108989699B/en
Publication of CN108989699A publication Critical patent/CN108989699A/en
Application granted granted Critical
Publication of CN108989699B publication Critical patent/CN108989699B/en

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/70 — Circuitry for compensating brightness variation in the scene
    • H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image synthesis method, an image synthesis device, and an imaging device. The method includes: acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, where the first exposure duration is shorter than the second; performing contour recognition of the imaged object on both the reference frame and the adjustment frame; determining, from the contours, that image areas enclosed by similar contours in the reference frame and the adjustment frame correspond to each other; correcting the corresponding image areas in the adjustment frame according to the image areas in the reference frame; and finally synthesizing the corrected adjustment frame with the reference frame to obtain a target image. By taking the short-exposure image as the reference frame and correcting the longer-exposure image according to displacement, viewing angle, and picture analysis, the method addresses the prior-art problem that, when an HDR image is synthesized from frames of different exposure durations, a moving subject easily causes misalignment or blur in the result.

Description

Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image synthesis method and apparatus, an imaging device, an electronic device, and a computer-readable storage medium.
Background
When a moving object is photographed, its edges in the picture are easily misaligned or blurred as the exposure duration increases. Conventionally, to obtain a High-Dynamic-Range (HDR) image, multiple images captured at different exposure levels are combined according to weights.
However, during actual synthesis the position and edge shape of the moving object differ somewhat between exposures, so the final synthesized image is prone to misalignment or blur in the moving area.
Disclosure of Invention
The application provides an image synthesis method, an image synthesis device, an imaging device, an electronic device, and a computer-readable storage medium. By correcting and edge-processing the differently exposed images to be synthesized in an image preprocessing step, they address the prior-art problem that, when HDR images are synthesized from frames of different exposure durations, the edges of a moving subject are easily misaligned or blurred.
An embodiment of an aspect of the present application provides an image synthesis method, including:
respectively acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration; wherein the first exposure duration is less than the second exposure duration;
carrying out contour identification on the imaging object on the reference frame and the adjusting frame;
determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours;
correcting the corresponding image area in the adjusting frame according to the image area in the reference frame;
and synthesizing the corrected adjusting frame with the reference frame to obtain a target image.
According to the image synthesis method of the embodiments of the application, a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration are acquired, the first exposure duration being less than the second; contour recognition of the imaged object is performed on the reference frame and the adjustment frame; image areas enclosed by similar contours in the two frames are determined to correspond; the corresponding image areas in the adjustment frame are corrected according to the image areas in the reference frame; and finally the corrected adjustment frame is synthesized with the reference frame to obtain a target image. By calibrating correspondences and processing edges of the differently exposed images in a preprocessing step, the method addresses the prior-art problem that synthesizing HDR images from frames of different exposure durations easily produces misalignment or blur when the photographed object is moving.
In another aspect, an embodiment of the present application provides an image synthesizing apparatus, including:
the acquisition module is used for respectively acquiring a reference frame shot in a first exposure duration and an adjustment frame shot in a second exposure duration; wherein the first exposure duration is less than the second exposure duration;
the identification module is used for carrying out contour identification on the imaging object on the reference frame and the adjusting frame;
the determining module is used for determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours;
the correction module is used for correcting the corresponding image area in the adjustment frame according to the image area in the reference frame;
and the synthesis module is used for synthesizing the corrected adjusting frame and the reference frame to obtain a target image.
The image synthesis device of the embodiments of the application acquires a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, the first exposure duration being less than the second; performs contour recognition of the imaged object on the reference frame and the adjustment frame; determines that image areas enclosed by similar contours in the two frames correspond; corrects the corresponding image areas in the adjustment frame according to the image areas in the reference frame; and finally synthesizes the corrected adjustment frame with the reference frame to obtain a target image. By calibrating correspondences and processing edges of the differently exposed images in a preprocessing step, the device addresses the prior-art problem that synthesizing HDR images from frames of different exposure durations easily produces misalignment or blur when the photographed object is moving.
An embodiment of another aspect of the present application provides an imaging apparatus including a pixel unit array composed of a plurality of photosensitive pixels, the imaging apparatus further including a processor configured to:
respectively acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration; wherein the first exposure duration is less than the second exposure duration;
carrying out contour identification on the imaging object on the reference frame and the adjusting frame;
determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours;
correcting the corresponding image area in the adjusting frame according to the image area in the reference frame;
and synthesizing the corrected adjusting frame with the reference frame to obtain a target image.
The imaging device of the embodiments of the application acquires a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, the first exposure duration being less than the second; performs contour recognition of the imaged object on the reference frame and the adjustment frame; determines that image areas enclosed by similar contours in the two frames correspond; corrects the corresponding image areas in the adjustment frame according to the image areas in the reference frame; and finally synthesizes the corrected adjustment frame with the reference frame to obtain a target image. By calibrating correspondences and processing edges of the differently exposed images in a preprocessing step, the device addresses the prior-art problem that synthesizing HDR images from frames of different exposure durations easily produces misalignment or blur when the photographed object is moving.
An embodiment of another aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the program, the image synthesis method described above is implemented.
In yet another aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image synthesis method of the foregoing embodiments.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image synthesis method according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating an adjustment of an image according to an image feature point according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another image synthesis method provided in an embodiment of the present application;
FIG. 4 is an exemplary graph of images at different exposures provided by an embodiment of the present application;
FIG. 5 is a diagram illustrating an example of reference frame image contour identification and adjustment frame pixel adjustment provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating an example of sharpening an image area in an adjustment frame according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image synthesis apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an imaging apparatus provided in an embodiment of the present application;
fig. 9 is a schematic internal structure diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of an image processing circuit according to some embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The application is mainly directed at the following prior-art problem: when a high-dynamic-range image is synthesized from several low-dynamic-range exposures and a moving object is present in the picture, the differing exposure start times and exposure durations make the final synthesized image prone to misalignment or blur in the moving area. To address this, an image synthesis method is proposed.
According to the image synthesis method, a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration are acquired, the first exposure duration being less than the second; contour recognition of the imaged object is performed on both frames; image areas enclosed by similar contours in the two frames are determined to correspond; the corresponding image areas in the adjustment frame are corrected according to the image areas in the reference frame; and finally the corrected adjustment frame is synthesized with the reference frame to obtain a target image.
An image synthesis method and apparatus of an embodiment of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of an image synthesis method according to an embodiment of the present application.
The image synthesis method is applied to an imaging device, wherein the imaging device comprises a pixel unit array formed by a plurality of photosensitive pixels and a processor.
As shown in fig. 1, the image synthesis method includes the steps of:
step 101, respectively acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration; wherein the first exposure time length is less than the second exposure time length.
In this embodiment, an electronic device with a shooting function captures images of the same scene with different exposure durations. When a moving object is photographed, the exposure duration directly affects the result, so the image shot with the first exposure duration serves as the reference frame and the image shot with the second exposure duration serves as the adjustment frame. The imaging device acquires the reference frame and the adjustment frame respectively, where the first exposure duration is less than the second exposure duration.
In order to acquire the reference frame shot with the first exposure duration and the adjustment frame shot with the second exposure duration, the present embodiment provides two possible implementations.
As one possible implementation, the photosensitive pixels in the pixel unit array are controlled to perform multiple exposures, one with the first exposure duration and one with the second. Within any single exposure, all photosensitive pixels in the array use the same exposure duration. Each photosensitive pixel of the imaging device's photosensitive array may comprise a plurality of long-exposure pixels, medium-exposure pixels, and short-exposure pixels, which in this implementation all share the same exposure duration per shot.
As another possible implementation, the photosensitive pixels are controlled to perform a single exposure using the first exposure duration and two second exposure durations simultaneously: the short-exposure pixels are exposed with the first exposure duration, while the long-exposure and medium-exposure pixels are exposed with their respective second exposure durations, the duration used by the long-exposure pixels being longer than that used by the medium-exposure pixels. The image captured by the short-exposure pixels is then used as the reference frame, and the image captured by the medium-exposure pixels as an adjustment frame.
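The single-exposure scheme above can be illustrated with a small sketch. This is a simplification with a hypothetical pixel layout (real sensors interleave exposure groups in hardware-specific patterns): short-, medium-, and long-exposure pixels are read out of one frame and regrouped into per-exposure sub-images.

```python
import numpy as np

def split_exposures(raw, pattern):
    """Regroup an interleaved single-exposure readout into per-exposure groups.

    raw     -- 2D array captured in one shot
    pattern -- 2D array of the same shape labelling each pixel
               's' (short), 'm' (medium), or 'l' (long)
    Returns a dict mapping label -> 1D array of that label's pixel values.
    """
    return {k: raw[pattern == k] for k in ("s", "m", "l")}

# Hypothetical 4x4 sensor tile: rows alternate short / medium / long / medium.
pattern = np.array([["s"] * 4, ["m"] * 4, ["l"] * 4, ["m"] * 4])
raw = np.arange(16).reshape(4, 4)

subs = split_exposures(raw, pattern)
# The short-exposure pixels would form the reference frame,
# the medium-exposure pixels an adjustment frame.
```

In a real pipeline each group would be demosaiced into a full image; here the grouping alone shows how one readout yields frames of three effective exposures.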
And 102, carrying out contour recognition on the imaging object on the reference frame and the adjusting frame.
In this embodiment, performing contour recognition of the imaged object on the reference frame and the adjustment frame means ignoring the image background, the texture inside the object, and noise interference, and extracting the object's contour with a suitable technique.
As one possible implementation, an edge-detection-based contour recognition method may be used on both frames. It responds to low-level abrupt changes in features such as brightness and color, completing edge detection by identifying points of pronounced brightness change in the image. This approach performs poorly when heavy noise or texture is present, in which case the following implementation can be used instead.
As another possible implementation, contour recognition can be completed with a mathematical model abstracted from the human visual system; such models are commonly applied to image smoothing and edge detection.
In this embodiment, contour recognition of the imaged object on the reference frame and the adjustment frame proceeds as follows. First, the input image is preprocessed: a small two-dimensional Gaussian template is typically used for smoothing filtering to remove image noise. Second, edge detection is performed on the smoothed image to obtain a preliminary edge response image, based on gradient features such as brightness and color that distinguish the object from the background. The edge response is then refined to yield a cleaner edge image that can serve as the contour.
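A minimal sketch of the pipeline just described, assuming Gaussian smoothing followed by a Sobel gradient magnitude as the edge response (kernel size and sigma are illustrative choices, not values from the patent):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Small 2D Gaussian template for smoothing filtering."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2D convolution with edge padding (clarity over speed)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def edge_response(img):
    """Smooth with a Gaussian, then take the Sobel gradient magnitude."""
    smoothed = convolve2d(img.astype(float), gaussian_kernel())
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(smoothed, sx)   # horizontal gradient
    gy = convolve2d(smoothed, sx.T) # vertical gradient
    return np.hypot(gx, gy)

# A bright square on a dark background: the response peaks on its border.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
resp = edge_response(img)
```

Thresholding `resp` and tracing the resulting edge pixels would give the contour used in the following steps.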
And 103, determining that the image areas surrounded by the similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours.
Specifically, after the contour of the imaging object is identified for the reference frame and the adjustment frame, the reference frame and the adjustment frame are compared according to the shape of the identified contour and the image color in the contour, and the similar contour in the reference frame and the adjustment frame is determined. In the reference frame and the adjusting frame, the image areas surrounded by the similar contours are determined to have corresponding relation.
As an example, if the outline of the a region in the reference frame is similar to that of the B region in the adjustment frame, the a region and the B region have a correspondence relationship.
And 104, correcting the corresponding image area in the adjusting frame according to the image area in the reference frame.
Specifically, the image area in the reference frame is compared with the corresponding image area in the adjustment frame to obtain their position difference, and the position of the corresponding area in the adjustment frame is then adjusted according to that difference. The adjustment may translate and rotate the image area, thereby correcting both translation and rotation angle.
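The translation part of this correction can be sketched as follows (a simplification: regions are given as binary masks, rotation is omitted, and the shift is rounded to whole pixels):

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary region mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def align_region(adjust_mask, reference_mask):
    """Shift the adjustment-frame region so its centroid matches the
    corresponding reference-frame region (integer-pixel translation only)."""
    ry, rx = centroid(reference_mask)
    ay, ax = centroid(adjust_mask)
    dy, dx = int(round(ry - ay)), int(round(rx - ax))
    return np.roll(adjust_mask, shift=(dy, dx), axis=(0, 1)), (dy, dx)

# Same square region, displaced by (2, 3) pixels in the adjustment frame.
ref = np.zeros((16, 16), dtype=bool); ref[4:8, 4:8] = True
adj = np.zeros((16, 16), dtype=bool); adj[6:10, 7:11] = True
aligned, (dy, dx) = align_region(adj, ref)
```

A fuller implementation would also estimate a rotation angle (for example from the region's second moments) and resample the pixel content rather than rolling a mask.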
And 105, synthesizing the corrected adjusting frame with the reference frame to obtain a target image.
In this embodiment, corresponding pixels of the corrected adjustment frame and the reference frame are summed according to weights to obtain synthesized pixel values, which then form the high-dynamic-range target image.
As a possible implementation manner, after the modified adjustment frame and the reference frame are synthesized into an image with a high dynamic range, the image with the high dynamic range is subjected to gray scale compression according to a response curve of the restored imaging system.
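The weighted summation and subsequent gray-scale compression can be sketched as follows. The well-exposedness weighting and the logarithmic compression are illustrative choices of mine; the patent only specifies summing by weight and compressing according to the imaging system's response curve.

```python
import numpy as np

def synthesize_hdr(frames, exposures):
    """Weighted per-pixel merge of aligned frames into one HDR image.

    frames    -- list of corrected, aligned images with values in [0, 1]
    exposures -- exposure duration of each frame
    Pixels near mid-gray get the highest weight; each frame is normalized
    by its exposure duration before the weighted sum.
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2.0 + 1e-6  # well-exposedness weight
        num += w * img / t                         # exposure-normalized value
        den += w
    return num / den

def tone_compress(hdr):
    """Simple global gray-scale compression of the HDR result back to [0, 1]."""
    return np.log1p(hdr) / np.log1p(hdr.max())

short = np.full((8, 8), 0.2)   # reference frame, exposure 1
long_ = np.full((8, 8), 0.6)   # corrected adjustment frame, exposure 3
hdr = synthesize_hdr([short, long_], [1.0, 3.0])
ldr = tone_compress(hdr)
```

Both uniform frames describe the same scene radiance (0.2 per unit exposure), so the merged value stays near 0.2 regardless of the weights.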
It should be noted that parallax exists between images of different frames. To optimize the correction, after the position of the corresponding image area in the adjustment frame is adjusted in step 104 of the above embodiment, feature points of the image areas in the reference frame and the adjustment frame must also be determined, and the images appropriately deformed according to them. The specific steps, shown in fig. 2, are as follows:
in step 201, first feature points of an image region in a reference frame are determined.
Specifically, the first feature points are extracted at equal intervals on the contour of the image area in the reference frame, and then the contour of the image area in the reference frame is divided into a plurality of first contour segments according to the first feature points. The feature points are discrete point sets sampled at equal intervals on the contour of the image area.
Step 202, determining each second feature point of the corresponding image area in the adjustment frame.
Specifically, in the contour of the image region in the adjustment frame, second contour segments matched with a plurality of first contour segments divided in the reference frame are queried, and then the end points of the second contour segments are determined as second feature points of the corresponding image region in the adjustment frame.
Further, from the matching relationship between the first contour segments and the second contour segments, the correspondence between the first feature points and the second feature points can be determined.
And 203, performing deformation adjustment on the corresponding image area in the adjustment frame according to the relative position difference between the first characteristic point and the corresponding second characteristic point.
Specifically, first, the second feature points of the image area in the adjustment frame are connected, and the corresponding image area in the adjustment frame is divided into a plurality of sub-areas by using the connection lines between the second feature points. And then adjusting the relative position between the second characteristic points according to the relative position difference between the first characteristic points and the corresponding second characteristic points so as to change the shapes of the plurality of sub-regions. And finally, carrying out deformation adjustment on the image contents in the plurality of sub-regions to fill the sub-regions with the changed shapes, thereby realizing the deformation adjustment on the corresponding image regions in the adjustment frame.
Furthermore, after the corresponding image area in the adjustment frame is subjected to deformation adjustment, edge sharpening is performed on the adjusted image area, so that the situation that the edge of the synthesized image is fuzzy is avoided.
According to the image synthesis method of this embodiment, a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration are acquired, the first exposure duration being less than the second; contour recognition of the imaged object is performed on both frames; image areas enclosed by similar contours in the two frames are determined to correspond; the corresponding image areas in the adjustment frame are corrected according to the image areas in the reference frame; and finally the corrected adjustment frame is synthesized with the reference frame to obtain a target image. By taking the short-exposure image as the reference frame and correcting and edge-processing the longer-exposure image according to displacement, viewing angle, and picture analysis, the method addresses the prior-art problem that synthesizing images of different exposure durations easily produces misalignment or blur when the photographed object is moving.
As an example, this embodiment synthesizes a high-dynamic-range image from three Low-Dynamic-Range (LDR) images taken with different exposure times. Fig. 3 is a schematic flowchart of another image synthesis method according to an embodiment of the present application.
As shown in fig. 3, the image synthesizing method includes the steps of:
in step 301, a reference frame photographed with a short exposure and an adjustment frame photographed with a medium exposure and a long exposure are acquired, respectively.
In the embodiment of the application, an electronic device with a shooting function is adopted to shoot images with different exposure durations for the same scene, and referring to fig. 4, the three images are respectively shot under short exposure, medium exposure and long exposure, the shot image with short exposure is used as a reference frame, and the shot image with medium exposure and long exposure is used as an adjustment frame.
And step 302, carrying out contour recognition on the imaging object on the reference frame and the adjusting frame.
In this embodiment, the method for identifying the contour of the imaging object for the reference frame and the adjustment frame is similar to the method in step 102 in the above embodiment, and therefore is not described herein again.
Step 303, compare the image area in the reference frame with the corresponding image area in the adjustment frame, and calculate the position difference and the feature points.
Specifically, after the contour of the imaging object is identified for the reference frame and the adjustment frame, the reference frame and the adjustment frame are compared according to the shape of the identified contour and the image color in the contour, and the similar contour in the reference frame and the adjustment frame is determined. The position difference of the similar image areas can be obtained by comparing the image areas in the reference frame with the corresponding image areas in the adjusting frame.
Further, the feature points are a discrete point set sampled at equal intervals along the contour of the image area. First feature points are extracted at equal intervals on the contour of the image area in the reference frame, and the contour is divided into several first contour segments by these points. In the contour of the image area in the adjustment frame, second contour segments matching the first contour segments divided in the reference frame are queried, the end points of the second contour segments are determined as the second feature points of the corresponding image area in the adjustment frame, and the relative position differences between the first and second feature points are then calculated.
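The equal-interval sampling of feature points along a contour can be sketched as follows (the contour is assumed to be given as an ordered list of boundary points; the matching of segments between the two frames is omitted):

```python
import numpy as np

def sample_feature_points(contour, n_points):
    """Pick n_points at (approximately) equal arc-length intervals along an
    ordered contour, and return them with the contour segments they delimit."""
    contour = np.asarray(contour, dtype=float)
    step = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(step)])   # arc length at each vertex
    targets = np.linspace(0.0, arc[-1], n_points)
    idx = np.searchsorted(arc, targets)              # nearest vertex at/after target
    idx = np.clip(idx, 0, len(contour) - 1)
    points = contour[idx]
    segments = [contour[i:j + 1] for i, j in zip(idx[:-1], idx[1:])]
    return points, segments

# Ordered contour of a 4x4 square (12 boundary vertices, unit steps).
sq = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3),
      (3, 2), (3, 1), (3, 0), (2, 0), (1, 0)]
points, segments = sample_feature_points(sq, 4)
```

The returned segments play the role of the "first contour segments"; matching them against segments of the other frame's contour would yield the second feature points.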
And 304, adjusting the pixels of the adjustment frame according to the position difference and the feature points, so that the adjustment frame corresponds to the image pixels of the reference frame.
Specifically, pixel adjustment is performed on the adjustment frame according to the position difference of the image region calculated in step 303 and the relative position differences between the feature points, so that the adjustment frame corresponds to the image pixels of the reference frame. Referring to fig. 5, fig. 5 illustrates contour recognition on the reference frame image and pixel adjustment of the adjustment frame. The dotted line on the reference frame image marks the recognized image contour; the adjustment frame is pixel-adjusted according to this contour so that it corresponds to the image pixels of the reference frame.
Step 305, sharpening the pixel region corresponding to the contour of the reference frame in the adjustment frame.
In this embodiment, the sharpening process is performed on the image area of the adjustment frame, so that the blurred image becomes clearer, and the situation that the edge of the synthesized image is blurred is avoided. Since the blurring of the image is substantially caused by the averaging or integrating operation of the image, the image can be sharpened by performing an inverse operation such as a differential operation on the image. From a spectral point of view, the essence of image blurring is that its high frequency components are attenuated, so the image can be sharpened by a high-pass filtering operation.
It should be noted that sharpening amplifies noise as well as signal, so the image region in the adjustment frame should have a sufficiently high signal-to-noise ratio before sharpening; otherwise, noise may be amplified more than the signal. It is therefore preferable to remove or reduce the noise in the image region of the adjustment frame before sharpening it. Fig. 6 shows an image obtained by sharpening the image region in the adjustment frame.
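A minimal sketch of the high-pass sharpening idea above (the 3x3 box blur standing in for denoising and the specific kernel are assumptions; the patent does not fix a particular filter):

```python
import numpy as np

def sharpen(img, amount=1.0):
    """Sharpen a grayscale image by adding back its high-pass response.

    img: 2-D float array in [0, 1]. A 3x3 box blur serves both as the
    low-pass reference and as a light stand-in for pre-denoising.
    """
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    # 3x3 box blur via padded neighborhood averaging (low-pass).
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # High-pass = original minus low-pass; adding it back boosts edges.
    high = img - blur
    return np.clip(img + amount * high, 0.0, 1.0)

# A soft step edge: sharpening increases the contrast across the edge.
img = np.full((8, 8), 0.2)
img[:, 4:] = 0.8
out = sharpen(img)
```

On the toy edge, pixels on the dark side of the boundary get darker and pixels on the bright side get brighter, i.e., the edge steepens.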
Step 306: synthesize the processed adjustment frame with the reference frame to obtain a target image.
In this embodiment, the method for synthesizing the processed adjustment frame and the reference frame to obtain the target image is similar to the method in step 105 in the above embodiment, and therefore is not described herein again.
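The synthesis step itself is described above only by reference to step 105. A common form of multi-exposure fusion (an assumption for illustration, not necessarily the patented method) is a per-pixel weighted blend that favors well-exposed pixels:

```python
import numpy as np

def fuse_exposures(frames):
    """Blend aligned exposures with weights favoring mid-tone pixels.

    frames: list of 2-D float arrays in [0, 1], already aligned to the
    reference frame as in steps 301-305.
    """
    stack = np.stack(frames)                  # (F, H, W)
    # Hat-shaped weight: 1 at mid-gray, falling toward 0 at the extremes.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)    # avoid divide-by-zero
    return (weights * stack).sum(0) / weights.sum(0)

# Short, middle, and long exposures of a flat scene.
frames = [np.full((4, 4), 0.1), np.full((4, 4), 0.5), np.full((4, 4), 0.9)]
fused = fuse_exposures(frames)
```

With this weighting, under- and over-exposed pixels contribute little, and the mid-tone exposure dominates the result.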
According to the image synthesis method, a reference frame shot with a short exposure and adjustment frames shot with middle and long exposures are respectively acquired; contour recognition of the imaged object is performed on the reference frame and the adjustment frame; the image region in the reference frame is compared with the corresponding image region in the adjustment frame to calculate the position difference and the feature points; the pixels of the adjustment frame are adjusted according to the position difference and the feature points so that the adjustment frame is aligned with the image pixels of the reference frame; the pixel region in the adjustment frame corresponding to the contour of the reference frame is sharpened; and the processed adjustment frame is synthesized with the reference frame to obtain the target image. By preprocessing the differently exposed images with pixel adjustment and edge processing, the method solves the prior-art problem that images with different exposure durations are prone to misalignment or blurring during synthesis when the photographed subject is moving.
In order to implement the above embodiments, the present application also proposes an image synthesizing apparatus.
Fig. 7 is a schematic structural diagram of an image synthesis apparatus according to an embodiment of the present application.
As shown in fig. 7, the image synthesizing apparatus 100 includes: an acquisition module 110, an identification module 120, a determination module 130, a correction module 140, and a synthesis module 150.
An obtaining module 110, configured to obtain a reference frame captured with a first exposure duration and an adjustment frame captured with a second exposure duration; wherein the first exposure time length is less than the second exposure time length.
The identification module 120 is used for performing contour recognition of the imaged object on the reference frame and the adjustment frame.
The determining module 130 is configured to determine, according to the contours, that image regions surrounded by similar contours in the reference frame and the adjustment frame have a corresponding relationship.
The correcting module 140 is configured to correct the corresponding image region in the adjustment frame according to the image region in the reference frame.
The synthesizing module 150 is configured to synthesize the corrected adjustment frame with the reference frame to obtain a target image.
The determining module 130 is further configured to determine similar contours in the reference frame and the adjustment frame according to the shape of the contour and the color of the image within the contour, and to determine that the image regions surrounded by the similar contours in the reference frame and the adjustment frame have a corresponding relationship.
As a possible implementation manner, the modification module 140 further includes:
and the first comparison unit is used for comparing the image area in the reference frame with the corresponding image area in the adjustment frame to obtain the position difference.
And the first adjusting unit is used for adjusting the position of the corresponding image area in the adjusting frame according to the position difference.
As another possible implementation manner, the image synthesizing apparatus further includes:
the first determining module is used for determining each first characteristic point of the image area in the reference frame.
And the second determining module is used for determining each second characteristic point of the corresponding image area in the adjusting frame.
And the adjusting module is used for carrying out deformation adjustment on the corresponding image area in the adjusting frame according to the relative position difference between the first characteristic point and the corresponding second characteristic point.
The adjusting module is further configured to divide the corresponding image region in the adjustment frame into a plurality of sub-regions using connecting lines between the second feature points;
adjust the relative positions between the second feature points according to the relative position differences between the first feature points and the corresponding second feature points, so as to change the shapes of the plurality of sub-regions;
and deform the image content within the plurality of sub-regions to fill the shape-changed sub-regions.
As another possible implementation manner, the first determining module is further configured to extract, at equal intervals, each first feature point from the contour of the image region in the reference frame;
and dividing the contour of the image area in the reference frame into a plurality of first contour segments according to the first characteristic points.
The second determining module is also used for inquiring a second contour segment matched with the first contour segment in the contour of the image area in the adjusting frame;
determining the end points of the second contour segments as second characteristic points;
and determining the corresponding relation between each first characteristic point and each second characteristic point according to the matching relation between each first contour segment and each second contour segment.
As another possible implementation manner, the image synthesizing apparatus further includes:
and the sharpening module is used for carrying out edge sharpening on the image area after the deformation adjustment.
As another possible implementation manner, the image synthesizing apparatus further includes:
the first control module is used for controlling the photosensitive pixels in the pixel unit array to carry out multiple exposure according to the first exposure time length and the second exposure time length; wherein, for the same exposure, the photosensitive pixels in the pixel unit array adopt the same exposure time length.
The second control module is used for controlling the photosensitive pixels in the pixel unit array to carry out single exposure according to the first exposure duration and the two second exposure durations; the plurality of long exposure pixels and the plurality of middle exposure pixels are exposed by adopting corresponding second exposure time length; the second exposure time length adopted by the plurality of long exposure pixels is longer than the second exposure time length adopted by the plurality of middle exposure pixels;
the plurality of short exposure pixels are exposed with a first exposure duration.
The image synthesis apparatus of the embodiment of the present application respectively acquires a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, the first exposure duration being less than the second exposure duration; performs contour recognition of the imaged object on the reference frame and the adjustment frame; determines, according to the contours, that image regions surrounded by similar contours in the reference frame and the adjustment frame have a corresponding relationship; corrects the corresponding image region in the adjustment frame according to the image region in the reference frame; and finally synthesizes the corrected adjustment frame with the reference frame to obtain a target image. By aligning and edge-processing the differently exposed images to be synthesized in a preprocessing stage, the apparatus solves the prior-art problem that, when images with different exposure durations are synthesized into an HDR image, the result is prone to misalignment or blurring when the photographed subject is moving.
It should be noted that the foregoing explanation of the embodiment of the image synthesis method is also applicable to the image synthesis apparatus of this embodiment, and is not repeated here.
In order to implement the above embodiments, the present application also proposes an image forming apparatus.
Fig. 8 is a schematic structural diagram of an imaging apparatus according to an embodiment of the present application.
As shown in fig. 8, the image forming apparatus 200 includes: a pixel cell array 210 of a plurality of photosensitive pixels, and a processor 220.
A processor 220 for obtaining a reference frame captured with a first exposure duration and an adjustment frame captured with a second exposure duration, respectively; wherein the first exposure duration is less than the second exposure duration;
carrying out contour recognition on the imaging object on the reference frame and the adjusting frame;
determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours;
correcting the corresponding image area in the adjusting frame according to the image area in the reference frame;
and synthesizing the corrected adjusting frame with the reference frame to obtain a target image.
The imaging apparatus of the embodiment of the present application respectively acquires a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, the first exposure duration being less than the second exposure duration; performs contour recognition of the imaged object on the reference frame and the adjustment frame; determines, according to the contours, that image regions surrounded by similar contours in the reference frame and the adjustment frame have a corresponding relationship; corrects the corresponding image region in the adjustment frame according to the image region in the reference frame; and finally synthesizes the corrected adjustment frame with the reference frame to obtain a target image. By aligning and edge-processing the differently exposed images to be synthesized in a preprocessing stage, the apparatus solves the prior-art problem that, when images with different exposure durations are synthesized into an HDR image, the result is prone to misalignment or blurring when the photographed subject is moving.
In order to implement the foregoing embodiments, the present application further provides an electronic device. Fig. 9 is a schematic internal structure diagram of the electronic device provided in the embodiments of the present application. As shown in fig. 9,
the electronic device 300 includes a processor 320, a memory 330, a display 340, and an input device 350 connected by a system bus 310. The memory 330 of the electronic device 300 has stored therein an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 320 to implement the image synthesis method of the embodiment of the present application. The processor 320 is used to provide computing and control capabilities that support the operation of the overall electronic device 300. The display 340 of the electronic device 300 may be a liquid crystal display or an electronic ink display, and the input device 350 may be a touch layer covered on the display 340, a button, a trackball or a touch pad arranged on a housing of the electronic device 300, or an external keyboard, a touch pad or a mouse. The electronic device 300 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, smart glasses), etc.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is merely a schematic diagram of a portion of the configuration associated with the present application, and does not constitute a limitation on the electronic device 300 to which the present application is applied, and that a particular electronic device 300 may include more or less components than those shown, or combine certain components, or have a different arrangement of components.
To explain the electronic device provided in this embodiment clearly, please refer to fig. 10, which shows an image processing circuit 40 according to an embodiment of the present application. The image processing circuit can be implemented by hardware and/or software components, and includes various processing units defining an ISP (Image Signal Processing) pipeline. As shown in fig. 10, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 10, the image processing circuit 40 includes an ISP processor 41 (the ISP processor 41 may be the processor 320) and control logic 42. The image data captured by the camera 43 is first processed by the ISP processor 41, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 43. The camera 43 may include one or more lenses 432 and an image sensor 434. The image sensor 434 may include an array of color filters (e.g., Bayer filters), and may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 41. The sensor 44 (e.g., a gyroscope) may provide acquired image-processing parameters (e.g., anti-shake parameters) to the ISP processor 41 based on the interface type of the sensor 44. The interface of the sensor 44 may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 434 may also transmit raw image data to the sensor 44, the sensor 44 may provide the raw image data to the ISP processor 41 based on the type of interface of the sensor 44, or the sensor 44 may store the raw image data in the image memory 45.
The ISP processor 41 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 41 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 41 may also receive image data from the image memory 45. For example, the sensor 44 interface sends raw image data to the image memory 45, and the raw image data in the image memory 45 is then provided to the ISP processor 41 for processing. The image memory 45 may be the memory 50, a portion of the memory 50, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 434 interface or from sensor 44 interface or from image memory 45, ISP processor 41 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 45 for additional processing before being displayed. The ISP processor 41 receives the processed data from the image memory 45 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 41 may be output to display 47 (display 47 may include display screen 83) for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 41 may also be sent to the image memory 45, and the display 47 may read image data from the image memory 45. In one embodiment, image memory 45 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 41 may be sent to an encoder/decoder 46 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 47 device. The encoder/decoder 46 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the ISP processor 41 may be sent to the control logic 42 unit. For example, the statistical data may include image sensor 434 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 432 shading correction, and the like. Control logic 42 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters for camera 43 and ISP processor 41 based on the received statistical data. For example, the control parameters of camera 43 may include sensor 44 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 432 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 432 shading correction parameters.
For example, the following steps implement the image synthesis method using the processor 320 in fig. 9 or the image processing circuit 40 (specifically, the ISP processor 41) in fig. 10:
respectively acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration; wherein the first exposure duration is less than the second exposure duration;
carrying out contour identification on the imaging object on the reference frame and the adjusting frame;
determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours;
correcting the corresponding image area in the adjusting frame according to the image area in the reference frame;
and synthesizing the corrected adjusting frame with the reference frame to obtain a target image.
For another example, the following steps implement the image synthesis method using the processor in fig. 9 or the image processing circuit 40 (specifically, the ISP processor) in fig. 10:
comparing the image area in the reference frame with the corresponding image area in the adjusting frame to obtain a position difference;
and adjusting the position of the corresponding image area in the adjusting frame according to the position difference.
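A minimal sketch of obtaining such a position difference between corresponding image regions (binary region masks and centroid comparison are assumptions for illustration; the specification does not fix a particular measure):

```python
import numpy as np

def region_offset(mask_ref, mask_adj):
    """Position difference between corresponding image regions, measured
    as the (dy, dx) translation that moves the centroid of the region in
    the adjustment frame onto the centroid of the region in the reference
    frame. Masks are 2-D boolean arrays marking each region's pixels."""
    c_ref = np.argwhere(mask_ref).mean(axis=0)
    c_adj = np.argwhere(mask_adj).mean(axis=0)
    return c_ref - c_adj

# A 3x3 region that moved down and to the right between frames.
mask_ref = np.zeros((10, 10), bool); mask_ref[2:5, 2:5] = True
mask_adj = np.zeros((10, 10), bool); mask_adj[4:7, 5:8] = True
dy, dx = region_offset(mask_ref, mask_adj)
```

Shifting the region in the adjustment frame by `(dy, dx)` then realizes the second step, adjusting the position of the corresponding image region according to the position difference.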
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the image synthesis method as described in the above embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. An image synthesis method, characterized in that it comprises the steps of:
respectively acquiring a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration; wherein the first exposure duration is less than the second exposure duration;
carrying out contour identification on the imaging object on the reference frame and the adjusting frame;
determining that image areas surrounded by similar contours have corresponding relation for the reference frame and the adjusting frame according to the contours; comparing the reference frame with the adjusting frame according to the shape of the outline and the color of the image in the outline to determine similar outlines in the reference frame and the adjusting frame; determining that image regions surrounded by the similar contours have corresponding relation in the reference frame and the adjusting frame;
comparing image areas surrounded by similar outlines of the reference frame and the adjusting frame, and calculating position difference and feature points;
correcting the corresponding image area in the adjusting frame according to the image area in the reference frame; adjusting the pixels of the adjusting frame according to the position difference and the characteristic points; carrying out sharpening processing on a pixel region corresponding to the outline of the reference frame in the adjusting frame;
and synthesizing the processed adjusting frame and the reference frame to obtain a target image.
2. The image synthesis method according to claim 1, wherein the modifying the corresponding image region in the adjustment frame according to the image region in the reference frame comprises:
comparing the image area in the reference frame with the corresponding image area in the adjusting frame to obtain a position difference;
and adjusting the position of the corresponding image area in the adjusting frame according to the position difference.
3. The image synthesis method according to claim 1, wherein after the adjusting the position of the corresponding image region in the adjustment frame, the method further comprises:
determining first feature points of an image region in the reference frame;
determining each second feature point of the corresponding image area in the adjusting frame;
and carrying out deformation adjustment on the corresponding image area in the adjustment frame according to the relative position difference between the first characteristic point and the corresponding second characteristic point.
4. The image synthesis method according to claim 3, wherein the performing deformation adjustment on the corresponding image region in the adjustment frame according to the relative position difference between the first feature point and the corresponding second feature point comprises:
dividing the corresponding image area in the adjusting frame into a plurality of sub-areas by using connecting lines among the second characteristic points;
adjusting the relative position between the second characteristic points according to the relative position difference between the first characteristic points and the corresponding second characteristic points so as to change the shapes of the plurality of sub-regions;
deforming the image content within the plurality of sub-regions to fill the shape-changed sub-regions.
5. The image synthesis method according to claim 3, wherein the determining of the first feature points of the image regions in the reference frame includes:
extracting the contour of the image area in the reference frame at equal intervals to obtain each first feature point;
the determining each second feature point of the corresponding image area in the adjustment frame includes:
dividing the contour of the image area in the reference frame into a plurality of first contour segments according to each first feature point;
querying a second contour segment matching the first contour segment in the contour of the image region in the adjustment frame;
determining the end points of the second contour segments as second characteristic points;
and determining the corresponding relation between each first characteristic point and each second characteristic point according to the matching relation between each first contour segment and each second contour segment.
6. The image synthesis method according to claim 3, wherein after performing deformation adjustment on the corresponding image region in the adjustment frame according to the relative position difference between the first feature point and the corresponding second feature point, the method further comprises:
and carrying out edge sharpening on the image area after the deformation adjustment.
7. The image synthesis method according to any one of claims 1 to 6, used for an imaging apparatus including a pixel unit array composed of a plurality of photosensitive pixels, wherein the respectively acquiring the reference frame shot with the first exposure duration and the adjustment frame shot with the second exposure duration further comprises:
controlling the photosensitive pixels in the pixel unit array to perform multiple exposures according to the first exposure duration and the second exposure duration;
and for the same exposure, the photosensitive pixels in the pixel unit array adopt the same exposure time length.
8. The image synthesis method according to any one of claims 1 to 6, used for an imaging apparatus including a pixel unit array composed of a plurality of photosensitive pixels, the photosensitive pixels including a plurality of long-exposure pixels, a plurality of middle-exposure pixels, and a plurality of short-exposure pixels;
before the obtaining the reference frame shot with the first exposure duration and the adjusting frame shot with the second exposure duration, the method further includes:
controlling the photosensitive pixels in the pixel unit array to carry out single exposure according to the first exposure time length and the two second exposure time lengths;
the plurality of long exposure pixels and the plurality of middle exposure pixels are exposed by adopting corresponding second exposure time length; the second exposure time length adopted by the multiple long exposure pixels is longer than that adopted by the multiple middle exposure pixels;
and the short exposure pixels are exposed by adopting the first exposure duration.
9. An image synthesis apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to respectively acquire a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, wherein the first exposure duration is less than the second exposure duration;
an identification module, configured to perform contour identification of the imaged object on the reference frame and the adjustment frame;
a determining module, configured to: compare the reference frame with the adjustment frame according to the shape of each contour and the color of the image within the contour, to determine similar contours in the reference frame and the adjustment frame; determine that the image regions surrounded by the similar contours in the reference frame and the adjustment frame correspond to each other; and compare the image regions surrounded by the similar contours in the reference frame and the adjustment frame, to calculate position differences and feature points;
a correction module, configured to: correct the corresponding image region in the adjustment frame according to the image region in the reference frame; adjust the pixels of the adjustment frame according to the position differences and the feature points; and perform sharpening processing on the pixel region in the adjustment frame corresponding to a contour of the reference frame;
and a synthesis module, configured to synthesize the processed adjustment frame with the reference frame to obtain a target image.
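The module pipeline in this claim can be sketched end to end. Contour identification and feature-point matching are far richer in practice, so this toy version stands in a single global shift for the "position difference" and a plain weighted average for the synthesis rule; both are assumptions of the sketch, not the patent's method:

```python
import numpy as np

def synthesize(reference, adjustment, weight=0.5):
    """Synthesis step: blend the corrected adjustment frame with the
    reference frame (a weighted average stands in for the patent's
    unspecified fusion rule)."""
    blended = weight * reference.astype(np.float64) \
              + (1 - weight) * adjustment.astype(np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)

def pipeline(reference, adjustment):
    """Skeleton of the claim-9 modules for grayscale frames."""
    # "Position difference": estimate a global row/column shift from
    # the brightest pixel in each frame (toy stand-in for feature points).
    ref_peak = np.unravel_index(np.argmax(reference), reference.shape)
    adj_peak = np.unravel_index(np.argmax(adjustment), adjustment.shape)
    dy, dx = ref_peak[0] - adj_peak[0], ref_peak[1] - adj_peak[1]
    # "Correction": move the adjustment frame onto the reference grid.
    corrected = np.roll(adjustment, shift=(dy, dx), axis=(0, 1))
    # "Synthesis": merge the two frames into the target image.
    return synthesize(reference, corrected)
```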
10. An imaging apparatus comprising a pixel unit array composed of a plurality of photosensitive pixels, the imaging apparatus further comprising a processor configured to:
respectively acquire a reference frame shot with a first exposure duration and an adjustment frame shot with a second exposure duration, wherein the first exposure duration is less than the second exposure duration;
perform contour identification of the imaged object on the reference frame and the adjustment frame;
compare the reference frame with the adjustment frame according to the shape of each contour and the color of the image within the contour, to determine similar contours in the reference frame and the adjustment frame; determine that the image regions surrounded by the similar contours in the reference frame and the adjustment frame correspond to each other; compare the image regions surrounded by the similar contours in the reference frame and the adjustment frame, to calculate position differences and feature points;
correct the corresponding image region in the adjustment frame according to the image region in the reference frame; adjust the pixels of the adjustment frame according to the position differences and the feature points; perform sharpening processing on the pixel region in the adjustment frame corresponding to a contour of the reference frame;
and synthesize the processed adjustment frame with the reference frame to obtain a target image.
11. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image synthesis method according to any one of claims 1 to 8.
12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image synthesis method according to any one of claims 1 to 8.
CN201810886741.2A 2018-08-06 2018-08-06 Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium Active CN108989699B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810886741.2A CN108989699B (en) 2018-08-06 2018-08-06 Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108989699A (en) 2018-12-11
CN108989699B (en) 2021-03-23

Family

ID=64554972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810886741.2A Active CN108989699B (en) 2018-08-06 2018-08-06 Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108989699B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102628911B1 (en) * 2019-01-07 2024-01-24 삼성전자주식회사 Method of processing image and image processing device performing the same
CN111489320A (en) * 2019-01-29 2020-08-04 华为技术有限公司 Image processing method and device
CN110728648B (en) * 2019-10-25 2022-07-19 北京迈格威科技有限公司 Image fusion method and device, electronic equipment and readable storage medium
CN111835982B (en) * 2020-07-02 2021-08-17 维沃移动通信有限公司 Image acquisition method, image acquisition device, electronic device, and storage medium
CN112581415A (en) * 2020-11-20 2021-03-30 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114205525B (en) * 2021-12-02 2024-05-31 信利光电股份有限公司 Roller shutter exposure image correction method and device and readable storage medium
CN114847989A (en) * 2022-03-25 2022-08-05 深圳市创谷科技发展有限公司 Beam limiter window control method and device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102422124A (en) * 2010-05-31 2012-04-18 松下电器产业株式会社 Imaging device, imaging means and program
CN103973958A (en) * 2013-01-30 2014-08-06 阿里巴巴集团控股有限公司 Image processing method and image processing equipment
CN104135612A (en) * 2014-07-11 2014-11-05 深圳市中兴移动通信有限公司 A shooting method and a shooting device with an adjustable location of a shot object
CN104144298A (en) * 2014-07-16 2014-11-12 浙江宇视科技有限公司 Wide dynamic image synthesis method
CN104349066A (en) * 2013-07-31 2015-02-11 华为终端有限公司 Method and device for generating images with high dynamic ranges
CN107240128A (en) * 2017-05-09 2017-10-10 北京理工大学 A kind of X-ray film and photochrome method for registering based on contour feature
CN107465882A (en) * 2017-09-22 2017-12-12 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN107665486A (en) * 2017-09-30 2018-02-06 深圳绰曦互动科技有限公司 A kind of method for automatically split-jointing, device and terminal device applied to radioscopic image
JP2018064201A (en) * 2016-10-13 2018-04-19 パナソニックIpマネジメント株式会社 Image processing device, image processing method, and image processing circuit
CN107995425A (en) * 2017-12-11 2018-05-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Also Published As

Publication number Publication date
CN108989699A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
CN109005364B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN109767467B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108683862B (en) Imaging control method, imaging control device, electronic equipment and computer-readable storage medium
JP7003238B2 (en) Image processing methods, devices, and devices
EP3480783B1 (en) Image-processing method, apparatus and device
EP3836534B1 (en) Imaging control method, electronic device, and computer-readable storage medium
CN108012080B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108322669B (en) Image acquisition method and apparatus, imaging apparatus, and readable storage medium
CN110290289B (en) Image noise reduction method and device, electronic equipment and storage medium
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN108111749B (en) Image processing method and device
EP3480784B1 (en) Image processing method, and device
CN109672819B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110191291B (en) Image processing method and device based on multi-frame images
US20200045219A1 (en) Control method, control apparatus, imaging device, and electronic device
CN110248106B (en) Image noise reduction method and device, electronic equipment and storage medium
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109089046B (en) Image noise reduction method and device, computer readable storage medium and electronic equipment
CN110166707B (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN107864337B (en) Sketch image processing method, device and equipment and computer readable storage medium
CN108683861A (en) Shooting exposure control method, device, imaging device and electronic equipment
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN109005369B (en) Exposure control method, exposure control device, electronic apparatus, and computer-readable storage medium
CN110166706B (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant