CN106851115B - Image processing method and device
- Publication number: CN106851115B
- Application number: CN201710207173.4A
- Authority: CN (China)
- Legal status: Active
Classifications
- H04N23/80: Camera processing pipelines; components thereof (under H04N23/00, cameras or camera modules comprising electronic image sensors and control thereof)
- H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors (under H04N23/70, circuitry for compensating brightness variation in the scene)
Abstract
An embodiment of the invention provides an image processing method and device. The image processing method includes: acquiring a first image and a second image, wherein the exposure time of the first image is shorter than that of the second image; extracting an edge region of the first image and an edge region of the second image, respectively; processing the edge region of the first image and the edge region of the second image to obtain a template region; and performing fusion processing on the first image and the second image based on the template region.
Description
Technical Field
Embodiments of the present invention relate to the field of image processing, and in particular to an image processing method and device.
Background
With the development and popularization of intelligent terminals, capturing images with such terminals has become increasingly common. Image capture generally places high demands on the shooting environment. For example, when the scene is not bright enough, the exposure time may have to be extended to recover enough dark detail, which makes it likely that the camera moves during the exposure and the image comes out blurred.
In the prior art, a captured image can be deblurred computationally. The main existing approach applies deblurring calculations directly to an already blurred picture, but the computation involved is enormous, which makes the approach unusable on portable terminals, and the deblurring results are far from ideal.
Disclosure of Invention
According to an aspect of the present invention, there is provided an image processing method including: acquiring a first image and a second image, wherein the exposure time of the first image is shorter than that of the second image; extracting an edge region of the first image and an edge region of the second image, respectively; processing the edge region of the first image and the edge region of the second image to obtain a template region; and performing fusion processing on the first image and the second image based on the template region.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: an acquisition unit configured to acquire a first image and a second image, wherein an exposure time of the first image is shorter than an exposure time of the second image; an extraction unit configured to extract an edge region of the first image and an edge region of the second image, respectively; a processing unit configured to process the edge region of the first image and the edge region of the second image to obtain a template region; and a fusion unit configured to perform fusion processing on the first image and the second image based on the template region.
In the image processing method and apparatus provided by the present invention, a long-exposure image and a short-exposure image can be acquired separately, and their edge regions can be extracted and processed to produce a fused result. The method and apparatus thereby avoid the image blurring caused by a long exposure as well as the loss of dark detail and the heavy noise caused by a short exposure, improving the quality of the final image and the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 shows a schematic view of a photograph taken at a long exposure time;
FIG. 2 shows a schematic view of a photograph taken at a short exposure time;
FIG. 3 shows a flow diagram of an image processing method according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating a template region obtained by superimposing the edge regions of the first image and the second image in an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a first image and a second image fused based on a template region according to an embodiment of the present invention;
FIG. 6 shows a block diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 shows a block diagram of an image processing apparatus according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention.
When a picture is taken under insufficient ambient light, a long exposure captures more dark detail but risks image blur caused by camera movement. FIG. 1 shows a schematic representation of a photograph taken with a long exposure time. Because the scene is too dark, the exposure time is long, and it is difficult to keep the camera still over that interval; the edges of the picture in FIG. 1 are therefore blurred, which degrades the result.
FIG. 2 shows a schematic view of a photograph taken with a short exposure time. The short exposure avoids the edge blur caused by camera movement, but because the ambient light is insufficient, dark detail is lost, the color and brightness of the picture are poor, and the image noise is excessive.
To solve the respective problems of the long-exposure image and the short-exposure image shown in FIG. 1 and FIG. 2, one would normally consider improving the long-exposure image of FIG. 1 with a deblurring calculation. However, since the edges of the long-exposure image are already very blurred, conventional deblurring algorithms have difficulty estimating a blur kernel and require repeated iterations, so the computation is heavy and the results are unsatisfactory.
In view of the above, the following image processing method is proposed. FIG. 3 shows a flowchart of an image processing method 300 according to an embodiment of the present invention. The method can be applied to electronic devices, such as terminals that acquire and process images (mobile phones, PDAs, tablet computers, notebook computers, and the like), as well as portable, pocket-sized, hand-held, computer-embedded, or vehicle-mounted devices.
As shown in FIG. 3, the image processing method 300 may include step S301: acquiring a first image and a second image, wherein the exposure time of the first image is shorter than the exposure time of the second image. Because of this relation, the first image may be called the short-exposure image and the second image the long-exposure image; examples of the two can be seen in FIG. 2 and FIG. 1, respectively. Note that "short" and "long" are relative, and no specific values are implied. For example, the first image might be exposed for 1/60 second and the second for 1/8 second; equally, the first might be exposed for 1/250 second and the second for 1/60 second. In practice, any pair of exposure times may be chosen as long as the exposure of the first image is shorter than that of the second; preferably, the choice depends on the ambient lighting and on whether the photographed object is still or moving. The first image and the second image may capture the same object or objects, or the same field of view. Preferably, through a preset (automatic or manual) setting, the two images can be captured in succession without moving the camera, i.e., a single press of the shutter button yields both the short-exposure first image and the long-exposure second image. In one embodiment of the invention the first image is taken first and the second image afterwards, although the reverse order is also possible. Owing to its shorter exposure, the first image is relatively unlikely to be blurred; the second image, though better in color and brightness, is more likely to be blurred because of its longer exposure.
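For illustration, a minimal Python/OpenCV sketch of acquiring such an exposure pair in succession follows. It is a sketch only: support for `cv2.CAP_PROP_EXPOSURE`, the manual-exposure flag value 0.25, and the exposure settings -6 and -3 (roughly 1/64 s and 1/8 s on many UVC cameras) are driver-dependent assumptions, not values specified by this disclosure.

```python
import cv2

def capture_exposure_pair(device_index=0, short_exposure=-6, long_exposure=-3):
    # Hypothetical capture sketch; exposure control is driver-dependent.
    cap = cv2.VideoCapture(device_index)
    # 0.25 selects manual exposure on many V4L2 drivers (an assumption).
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
    cap.set(cv2.CAP_PROP_EXPOSURE, short_exposure)
    ok1, first = cap.read()   # short-exposure "first image"
    cap.set(cv2.CAP_PROP_EXPOSURE, long_exposure)
    ok2, second = cap.read()  # long-exposure "second image"
    cap.release()
    if not (ok1 and ok2):
        raise RuntimeError("capture failed")
    return first, second
```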
In step S302, an edge region of the first image and an edge region of the second image are extracted, respectively. In this step, edge extraction is performed separately on the captured first and second images. An edge can be defined as a location where the gray value changes sharply at the boundary of an object in the captured image. In embodiments of the present invention, various edge detection methods may be used to extract the contours of the photographed objects based on variations of parameters such as gray value: for example, edge detection based on a first-order or second-order differential operator, on mathematical morphology, and/or on the wavelet transform. These edge extraction approaches are only examples; in practice, any method capable of extracting the edges of the first and second images may be chosen, and the methods used for the two images may be the same or different.
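As a sketch of this step, the fragment below uses the Canny detector, one of the first-derivative-based methods the paragraph above allows; the thresholds and the dilation that widens thin contours into an edge region are illustrative assumptions rather than values fixed by this disclosure.

```python
import cv2
import numpy as np

def extract_edge_region(img_bgr, low=50, high=150, thickness=5):
    # Convert to gray: edges are defined by sharp gray-value changes.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    # Dilate thin contours into a region (width is an illustrative choice).
    kernel = np.ones((thickness, thickness), np.uint8)
    return cv2.dilate(edges, kernel)

# edge_first = extract_edge_region(first)    # short-exposure image
# edge_second = extract_edge_region(second)  # long-exposure image
```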
In step S303, the edge region of the first image and the edge region of the second image are processed to obtain a template region. Specifically, in an embodiment of the present invention, the edge region of the first image and the edge region of the second image may be superimposed, and the superimposed result taken as the template region. Since the first image and the second image capture the same object or the same field of view, the edge regions extracted from them are similar. FIG. 4 is a schematic diagram of the template region obtained by superimposing the edge regions of the first and second images in an embodiment of the present invention. Preferably, the superimposed template region can be filtered to achieve a better image effect. This way of processing the edge regions is only an example; in practice, any processing of the two edge regions that yields a template region may be adopted.
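A minimal sketch of step S303 under two stated assumptions: "superimposing" is taken to mean a pixel-wise union of the two binary edge masks, and the optional filtering is taken to be a median filter.

```python
import cv2

def build_template_region(edge_first, edge_second, ksize=5):
    # Superimpose the two edge regions (assumed here to mean their union).
    template = cv2.bitwise_or(edge_first, edge_second)
    # Optional filtering of the superimposed region; a median filter is one
    # plausible choice for suppressing isolated speckles.
    return cv2.medianBlur(template, ksize)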
In step S304, the first image and the second image are fused based on the template region. Specifically, a template-region image of the first image, i.e., the part of the first image inside the template region, may be extracted, together with a non-template-region image of the second image, i.e., the part of the second image outside the template region; the two are then fused. When the template-region image of the first image is extracted, its brightness may be corrected according to the brightness information of the part of the second image corresponding to the template region.
In this step, after the template region has been determined from the edge regions of the first and second images, the edge-region part of the first (short-exposure) image and the non-edge-region part of the second (long-exposure) image are extracted and combined into a fused image. Because the first and second images can be captured in succession, they share the same field of view, so their edge and non-edge regions can be fused directly, and the fused image covers the same field of view or photographed object as the originals. In particular, since the fused image contains the edge region of the first image and the non-edge region of the second image, it combines the sharp edges of the short-exposure image with the non-edge region of the long-exposure image, which has ample dark detail and low noise. In another embodiment of the present invention, the edge region of the first image used for fusion may first be corrected with the brightness information of the edge region of the second image, so that a brightened edge region of the first image is fused with the non-edge region of the second image for a better result. Concretely, statistics such as the mean and/or variance of the brightness around each pixel in the edge region of the second image may be collected and used to adjust the corresponding positions in the edge region of the first image, so that the local statistics at those positions match those of the second image as closely as possible, thereby brightening the edge region of the first image.
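The following sketch shows the basic fusion just described, assuming the template region is available as a binary mask: inside the mask, the short-exposure first image is kept; outside it, the long-exposure second image. The brightness correction and the gradual transition at the junction are sketched separately below.

```python
import numpy as np

def fuse_on_template(first, second, template):
    # Broadcast the HxW mask over the color channels.
    mask = (template > 0)[..., None]
    # Edge region from the first image, everything else from the second.
    return np.where(mask, first, second).astype(first.dtype)
```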
FIG. 5 is a schematic diagram of the first and second images fused based on the template region in an embodiment of the present invention. Edge extraction is applied to the long-exposure image of FIG. 1 and the short-exposure image of FIG. 2, the superimposed edge region of FIG. 4 is obtained as the template region, and the fused image of FIG. 5 is produced by combining the non-template region of FIG. 1 with the template region of FIG. 2. During fusion, a gradual transition can be applied at the junction of the first and second images to improve the result.
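A sketch of the gradual-change processing, assuming it is implemented as alpha blending with a Gaussian-blurred version of the template mask; the feather width is an illustrative parameter, since the text only calls for a gradual transition at the junction.

```python
import cv2
import numpy as np

def fuse_with_feathering(first, second, template, feather=15):
    # Soften the binary template into an alpha matte (odd kernel required).
    k = feather | 1
    alpha = cv2.GaussianBlur(template.astype(np.float32) / 255.0, (k, k), 0)
    alpha = alpha[..., None]  # broadcast over the color channels
    fused = (alpha * first.astype(np.float32)
             + (1.0 - alpha) * second.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```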
According to the image processing method provided by the invention, a long-exposure image and a short-exposure image can be acquired separately, and their edge regions extracted and processed to produce the fused result. The method thus avoids the image blurring caused by a long exposure as well as the loss of dark detail and heavy noise caused by a short exposure, improving the quality of the final image and the user experience.
Next, a block diagram of an image processing apparatus according to an embodiment of the present invention is described with reference to FIG. 6. The apparatus may perform the image processing method described above. Since its operation is substantially the same as the steps of the method described with reference to FIG. 3, only a brief description is given here and repeated details are omitted.
As shown in FIG. 6, the image processing apparatus 600 includes an acquisition unit 610, an extraction unit 620, a processing unit 630, and a fusion unit 640. It should be appreciated that FIG. 6 only shows the components relevant to embodiments of the present invention and omits the rest; this is merely illustrative, and the apparatus 600 may include other components as needed.
The electronic device containing the image processing apparatus 600 of FIG. 6 may be any terminal that acquires and processes images, such as a mobile phone, PDA, tablet computer, or notebook computer, or a portable, pocket-sized, hand-held, computer-embedded, or vehicle-mounted device.
As shown in FIG. 6, the acquisition unit 610 acquires a first image and a second image, wherein the exposure time of the first image is shorter than that of the second image. As in the method described above, the first image may be called the short-exposure image and the second image the long-exposure image; examples can be seen in FIG. 2 and FIG. 1, respectively. The exposure times are relative and no specific values are implied: the pair might be 1/60 second and 1/8 second, or 1/250 second and 1/60 second. Any pair of exposure times may be chosen as long as the exposure of the first image is shorter than that of the second, and preferably the choice depends on the ambient lighting and on whether the photographed object is still or moving. The two images may capture the same object or objects, or the same field of view, and preferably can be captured in succession without moving the camera through a preset (automatic or manual) setting, so that a single press of the shutter button yields both images. In one embodiment the first image is taken before the second, although the reverse order is also possible. Owing to its shorter exposure, the first image is relatively unlikely to be blurred; the second image, though better in color and brightness, is more likely to be blurred because of its longer exposure.
The extraction unit 620 extracts an edge region of the first image and an edge region of the second image, respectively, performing edge extraction separately on the two captured images. An edge can be defined as a location where the gray value changes sharply at the boundary of an object in the captured image. The extraction unit 620 may use various edge detection methods to extract the contours of the photographed objects based on variations of parameters such as gray value, for example edge detection based on a first-order or second-order differential operator, on mathematical morphology, and/or on the wavelet transform. These approaches are only examples; any method capable of extracting the edges of the first and second images may be used, and the methods applied to the two images may be the same or different.
The processing unit 630 processes the edge region of the first image and the edge region of the second image to obtain a template region. Specifically, the processing unit 630 may superimpose the two edge regions and take the superimposed result as the template region. Since the first and second images capture the same object or the same field of view, the edge regions extracted from them are similar. FIG. 4 is a schematic diagram of the template region obtained after the processing unit 630 superimposes the edge regions of the first and second images. Preferably, the processing unit 630 may also filter the superimposed template region to achieve a better image effect. These processing manners are only examples; any processing of the two edge regions that yields a template region may be adopted.
The fusion unit 640 fuses the first image and the second image based on the template region. Specifically, the fusion unit 640 may extract a template-region image of the first image, i.e., the part of the first image inside the template region, and a non-template-region image of the second image, i.e., the part of the second image outside the template region, and then fuse the two. When extracting the template-region image of the first image, the fusion unit 640 may correct its brightness according to the brightness information of the part of the second image corresponding to the template region.
After the fusion unit 640 determines the template region from the edge regions of the first and second images, it may extract the edge-region part of the first (short-exposure) image and the non-edge-region part of the second (long-exposure) image and combine them into a fused image. Because the two images can be captured in succession, they share the same field of view, so their edge and non-edge regions can be fused directly, and the fused image covers the same field of view or photographed object as the originals. Since the fused image contains the edge region of the first image and the non-edge region of the second image, the result combines the sharp edges of the short-exposure image with the non-edge region of the long-exposure image, which has ample dark detail and low noise. In another embodiment of the present invention, the fusion unit 640 may first correct the edge region of the first image with the brightness information of the edge region of the second image, so that a brightened edge region of the first image is fused with the non-edge region of the second image for a better result. The fusion unit 640 may collect statistics such as the mean and/or variance of the brightness around each pixel in the edge region of the second image and adjust the corresponding positions in the edge region of the first image so that the local statistics there match those of the second image as closely as possible, thereby brightening the edge region of the first image.
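A sketch of this brightening step, assuming the local statistics are gathered with a box filter over a fixed window and that the resulting luminance gain is applied uniformly to the color channels; the window size and the gain transfer are illustrative choices, not requirements of this disclosure.

```python
import cv2
import numpy as np

def brighten_edge_region(first, second, template, win=31, eps=1e-6):
    # Local luminance statistics of both images.
    y1 = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(np.float32)
    y2 = cv2.cvtColor(second, cv2.COLOR_BGR2GRAY).astype(np.float32)
    box = (win, win)
    m1, m2 = cv2.blur(y1, box), cv2.blur(y2, box)  # local means
    v1 = cv2.blur(y1 * y1, box) - m1 * m1          # local variances
    v2 = cv2.blur(y2 * y2, box) - m2 * m2
    # Shift/scale the first image's luminance toward the second's statistics.
    gain = np.sqrt(np.maximum(v2, 0.0) / (np.maximum(v1, 0.0) + eps))
    matched = (y1 - m1) * gain + m2
    # Turn the luminance change into a per-pixel ratio for the color image.
    ratio = (matched + eps) / (y1 + eps)
    brightened = np.clip(first.astype(np.float32) * ratio[..., None], 0, 255)
    # Apply only inside the template (edge) region.
    mask = (template > 0)[..., None]
    return np.where(mask, brightened, first.astype(np.float32)).astype(np.uint8)
```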
FIG. 5 is a schematic diagram of the first and second images fused based on the template region in an embodiment of the present invention. The extraction unit 620 performs edge extraction on the long-exposure image of FIG. 1 and the short-exposure image of FIG. 2 acquired by the acquisition unit 610, the processing unit 630 obtains the superimposed edge region of FIG. 4 as the template region, and the fusion unit 640 produces the fused image of FIG. 5 by combining the non-template region of FIG. 1 with the template region of FIG. 2. During fusion, a gradual transition can be applied at the junction of the first and second images to improve the result.
According to the image processing apparatus provided by the invention, a long-exposure image and a short-exposure image can be acquired separately, and their edge regions extracted and processed to produce the fused result. The apparatus thus avoids the image blurring caused by a long exposure as well as the loss of dark detail and heavy noise caused by a short exposure, improving the quality of the final image and the user experience.
Next, a block diagram of an image processing apparatus according to another embodiment of the present invention is described with reference to FIG. 7. The image processing apparatus may perform the image processing method described above. Since its operation is substantially the same as the steps of the method described with reference to FIG. 3, only a brief description is given here and repeated details are omitted.
The image processing apparatus 700 in FIG. 7 may include one or more processors 710 and a memory 720; it may also include other components such as an input unit and an output unit (not shown), interconnected by a bus system and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the image processing apparatus 700 shown in FIG. 7 are merely exemplary, not limiting; the apparatus 700 may have other components and structures as needed.
The processor 710 is the control center: it connects the parts of the entire apparatus through various interfaces and lines, and performs the functions of the image processing apparatus 700 and processes its data by running or executing software programs and/or modules stored in the memory 720 and by calling data stored in the memory 720, thereby monitoring the apparatus as a whole. Preferably, the processor 710 may include one or more processing cores; preferably, it may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, with a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 710.
The memory 720 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium.
The processor 710 may execute the program instructions to perform the following steps: acquiring a first image and a second image, wherein the exposure time of the first image is shorter than that of the second image; extracting an edge region of the first image and an edge region of the second image, respectively; processing the edge region of the first image and the edge region of the second image to obtain a template region; and performing fusion processing on the first image and the second image based on the template region.
An input unit (not shown) may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, the input unit may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, may collect touch operations performed by a user on or near it (for example, with a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a preset program. Preferably, the touch-sensitive surface may comprise two parts, a touch detection device and a touch controller. The touch detection device detects the position touched by the user and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends them to the processor 710, and can also receive and execute commands sent by the processor 710. The touch-sensitive surface may be implemented with resistive, capacitive, infrared, or surface acoustic wave technology. Besides the touch-sensitive surface, the input unit may include other input devices, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a power switch), a trackball, a mouse, and a joystick.
The output unit may output various information, such as image information and application control information, to the outside (for example, to a user). The output unit may be a display unit used to display information entered by the user or provided to the user, as well as the various graphical user interfaces of the image processing apparatus 700, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit may include a display panel, preferably configured as an LCD (liquid crystal display), an OLED (organic light-emitting diode) display, or the like. Further, the touch-sensitive surface may overlay the display panel; when a touch operation is detected on or near it, it is passed to the processor 710 to determine the type of the touch event, and the processor 710 then provides a corresponding visual output on the display panel according to that type. The touch-sensitive surface and the display panel may be implemented as two separate components for the input and output functions, or, in some embodiments, integrated into one component providing both.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific implementation of the image processing method described above may refer to the corresponding description in the apparatus embodiment.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is only a logical functional division, and in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions fall within that scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. An image processing method comprising:
acquiring a first image and a second image, wherein the exposure time of the first image is shorter than that of the second image;
extracting an edge region of the first image and an edge region of the second image, respectively;
processing the edge region of the first image and the edge region of the second image to obtain a template region; and
performing fusion processing on the first image and the second image based on the template region, including:
extracting a template region image of the first image corresponding to the template region, and extracting a non-template region image of the second image outside the template region; and
fusing the template region image of the first image and the non-template region image of the second image;
wherein the processing the edge region of the first image and the edge region of the second image to obtain the template region includes:
superimposing the edge region of the first image and the edge region of the second image, and taking the superimposed edge region as the template region.
2. The method of claim 1, wherein the extracting an edge region of the first image and an edge region of the second image, respectively, comprises:
extracting the edge regions of the first image and the second image according to changes in the gray values of the first image and the second image, respectively.
3. The method of claim 1, wherein the extracting a template region image of the first image corresponding to the template region comprises:
correcting the brightness of the template region image of the first image according to brightness information of the second image corresponding to the template region.
4. An image processing apparatus comprising:
an acquisition unit configured to acquire a first image and a second image, wherein an exposure time of the first image is shorter than an exposure time of the second image;
an extraction unit configured to extract an edge region of the first image and an edge region of the second image, respectively;
a processing unit configured to process the edge region of the first image and the edge region of the second image to obtain a template region; and
a fusion unit configured to perform fusion processing on the first image and the second image based on the template region, wherein
the fusion unit extracts a template region image of the first image corresponding to the template region, extracts a non-template region image of the second image outside the template region, and
fuses the template region image of the first image and the non-template region image of the second image;
and the processing unit superimposes the edge region of the first image and the edge region of the second image, and takes the superimposed edge region as the template region.
5. The apparatus of claim 4, wherein the extraction unit extracts the edge regions of the first image and the second image according to changes in the gray values of the first image and the second image, respectively.
6. The apparatus of claim 4, wherein the fusion unit corrects the brightness of the template region image of the first image according to brightness information of the second image corresponding to the template region.
Priority Applications (1)
- CN201710207173.4A (CN106851115B), priority date 2017-03-31, filing date 2017-03-31: Image processing method and device
Publications (2)
- CN106851115A, published 2017-06-13
- CN106851115B, published 2020-05-26
Family ID: 59142020
Family Applications (1)
- CN201710207173.4A, filed 2017-03-31, status Active: Image processing method and device (CN106851115B)
Families Citing this family (1)
- CN114926351B, priority 2022-04-12, published 2023-06-23, Honor Device Co., Ltd.: Image processing method, electronic device, and computer storage medium
Citations (4)
- CN101424856A, priority 2007-10-31, published 2009-05-06, Altek Corporation: Image-acquiring device for providing image compensating function and image compensation process thereof
- CN101867721A, priority 2010-04-15, published 2010-10-20, Qingdao Hisense Network Technology Co., Ltd.: Implementation method, implementation device and imaging device for wide dynamic images
- US8965120B2, priority 2012-02-02, published 2015-02-24, Canon Kabushiki Kaisha: Image processing apparatus and method of controlling the same
- CN105791659A, priority 2014-12-19, published 2016-07-20, Lenovo (Beijing) Co., Ltd.: Image processing method and electronic device
Family Cites Families (10)
- JP3578246B2, priority 1997-02-21, published 2004-10-20, Matsushita Electric Industrial Co., Ltd.: Solid-state imaging device
- CN101390384A, priority 2005-12-27, published 2009-03-18, Kyocera Corporation: Imaging device and its image processing method
- CN101222584A, priority 2007-01-12, published 2008-07-16, Sanyo Electric Co., Ltd.: Apparatus and method for blur detection, and apparatus and method for blur correction
- US20090086174A1, priority 2007-09-28, published 2009-04-02, Sanyo Electric Co., Ltd.: Image recording apparatus, image correcting apparatus, and image sensing apparatus
- KR101574733B1, priority 2008-11-19, published 2015-12-04, Samsung Electronics Co., Ltd.: Image processing apparatus for obtaining a high-definition color image and method thereof
- KR101605129B1, priority 2009-11-26, published 2016-03-21, Samsung Electronics Co., Ltd.: Digital photographing apparatus, controlling method thereof and recording medium for the same
- JP5409829B2, priority 2012-02-17, published 2014-02-05, Canon Kabushiki Kaisha: Image processing apparatus, imaging apparatus, image processing method, and program
- JP6020199B2, priority 2013-01-24, published 2016-11-02, Socionext Inc.: Image processing apparatus, method, program, and imaging apparatus
- CN104851079B, priority 2015-05-06, published 2016-07-06, National University of Defense Technology: Low-light license plate image restoration method based on a noisy/blurred image pair
- CN104966071B, priority 2015-07-03, published 2018-07-24, Wuhan Fiberhome Digital Technology Co., Ltd.: Night license plate detection and recognition method and device based on infrared fill light
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant