CN107633498B - Image dark state enhancement method and device and electronic equipment - Google Patents
- Publication number: CN107633498B (application CN201710864602.5A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The embodiment of the invention provides an image dark state enhancement method and device and electronic equipment, and relates to the field of image processing. The method comprises: acquiring a first image and a second image with overlapping fields of view captured by a camera device; performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers; obtaining a mask for each disparity layer of the disparity map, yielding a plurality of masks; overlaying the masks on the first image and the second image respectively, to obtain a first detail image with first edge details and a second detail image with second edge details; adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair; and fusing the brightness information of the registration image pair to obtain a dark state enhanced image.
Description
Technical Field
The invention relates to the field of image processing, in particular to an image dark state enhancement method and device and electronic equipment.
Background
In recent years, with the increasing popularity of consumer and professional digital cameras, a huge amount of image data is being generated. However, owing to scene conditions, many images shot in high-dynamic-range scenes, dim environments, or special lighting conditions have a poor visual effect, and can meet the requirements of display and printing only after post-enhancement processing, such as adjusting the dynamic range or restoring a consistent color appearance.
Image dark state enhancement technology is used to solve the problem of overly dark images. At an early research stage, it attempted registration using a disparity map: the image calibrated by the disparity map is first segmented according to the disparity map as a whole, and pixel registration is then performed on the segmented blocks.
Disclosure of Invention
Based on the above research, the present invention provides a method and an apparatus for enhancing image dark state, and an electronic device, so as to solve the above problems.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
the embodiment of the invention provides an image dark state enhancement method, which comprises the following steps:
acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device;
performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers;
obtaining masks of all parallax layers of the parallax map to obtain a plurality of masks;
overlaying the plurality of masks to the first image to obtain a first detail image with first edge details; overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, wherein the first edge detail of the first detail image in the registration image pair is superposed with the second edge detail of the second detail image;
and fusing the registration image pair to obtain a dark state enhanced image.
As a further step, the adjusting the relative position of the first detail image and the second detail image to obtain a registered image pair includes:
calculating a moving pixel value according to the first edge detail and the second edge detail;
and adjusting the relative position of the first detail image and the second detail image according to the moving pixel value to obtain a registration image pair.
As a further step, the step of obtaining the value of the moving pixel according to the first edge details and the second edge details comprises:
performing expansion processing on the first edge details to obtain a first expansion image with a first expansion edge, and performing expansion processing on the second edge details to obtain a second expansion image with a second expansion edge;
superposing the first expansion edge and the second expansion edge to obtain a superposed edge;
and solving a moving pixel value according to the superposition edge.
As a further step, the step of overlapping the first expansion edge and the second expansion edge to obtain an overlapped edge includes:
adjusting the relative positions of the first and second dilated images such that the first and second dilated images coincide;
aiming at each pixel point of a first expansion edge of the first expansion image, obtaining a pixel point corresponding to the pixel point in a second expansion edge of the second expansion image;
extracting the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge;
adding the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge to obtain accumulated edge position information;
and obtaining a superposition edge according to the accumulated edge position information.
As a further step, the step of fusing the registered image pair to obtain a dark state enhanced image includes:
and respectively acquiring brightness information of the first detail image and the second detail image, and obtaining a dark state enhanced image according to the brightness information of the first detail image and the second detail image.
As a further step, the step of obtaining a dark state enhanced image according to the brightness information of the first detail image and the second detail image includes:
performing brightness fusion on the first detail image, wherein if the brightness information of the pixel point of the first detail image is greater than or equal to the brightness information of the pixel point corresponding to the pixel point in the second detail image, the brightness information of the pixel point of the first detail image is kept unchanged, and if the brightness information of the pixel point of the first detail image is smaller than the brightness information of the pixel point corresponding to the pixel point in the second detail image, the brightness information of the pixel point corresponding to the pixel point in the second detail image is used for replacing the brightness information of the pixel point of the first detail image;
and outputting the first detail image subjected to brightness fusion as a dark state enhanced image.
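The brightness fusion rule recited above keeps, for each pixel, whichever of the two detail images is brighter, which amounts to a per-pixel maximum of the two luminance channels. A minimal NumPy sketch (the function and array names are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def fuse_luminance(lum1, lum2):
    """Per-pixel fusion rule: keep the brightness of the first detail
    image unless the corresponding pixel of the second detail image is
    brighter, in which case take the second image's brightness."""
    return np.maximum(lum1, lum2)

# toy 2x2 luminance channels of the two registered detail images
lum1 = np.array([[10, 200], [30, 90]])
lum2 = np.array([[50, 100], [30, 120]])
fused = fuse_luminance(lum1, lum2)
```

The fused image is then output as the dark state enhanced image.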
The embodiment of the invention provides an image dark state enhancement device, which comprises an image acquisition module, a stereo matching module, a mask extraction module, an image detail generation module, an image registration module and an image fusion module;
the image acquisition module is used for acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device;
the stereo matching module is used for carrying out stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers;
the mask extraction module is used for acquiring masks of all parallax layers of the parallax map to obtain a plurality of masks;
the image detail generating module is used for overlaying the plurality of masks on the first image to obtain a first detail image with first edge details, and overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
the image registration module is used for adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, and the first edge detail and the second edge detail in the registration image pair are overlapped;
the image fusion module is used for fusing the registration image pair to obtain a dark state enhanced image.
Further, the image registration module comprises a unit for obtaining the value of the moving pixel and a unit for adjusting the relative position of the image;
the unit for obtaining the moving pixel value is used for obtaining the moving pixel value according to the first edge details and the second edge details;
and the image relative position adjusting unit is used for adjusting the relative positions of the first detail image and the second detail image according to the moving pixel value to obtain a registration image pair.
Further, the image fusion module comprises a brightness information acquisition unit and an image dark state enhancement unit;
the brightness information acquiring unit is used for respectively acquiring the brightness information of the first detail image and the second detail image;
and the image dark state enhancement unit is used for obtaining a dark state enhanced image according to the brightness information of the first detail image and the second detail image.
An embodiment of the present invention further provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and the processor implements the following steps when executing the program:
acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device;
performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers;
obtaining masks of all parallax layers of the parallax map to obtain a plurality of masks;
overlaying the plurality of masks to the first image to obtain a first detail image with first edge details; overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, wherein the first edge detail of the first detail image in the registration image pair is superposed with the second edge detail of the second detail image;
and fusing the registration image pair to obtain a dark state enhanced image.
The embodiment of the invention provides an image dark state enhancement method, an image dark state enhancement device and electronic equipment.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of an electronic device 100 according to a preferred embodiment of the invention.
Fig. 2 shows a flowchart of an image dark state enhancement method provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating the sub-steps included in step S105 described in fig. 2.
Fig. 4 is a schematic diagram illustrating sub-steps included in step S1051 illustrated in fig. 3 according to an embodiment.
FIG. 5 is a schematic diagram illustrating the sub-steps included in step A2 described in FIG. 4 according to an embodiment.
FIG. 6 is a block diagram of an image dark enhancement apparatus according to a preferred embodiment of the present invention.
Fig. 7 is a block diagram of an image registration module in the image dark state enhancing apparatus shown in fig. 6.
Fig. 8 is a block diagram of an image fusion module in the image dark state enhancement device shown in fig. 6.
Reference numerals: 100-electronic device; 101-memory; 102-memory controller; 103-processor; 104-peripheral interface; 105-camera device; 106-display device; 200-image dark state enhancement device; 201-image acquisition module; 202-stereo matching module; 203-mask extraction module; 204-image detail generation module; 205-image registration module; 206-image fusion module; 2051-moving pixel value calculation unit; 2052-image relative position adjustment unit; 2061-brightness information acquisition unit; 2062-image dark state enhancement unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a block diagram illustrating an electronic device 100 according to a preferred embodiment of the invention. The electronic device 100 may be, but is not limited to, a smart phone, a tablet computer, a laptop portable computer, a car computer, a Personal Digital Assistant (PDA), a wearable mobile terminal, a desktop computer, and the like. The electronic device 100 comprises a memory 101, a memory controller 102, a processor 103, a peripheral interface 104, a camera device 105, a display device 106 and an image dark state enhancing device 200.
The memory 101, the memory controller 102, the processor 103, the peripheral interface 104, the camera device 105 and the display device 106 are electrically connected directly or indirectly to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The image dark state enhancing apparatus 200 includes at least one software functional module which can be stored in the memory 101 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device 100. The processor 103 is used for executing an executable module or a computer program stored in the memory 101, such as a software functional module or a computer program included in the image dark state enhancement device 200.
The memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 101 is used for storing a program, and the processor 103 executes the program after receiving an execution instruction. The method defined in any embodiment of the present invention may be applied to the processor 103, or implemented by the processor 103.
The processor 103 may be an integrated circuit chip having signal processing capability. The processor 103 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a voice processor, a video processor, and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 103 may be any conventional processor, and the like.
The peripheral interface 104 is used to couple various input/output devices to the processor 103 and the memory 101. In some embodiments, the peripheral interface 104, the processor 103, and the memory controller 102 may be implemented in a single chip. In other embodiments, they may each be implemented by a separate chip.
The camera device 105 is used for collecting images, and the camera device 105 comprises a first camera and a second camera, wherein the first camera is used for collecting a first image, and the second camera is used for collecting a second image. In the present embodiment, the camera 105 may be, but is not limited to, a binocular camera or a multi-view camera. In an embodiment of the present invention, the first camera may include a three-primary-color sensor, such as an RGB sensor, and the first camera may capture an RGB image through the RGB sensor, so that the first image captured by the first camera may be, but is not limited to, an RGB image. The second camera may include a black-and-white night vision sensor, such as a mono sensor, through which the second camera may capture a black-and-white night vision image, and thus, the second image captured by the second camera may be, but is not limited to, a black-and-white night vision image.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image dark state enhancement method according to an embodiment of the present invention. The image dark state enhancement method comprises the following steps:
step S101: a first image and a second image with overlapping fields of view acquired by the camera device 105 are acquired, the first image being acquired by a first camera of the camera device 105, the second image being acquired by a second camera of said camera device 105.
In the embodiment of the present invention, the first image and the second image may be acquired by the image pickup device 105 provided on the apparatus that can execute the computer program for implementing the image dark state enhancement, or may be acquired by another binocular or multi-view image pickup device 105, such as a binocular digital camera having a memory function.
In an embodiment of the present invention, the camera device 105 may be a binocular camera, wherein the first camera may include a three primary color sensor, such as an RGB sensor, for capturing color images, and the second camera may include a black and white night vision sensor, such as a mono sensor, for capturing black and white night vision images. The first camera can be a left camera in the binocular camera, the second camera can be a right camera in the binocular camera, correspondingly, the first image can be a left image collected by the binocular camera, and the second image can be a right image collected by the right camera of the binocular camera.
Step S102: and performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers.
In the embodiment of the present invention, the method for stereo matching the first image and the second image to obtain the disparity map including a plurality of disparity layers may be, but is not limited to: 1. obtaining a plurality of pairs of matched pixel points by establishing a one-to-one correspondence between the first mark pixel points of the first image and the second mark pixel points of the second image. 2. Respectively extracting the position information of the first mark pixel point and the second mark pixel point in each matched pixel point pair. 3. Calculating the disparity between the first mark pixel point and the second mark pixel point according to their position information. 4. Processing the plurality of pairs of matched pixel points to obtain a plurality of corresponding disparities, and converting the disparities into pixel information to obtain a disparity map, where the pixel points of the disparity map correspond one-to-one to the matched pixel point pairs. 5. Grouping the points with the same pixel information in the disparity map into the same disparity layer for output. Points in the disparity map having the same pixel value have the same disparity, and therefore correspond to the same distance from the camera to the corresponding points in the scene. In the embodiment of the present invention, the first mark pixel points and the second mark pixel points refer to the pixel points, in the part of the first image whose field of view overlaps the second image, that can be successfully matched; if all pixel points of the first image and the second image can be successfully matched, they refer to all pixel points.
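The five steps above can be sketched with NumPy as follows. This is an illustrative sketch only: the function name, the match-pair format, and the use of the horizontal position difference as the disparity are assumptions, not taken from the patent.

```python
import numpy as np

def disparity_layers(matches, shape):
    """Build a disparity map from matched pixel pairs and group equal
    disparity values into layers.

    matches: list of ((row, col1), (row, col2)) matched pixel point pairs
             from the first and second image (hypothetical format)
    shape:   (height, width) of the disparity map
    """
    disp = np.zeros(shape, dtype=np.int32)
    for (r1, c1), (r2, c2) in matches:
        # steps 2-4: disparity from the positions of the matched pair,
        # stored as pixel information at the first image's position
        disp[r1, c1] = abs(c1 - c2)
    # step 5: points with the same disparity value form one layer
    layers = {d: (disp == d) for d in np.unique(disp) if d > 0}
    return disp, layers

matches = [((0, 5), (0, 2)), ((1, 6), (1, 3)), ((2, 4), (2, 0))]
disp, layers = disparity_layers(matches, (3, 8))
```

Here the first two pairs share disparity 3 and fall on one layer, while the third pair forms a layer of its own.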
In the embodiment of the present invention, for example, an Efficient Large-scale stereo-matching (ELAS) algorithm may be used to perform stereo matching on the first image and the second image, so as to obtain a disparity map having a plurality of disparity layers.
Step S103: and acquiring masks of the parallax layers of the parallax map to obtain a plurality of masks.
In the embodiment of the present invention, a mask of each disparity layer of the disparity map is obtained. A mask may be, but is not limited to, the contour, within a disparity layer of the disparity map, of an object in the first image or the second image that needs dark state enhancement; it may also be all edge, texture, or contour features in the disparity map. The method for obtaining the mask of each disparity layer may be, but is not limited to, extracting with the Laplacian operator the contour, in each disparity layer, of the object to be dark-state enhanced in the first image or the second image, obtaining a plurality of masks, where the size of each mask is the same as the size of the first detail image and of the second detail image.
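The Laplacian-based mask extraction can be sketched as follows, under the assumption that each disparity layer is a binary array; the kernel is the common 4-neighbour Laplacian, and all names are illustrative.

```python
import numpy as np

# 4-neighbour Laplacian kernel, commonly used for contour extraction
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def layer_mask(layer):
    """Extract the contour of one binary disparity layer as a mask.

    The Laplacian response is zero over constant regions, so only
    pixels on the layer's boundary survive the != 0 test."""
    h, w = layer.shape
    out = np.zeros((h, w), dtype=np.int32)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.sum(layer[r-1:r+2, c-1:c+2] * LAPLACIAN)
    return (out != 0).astype(np.uint8)

layer = np.zeros((5, 5), dtype=np.int32)
layer[1:4, 1:4] = 1            # a 3x3 square "object" on this layer
mask = layer_mask(layer)       # mask marks the square's boundary only
```

The interior of the square gives a zero Laplacian response, so the mask keeps only its 8 boundary pixels.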
Step S104: overlaying a plurality of masks on the first image to obtain a first detail image with first edge details; and overlapping the plurality of masks to the second image to obtain a second detail image with second edge details.
In this embodiment, the method for overlaying the plurality of masks on the first image to obtain the first detail image with the first edge details may be, but is not limited to: adjusting the relative positions of the disparity map and the first image so that the disparity map and the first image are overlapped; and overlapping a plurality of masks attached to the parallax map to corresponding positions in the first image to obtain a first detail image with first edge details. Similarly, the method of overlaying the plurality of masks onto the second image to obtain the second detail image with the second edge detail may be, but is not limited to: adjusting the relative positions of the parallax map and the second image so that the parallax map and the second image are overlapped; and overlapping a plurality of masks attached to the parallax map to corresponding positions in the second image to obtain a second detail image with second edge details.
In the embodiment of the present invention, the method of overlaying the plurality of masks on the first image and the second image respectively may be, but is not limited to: retaining only the pixels of the first image and of the second image that are covered by the masks. The targets needing dark state enhancement in the first image and in the second image are thereby preserved, yielding a first detail image with first edge details and a second detail image with second edge details.
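The mask overlay described above can be sketched as follows. This is an illustrative NumPy sketch: keeping only mask-covered pixels is one reading of the overlay step, and all names are assumptions.

```python
import numpy as np

def overlay_masks(image, masks):
    """Keep only the pixels of `image` covered by the union of the
    per-layer masks, producing a 'detail image'."""
    union = np.zeros(image.shape[:2], dtype=bool)
    for m in masks:
        union |= m.astype(bool)
    # pixels outside every mask are zeroed out
    return np.where(union, image, 0)

image = np.arange(16).reshape(4, 4)
m1 = np.zeros((4, 4), dtype=np.uint8); m1[0, :] = 1   # top-row mask
m2 = np.zeros((4, 4), dtype=np.uint8); m2[:, 0] = 1   # left-column mask
detail = overlay_masks(image, [m1, m2])
```

Only the top row and left column of the toy image survive; everything else is suppressed.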
Step S105: and adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, wherein the first edge detail of the first detail image in the registration image pair coincides with the second edge detail of the second detail image.
In the embodiment of the invention, the calculation amount in the image alignment process can be reduced by adjusting the relative position between the first detail image and the second detail image so that the first edge detail of the first detail image and the second edge detail of the second detail image are overlapped.
Referring to fig. 3, step S105 may include step S1051 and step S1052, and fig. 3 is a schematic diagram of the sub-steps included in step S105 described in fig. 2 in the embodiment.
Step S1051: and calculating the value of the moving pixel according to the first edge detail and the second edge detail.
In the embodiment of the present invention, the method for obtaining the value of the moving pixel according to the first edge details and the second edge details may be, but is not limited to: and overlapping the first edge details and the second edge details to obtain an overlapped edge, and solving a moving pixel value according to the overlapped edge. Referring to fig. 4, the step S1051 may include three substeps, i.e., a step a1, a step a2, and a step A3. Fig. 4 is a schematic diagram illustrating sub-steps included in step S1051 illustrated in fig. 3 according to an embodiment.
Step A1: and performing expansion processing on the first edge details to obtain a first expansion image with a first expansion edge, and performing expansion processing on the second edge details to obtain a second expansion image with a second expansion edge.
In this embodiment of the present invention, the method for performing expansion (dilation) processing on the first edge detail and the second edge detail to obtain the first expansion image and the second expansion image may be, but is not limited to: 1. constructing a 3x3 structuring element; 2. scanning each pixel of the image with the 3x3 structuring element; 3. applying the structuring element to each pixel point of the first edge detail and of the second edge detail respectively: if a pixel point of the first edge detail and every pixel covered by the structuring element are all 0, the pixel value of that pixel point is set to 0, and otherwise it is set to 1; after this operation has been performed on all pixel points of the first edge detail, a first expansion image with a first expansion edge is obtained. Likewise, if a pixel point of the second edge detail and every pixel covered by the structuring element are all 0, its pixel value is set to 0, and otherwise set to 1; after the operation has been performed on all pixel points of the second edge detail, a second expansion image with a second expansion edge is obtained. (This rule sets a pixel to 1 whenever any pixel under the structuring element is 1, that is, standard morphological dilation.)
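The 3x3 expansion of step A1 can be sketched as follows. The element-wise rule above (a pixel becomes 1 when any pixel under the structuring element is 1) is standard binary dilation, implemented here as a maximum over the nine shifted copies of the edge image; names are illustrative.

```python
import numpy as np

def dilate3x3(edge):
    """3x3 binary dilation: a pixel becomes 1 if any pixel of its
    3x3 neighbourhood is 1."""
    padded = np.pad(edge, 1)
    out = np.zeros_like(edge)
    h, w = edge.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            # maximum over all nine shifted copies of the edge image
            out = np.maximum(out, padded[1+dr:1+dr+h, 1+dc:1+dc+w])
    return out

edge = np.zeros((5, 5), dtype=np.uint8)
edge[2, 2] = 1                  # a single edge pixel
dilated = dilate3x3(edge)       # grows into a 3x3 block around it
```

Dilation thickens the thin edge details, which makes the subsequent edge-coincidence step tolerant of small misalignments.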
Step A2: and overlapping the first expansion edge and the second expansion edge to obtain an overlapped edge.
Referring to FIG. 5, the step A2 may include steps A21-A25. FIG. 5 is a schematic diagram illustrating the sub-steps included in step A2 described in FIG. 4 according to an embodiment.
Step A21: the relative positions of the first and second dilated images are adjusted so that the first and second dilated images coincide.
In the embodiment of the present invention, the method for adjusting the relative positions of the first expansion image and the second expansion image includes:
respectively extracting the four vertexes of the first expansion image and the second expansion image to form the sets Vl = {vl1, vl2, vl3, vl4} and Vr = {vr1, vr2, vr3, vr4}, wherein vli (i = 1, 2, 3, 4) denotes the ith vertex of the first expansion image and vri (i = 1, 2, 3, 4) denotes the ith vertex of the second expansion image; and determining a threshold ε for coincidence of the first expansion image and the second expansion image, wherein ε is a constant;
calculating the Euclidean distance between each vertex of the first expansion image and the corresponding vertex in the second expansion image: di = sqrt((xli − xri)² + (yli − yri)²), wherein (xli, yli) and (xri, yri) respectively denote the coordinates of the ith vertex of the first expansion image and the coordinates of the ith vertex of the second expansion image; and
summing the Euclidean distances between the four vertexes of the first expansion image and the four vertexes of the second expansion image to obtain the total error E = d1 + d2 + d3 + d4;
judging whether the total error E is larger than the coincidence threshold ε: if so, moving the second expansion image, recalculating the total error, and repeating the judgment; otherwise, the first expansion image and the second expansion image coincide, and the movement of the second expansion image is stopped.
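The vertex-based coincidence loop of step A21 can be sketched as follows. This is a minimal illustration under stated assumptions: the distance and total error follow the formulas above, and translating the second expansion image by the mean vertex offset is only one plausible move rule, since the embodiment merely requires recomputing the error after each move; the names `total_vertex_error` and `align_by_vertices` are hypothetical.

```python
import math

def total_vertex_error(vl, vr):
    """Sum of Euclidean distances between corresponding vertices (E = d1+...+d4)."""
    return sum(math.hypot(xl - xr, yl - yr)
               for (xl, yl), (xr, yr) in zip(vl, vr))

def align_by_vertices(vl, vr, eps=1.0, max_iters=100):
    """Move the second image's vertices until the total error is within eps."""
    vr = [tuple(v) for v in vr]
    for _ in range(max_iters):
        if total_vertex_error(vl, vr) <= eps:
            break
        # One plausible move rule: translate by the mean vertex offset.
        dx = sum(xl - xr for (xl, _), (xr, _) in zip(vl, vr)) / len(vl)
        dy = sum(yl - yr for (_, yl), (_, yr) in zip(vl, vr)) / len(vl)
        vr = [(x + dx, y + dy) for x, y in vr]
    return vr

# Second image offset by a pure translation of (2, 1).
vl = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
vr = [(2.0, 1.0), (6.0, 1.0), (6.0, 5.0), (2.0, 5.0)]
aligned = align_by_vertices(vl, vr, eps=1.0)
print(total_vertex_error(vl, aligned) <= 1.0)  # the images now coincide
```

For a pure translation, the mean-offset rule converges in a single move; for more general misalignments, more iterations (or a different move rule) would be needed.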
Step A22: and aiming at each pixel point of the first expansion edge, obtaining a pixel point corresponding to the pixel point in the second expansion edge.
In the embodiment of the present invention, the pixel point in the second expansion edge of the second expansion image corresponding to each pixel point in the first expansion edge of the first expansion image may be obtained, for example, as follows: acquiring the position information of each pixel point of the first expansion edge in the coincided first expansion image and the position information of the corresponding pixel point of the second expansion edge in the coincided second expansion image, and determining the correspondence from this position information. As another example, a matching method based on pixel information may be used to obtain, in the second expansion edge of the second expansion image, the pixel point corresponding to each pixel point in the first expansion edge of the first expansion image.
Step A23: and extracting the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge.
Step A24: adding the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge to obtain accumulated edge position information;
step A25: and obtaining the superposition edge according to the accumulated edge position information.
In the embodiment of the present invention, the superimposed edge may be obtained from the accumulated edge position information by, for example, dividing the accumulated edge position information by 2.
In the embodiment of the present invention, the superimposed edge may alternatively be obtained by extracting the pixel information of each pixel point of the first expansion edge and the pixel information of each pixel point of the second expansion edge, and deriving the superimposed edge from this pixel information.
Step A3: and solving the value of the moving pixel according to the superposition edge.
In the embodiment of the invention, the method for obtaining the moving pixel value from the superimposed edge may be, but is not limited to: acquiring the minimum value of the superimposed edge, and taking this minimum value as the moving pixel value.
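Steps A24, A25, and A3 — accumulate the corresponding edge positions, divide by 2, and take the minimum of the superimposed edge as the moving pixel value — can be sketched as follows, with hypothetical coordinate values; reading "minimum value of the superimposed edge" as the smallest superimposed coordinate is one possible interpretation:

```python
import numpy as np

# Corresponding edge-pixel positions (row, col) in the first and second
# expansion edges (hypothetical values for illustration).
first_edge_pts  = np.array([[10.0, 12.0], [11.0, 13.0], [12.0, 14.0]])
second_edge_pts = np.array([[10.0, 16.0], [11.0, 17.0], [12.0, 18.0]])

accumulated  = first_edge_pts + second_edge_pts  # step A24: add positions
superimposed = accumulated / 2.0                 # step A25: divide by 2

# Step A3 (one reading): the minimum value of the superimposed edge is
# taken as the moving pixel value.
moving_pixel_value = superimposed.min()
print(superimposed)
print(moving_pixel_value)  # → 10.0
```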
Step S1052: and adjusting the relative positions of the first detail image and the second detail image according to the moving pixel value to obtain a registration image pair.
In the embodiment of the present invention, the method in step S1052 may be, but is not limited to: adjusting the relative position of the first detail image and the second detail image using the moving pixel value as the step length. The method of step S1052 may further include, after each such adjustment, judging whether the first edge details coincide with the second edge details. If so, the adjustment of the relative position of the first detail image and the second detail image is completed, and a registered image pair comprising the first detail image and the second detail image is obtained; if not, a new moving pixel value is obtained and used as the step length for a further adjustment of the relative position of the first detail image and the second detail image.
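The step-length adjustment loop of step S1052 might look like the following sketch, assuming binary edge maps and a purely horizontal shift; the helper names and the wrap-around `np.roll` shift are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def edges_coincide(e1, e2, shift):
    """True when edge map e2, shifted by `shift` columns (with wrap-around),
    matches edge map e1 exactly."""
    return np.array_equal(e1, np.roll(e2, shift, axis=1))

def register(e1, e2, step):
    """Shift the second detail image's edges by `step` pixels at a time
    until they coincide with the first image's edges (search is bounded
    so the loop cannot run forever)."""
    shift = 0
    for _ in range(e1.shape[1] // max(step, 1) + 1):
        if edges_coincide(e1, e2, shift):
            return shift
        shift += step
    return None  # no coincidence found within the search range

# Vertical edge at column 5 in the first image, column 2 in the second.
e1 = np.zeros((4, 8), dtype=np.uint8); e1[:, 5] = 1
e2 = np.zeros((4, 8), dtype=np.uint8); e2[:, 2] = 1
print(register(e1, e2, step=3))  # prints 3: one step of 3 columns aligns them
```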
Step S106: and fusing the registration image pair to obtain a dark state enhanced image.
In the embodiment of the present invention, the method for fusing the registered image pair to obtain the dark state enhanced image may be, but is not limited to: respectively acquiring the brightness information of the first detail image and the second detail image, and obtaining the dark state enhanced image from this brightness information. One such method, given as an example and not a limitation, performs brightness fusion on the first detail image: if the brightness information of a pixel point of the first detail image is greater than or equal to the brightness information of the corresponding pixel point in the second detail image, the brightness information of the pixel point of the first detail image is kept unchanged; if it is smaller, the brightness information of the corresponding pixel point in the second detail image replaces it. The first detail image after brightness fusion is output as the dark state enhanced image.
In this embodiment of the present invention, the method for obtaining the dark state enhanced image from the brightness information of the first detail image and the second detail image may also be: establishing a blank image and assigning the pixel values of the first detail image to the blank image to obtain an original image; comparing the brightness information of the first detail image and the second detail image: if the brightness value of the first detail image is greater than or equal to the brightness value of the second detail image, assigning the brightness value of the first detail image to the brightness channel of the original image, and otherwise assigning the brightness value of the second detail image to the brightness channel of the original image; and outputting the original image after brightness fusion as the dark state enhanced image.
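Both fusion variants above reduce to a per-pixel maximum over the two luminance channels. A minimal sketch with hypothetical luminance values; `fuse_luminance` is an illustrative name:

```python
import numpy as np

def fuse_luminance(l1, l2):
    """Per-pixel brightness fusion: keep the first image's luminance where
    it is >= the second's, otherwise take the second's (element-wise max)."""
    return np.where(l1 >= l2, l1, l2)

l1 = np.array([[200,  30], [90, 120]], dtype=np.uint8)  # e.g. RGB luminance
l2 = np.array([[180,  95], [40, 160]], dtype=np.uint8)  # e.g. mono night vision
print(fuse_luminance(l1, l2))  # [[200  95] [ 90 160]]
```

The fused channel keeps the color image's luminance where it is already bright and borrows the mono sensor's luminance in the dark regions, which is what yields the dark state enhancement.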
Referring to fig. 6, fig. 6 is a block diagram illustrating an image dark state enhancement device 200 according to a preferred embodiment of the present invention. The image dark state enhancement apparatus 200 includes an image acquisition module 201, a stereo matching module 202, a mask extraction module 203, an image detail generation module 204, an image registration module 205, and an image fusion module 206.
The image acquisition module 201 is configured to acquire a first image and a second image having overlapping fields of view acquired by the camera device 105, where the first image is acquired by a first camera of the camera device 105, and the second image is acquired by a second camera of the camera device 105. In the embodiment of the present invention, the image acquiring module 201 may be configured to execute step S101.
The stereo matching module 202 is configured to perform stereo matching on the first image and the second image to obtain a disparity map including a plurality of disparity layers. In this embodiment of the present invention, the stereo matching module 202 may be configured to execute step S102.
The mask extraction module 203 is configured to obtain masks of the parallax layers of the parallax map, and obtain a plurality of masks. In an embodiment of the present invention, the mask extraction module 203 may be configured to perform step S103.
The image detail generating module 204 is configured to overlay the plurality of masks onto the first image to obtain a first detail image with first edge details, and to overlay the plurality of masks onto the second image to obtain a second detail image with second edge details. In an embodiment of the present invention, the image detail generating module 204 may be configured to execute step S104.
The image registration module 205 is configured to adjust the relative position of the first detail image and the second detail image, resulting in a registered image pair, where the first edge detail and the second edge detail in the registered image pair coincide. In an embodiment of the present invention, the image registration module 205 may be configured to perform step S105.
Referring to fig. 7, fig. 7 is a block diagram illustrating an image registration module 205 in the image dark state enhancing apparatus 200 shown in fig. 6. The image registration module 205 includes a calculate moving pixel values unit 2051 and an adjust image relative position unit 2052. The find moving pixel value unit 2051 is configured to find a moving pixel value according to the first edge details and the second edge details. In this embodiment of the present invention, the find moving pixel value unit 2051 may be used to perform step S1051. The adjust image relative position unit 2052 is configured to adjust the relative position of the first detail image and the second detail image according to the moving pixel value. In an embodiment of the present invention, the adjust image relative position unit 2052 may be used to perform step S1052.
The image fusion module 206 is configured to fuse the registered image pair to obtain a dark state enhanced image. In an embodiment of the present invention, the image fusion module 206 may be configured to execute step S106. Referring to fig. 8, fig. 8 is a block diagram illustrating the image fusion module 206 in the image dark state enhancement device 200 shown in fig. 6. The image fusion module 206 includes an acquisition brightness information unit 2061 and an image dark state enhancement unit 2062. The acquiring luminance information unit 2061 is configured to acquire luminance information of the first detail image and the second detail image, respectively. In the embodiment of the present invention, the unit 2061 of obtaining the brightness information may be used to execute step S1061. The image dark state enhancing unit 2062 is configured to obtain a dark state enhanced image according to the brightness information of the first detail image and the second detail image. In this embodiment of the present invention, the image dark state enhancing unit 2062 may be used to execute step S1062.
In summary, according to the image dark state enhancement method and device and the electronic device provided by the embodiments of the present invention, the plurality of masks based on the disparity map are respectively overlaid on the first image and the second image, so that the first edge details and the second edge details can be accurately obtained. The first detail image and the second detail image are registered based on the first edge details and the second edge details, which is convenient and simple and requires little calculation. Image fusion is then performed according to the first image and the second image to realize dark state enhancement, so that the output dark state enhanced image carries the information of both the first image and the second image. Furthermore, the first camera includes an RGB sensor, so that the first image captured by it has color information, and the second camera includes a mono sensor, so that the second image captured by it has black-and-white night vision information; image fusion according to the first image and the second image therefore yields a dark state enhanced image that has both the color information of the first image and the black-and-white night vision information of the second image, with high resolution, abundant image details, and a good dark state enhancement effect. By using a camera device comprising a combination of an RGB sensor and a mono sensor, the device is low in cost and suitable for a large number of application scenarios, in particular consumer-grade application scenarios.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method for enhancing dark states of an image, the method comprising:
acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device; performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers; the disparity values of points with the same pixel value in the disparity map are the same;
obtaining masks of all parallax layers of the parallax map to obtain a plurality of masks;
overlaying the plurality of masks to the first image to obtain a first detail image with first edge details; overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, wherein the first edge detail of the first detail image in the registration image pair is superposed with the second edge detail of the second detail image;
and fusing the registration image pair to obtain a dark state enhanced image.
2. The method according to claim 1, wherein the step of adjusting the relative positions of the first detail image and the second detail image to obtain a registered image pair comprises:
calculating a moving pixel value according to the first edge detail and the second edge detail;
and adjusting the relative position of the first detail image and the second detail image according to the moving pixel value to obtain a registration image pair.
3. The method according to claim 2, wherein the step of obtaining the value of the moving pixel according to the first edge details and the second edge details comprises:
performing expansion processing on the first edge details to obtain a first expansion image with a first expansion edge, and performing expansion processing on the second edge details to obtain a second expansion image with a second expansion edge;
superposing the first expansion edge and the second expansion edge to obtain a superposed edge;
and solving a moving pixel value according to the superposition edge.
4. The method for enhancing the dark state of the image according to claim 3, wherein the step of superimposing the first expansion edge and the second expansion edge to obtain a superimposed edge comprises:
adjusting the relative positions of the first and second dilated images such that the first and second dilated images coincide;
aiming at each pixel point of a first expansion edge of the first expansion image, obtaining a pixel point corresponding to the pixel point in a second expansion edge of the second expansion image;
extracting the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge;
adding the position information of each pixel point of the first expansion edge and the position information of the pixel point corresponding to the pixel point in the second expansion edge to obtain accumulated edge position information;
and obtaining a superposition edge according to the accumulated edge position information.
5. The method of claim 1, wherein the step of fusing the registered image pair to obtain a dark state enhanced image comprises:
and respectively acquiring brightness information of the first detail image and the second detail image, and obtaining a dark state enhanced image according to the brightness information of the first detail image and the second detail image.
6. The method according to claim 5, wherein the step of obtaining the dark state enhanced image according to the brightness information of the first detail image and the second detail image comprises:
performing brightness fusion on the first detail image, wherein if the brightness information of the pixel point of the first detail image is greater than or equal to the brightness information of the pixel point corresponding to the pixel point in the second detail image, the brightness information of the pixel point of the first detail image is kept unchanged, and if the brightness information of the pixel point of the first detail image is smaller than the brightness information of the pixel point corresponding to the pixel point in the second detail image, the brightness information of the pixel point corresponding to the pixel point in the second detail image is used for replacing the brightness information of the pixel point of the first detail image;
and outputting the first detail image subjected to brightness fusion as a dark state enhanced image.
7. An image dark state enhancement device is characterized by comprising an image acquisition module, a stereo matching module, a mask extraction module, an image detail generation module, an image registration module and an image fusion module;
the image acquisition module is used for acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device;
the stereo matching module is used for carrying out stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers; the disparity values of points with the same pixel value in the disparity map are the same;
the mask extraction module is used for acquiring masks of all parallax layers of the parallax map to obtain a plurality of masks;
the image detail generating module is used for overlaying the plurality of masks on the first image to obtain a first detail image with first edge details, and overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
the image registration module is used for adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, and the first edge detail and the second edge detail in the registration image pair are overlapped;
the image fusion module is used for fusing the registration image pair to obtain a dark state enhanced image.
8. The image dark state enhancement device of claim 7, wherein the image registration module comprises a unit for finding a moving pixel value and a unit for adjusting the relative position of the image;
the unit for obtaining the moving pixel value is used for obtaining the moving pixel value according to the first edge details and the second edge details;
and the image relative position adjusting unit is used for adjusting the relative positions of the first detail image and the second detail image according to the moving pixel value to obtain a registration image pair.
9. The image dark state enhancement device of claim 7, wherein the image fusion module comprises a luminance information obtaining unit and an image dark state enhancement unit;
the brightness information acquiring unit is used for respectively acquiring the brightness information of the first detail image and the second detail image;
and the image dark state enhancement unit is used for obtaining a dark state enhanced image according to the brightness information of the first detail image and the second detail image.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
acquiring a first image and a second image which are acquired by a camera device and have overlapped visual fields, wherein the first image is acquired by a first camera of the camera device, and the second image is acquired by a second camera of the camera device;
performing stereo matching on the first image and the second image to obtain a disparity map comprising a plurality of disparity layers; the disparity values of points with the same pixel value in the disparity map are the same;
obtaining masks of all parallax layers of the parallax map to obtain a plurality of masks;
overlaying the plurality of masks to the first image to obtain a first detail image with first edge details; overlaying the plurality of masks on the second image to obtain a second detail image with second edge details;
adjusting the relative position of the first detail image and the second detail image to obtain a registration image pair, wherein the first edge detail of the first detail image in the registration image pair is superposed with the second edge detail of the second detail image;
and fusing the registration image pair to obtain a dark state enhanced image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710864602.5A CN107633498B (en) | 2017-09-22 | 2017-09-22 | Image dark state enhancement method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107633498A CN107633498A (en) | 2018-01-26 |
CN107633498B true CN107633498B (en) | 2020-06-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||