CN108364263B - Vehicle-mounted image processing method for standard definition input and high definition output - Google Patents
- Publication number: CN108364263B (application CN201810112289.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- coefficient
- pixel
- images
- adjustment coefficient
- Prior art date
- Legal status: Active (an assumed status, not a legal conclusion)
Classifications
- G06T5/92
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/73
- H04N23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- H04N23/80 — Camera processing pipelines; Components thereof
- G06T2207/10024 — Color image
- G06T2207/20221 — Image fusion; Image merging
- G06T2207/30252 — Vehicle exterior; Vicinity of vehicle
Abstract
A vehicle-mounted image processing method with standard definition input and high definition output. The method comprises image enhancement processing and image fusion processing. The image enhancement processing includes: step one, inputting an RGB color image and applying the following steps to each of the R, G and B channel images separately; step two, performing gray-stretch correction on the image; step three, calculating the brightness coefficient and the adjustment coefficient of the whole image; step four, dividing the image into M×M sub-block images; step five, calculating the brightness coefficient and the adjustment coefficient of each sub-block image; step six, correcting each sub-block adjustment coefficient according to the overall image adjustment coefficient; step seven, calculating an adjustment coefficient for each pixel; step eight, adjusting the brightness of each pixel. The image fusion processing blends the overlapping area of two adjacent images with weights that change gradually with position: defining the adjacent images as a first image and a second image, the overlap transitions gradually from the first image to the second image through position-weighted fusion, thereby fusing the two adjacent images.
Description
Technical Field
The invention relates to the technical field of vehicle-mounted image processing, in particular to a vehicle-mounted image processing method for standard definition input and high definition output.
Background
With the development of the automobile industry and the drive to improve driving safety, panoramic surround-view systems are increasingly fitted to vehicles. As a part of vehicle-mounted monitoring, a surround-view system places at least four wide-angle cameras (usually ultra-wide-angle or fisheye cameras) around the vehicle body so as to cover the entire field of view around the vehicle. The cameras simultaneously transmit the collected multi-channel video images to the vehicle's system, which processes them into a 360-degree top view of the vehicle's surroundings and displays it on the center-console screen, so that the driver can clearly see whether obstacles are present around the vehicle, judge their relative direction and distance, and avoid them. Because the surround view is very intuitive and eliminates blind spots, it helps the driver maneuver the vehicle in situations such as reversing into a parking space or negotiating complex road surfaces, effectively reducing scraping, collision and similar accidents.
A conventional panoramic surround-view system includes a plurality of cameras mounted around the vehicle body, an image acquisition component, a video synthesis/processing component, a digital image processing component and an on-board display. The cameras capture the front, rear, left and right views of the vehicle; the image acquisition component converts these into digital information and sends it to the video synthesis/processing component, which processes it into a panoramic image; the digital image processing component then converts the result into an analog signal for output, and the panoramic image of the vehicle and its surroundings is shown on the on-board display installed in the vehicle's console. The surround-view system mainly displays the picture spliced and fused from the multiple cameras on the screen, and can also display the picture of a single camera in a split screen to assist specific scenarios, for example showing the rear view when reversing, or the side views for observing blind spots when turning.
At present, mainstream panoramic surround-view systems are divided by video resolution into standard definition systems and high definition systems. A standard definition surround-view system takes at least four standard definition cameras as input and, after splicing and fusion, outputs through a standard definition channel to the display end; the video standard is generally NTSC or PAL and the video resolution generally 480P or 576P. A high definition surround-view system takes at least four high definition cameras as input and, after splicing and fusion, outputs through a high definition channel to the display end; it generally adopts what the industry calls an LVDS scheme, with a video resolution of 720P or higher, corresponding respectively to Maxim's GMSL SerDes technology and TI's FPD-Link technology.
The main problems of the standard definition surround-view system are low video definition and poor image quality; the high definition system clearly improves on both points, but at the corresponding cost of a high price, particularly for the cameras and cabling. From a cost standpoint it is therefore necessary to choose a standard definition surround-view system, yet its image quality struggles to meet requirements, so image enhancement processing is needed.
Image enhancement technology is an image processing technology that strengthens the information of interest in an image, with the aim of improving image quality and visual effect. By enhancing features such as edge information, contour information and contrast, the features of interest in the image are highlighted and uninteresting features are suppressed, improving image quality, enriching the information content and increasing the image's practical value.
Unprocessed images collected by a conventional vehicle-grade camera suffer, to varying degrees, from blur, low contrast and dim color; overexposure is likely under strong light, while in dim light the overall brightness is insufficient and the image content cannot be properly distinguished. General image enhancement techniques have difficulty addressing these problems simultaneously and can solve only some of them. What is needed is an image enhancement method that considers all of these aspects at once: for example, one that solves overexposure under strong light while also handling the overall darkness of dim-light images, and that can even brighten the dark parts of an image without distorting its bright parts.
In addition, the overlapping portion of vehicle-mounted panoramic surround-view images is usually either cut along a dividing line or displayed as a blend of the two images at a fixed 1:1 ratio; neither approach distinguishes the good and bad parts of the two overlapping images, so defective portions of the original images are displayed and the imaging quality is greatly reduced.
Therefore, how to remedy the above deficiencies of the prior art is the problem to be solved by the present invention.
Disclosure of Invention
The invention aims to provide a vehicle-mounted image processing method for standard definition input and high definition output.
In order to achieve the purpose, the invention adopts the technical scheme that:
a vehicle-mounted image processing method with standard definition input and high definition output; the method is used for processing a panoramic image spliced and synthesized from images captured simultaneously by a plurality of vehicle-mounted standard definition cameras;
the synthesized panoramic image comprises two types of areas, wherein the first type of area is an area formed by single images shot by each camera; the second type of area is an area formed by overlapping images shot by two adjacent cameras;
the method comprises image enhancement processing and image fusion processing; wherein the image enhancement processing is for a first type of region; the image fusion process is directed to a second type of region;
wherein the image enhancement processing includes:
step one, inputting an RGB color image, and performing the following processing on each of the R, G and B channel images separately;
step two, performing gray-stretch correction on the image;
step three, calculating the brightness coefficient and the adjustment coefficient of the whole image;
step four, dividing the image into M×M sub-block images, wherein M is a positive integer greater than or equal to 2;
step five, calculating the brightness coefficient and the adjustment coefficient of each sub-block image;
step six, correcting the adjustment coefficient of each sub-block image according to the overall image adjustment coefficient;
step seven, calculating an adjustment coefficient for each pixel;
step eight, adjusting the brightness of each pixel;
in step two, the gray-stretch correction is performed on the image according to the following formula 1:

I'(x, y) = (I(x, y) − min(I)) / (max(I) − min(I))   (formula 1)

wherein x and y respectively denote the horizontal and vertical distances from a pixel in the image to the image's coordinate origin; I(x, y) denotes the brightness value of pixel (x, y) in the image; I'(x, y) denotes the brightness value of pixel (x, y) after processing; max(I) and min(I) respectively denote the brightness values of the brightest and darkest pixels in the image;
in step three, the overall brightness coefficient of the image is calculated according to the following formula 2:

L_total = (1 / (w × h)) × Σ_(x,y) I'(x, y)   (formula 2)

wherein L_total denotes the overall brightness coefficient of the image, w and h denote the image width and height, and I'(x, y) has the same meaning as defined in formula 1;
in step three, the overall image adjustment coefficient is calculated according to the following formula 3:

α_total = L_total / 0.5   (formula 3)

wherein α_total denotes the overall image adjustment coefficient, and L_total has the same meaning as defined in formula 2;
in step five, the brightness coefficient of each sub-block image is calculated according to formula 2, and the adjustment coefficient of each sub-block image is then calculated according to formula 3, giving the adjustment coefficient α_piece of each image sub-block;
in step six, the adjustment coefficient of each sub-block image is corrected according to the overall image adjustment coefficient, as follows:

when α_piece > 5 × α_total, then α'_piece = 5 × α_total; when α_piece < 0.2 × α_total, then α'_piece = 0.2 × α_total; otherwise α'_piece = α_piece;

wherein α_piece denotes the sub-block adjustment coefficient before correction, α'_piece denotes the sub-block adjustment coefficient after correction, and α_total denotes the overall image adjustment coefficient;
in step seven, the adjustment coefficient of each pixel is calculated from the corrected sub-block adjustment coefficients α'_piece according to the following formula 4:

α'(x, y) = (1 / N) × Σ_(i=a1..a2) Σ_(j=b1..b2) α(i, j)   (formula 4)

wherein α(x, y) equals the adjustment coefficient α'_piece of the sub-block to which pixel (x, y) belongs, x and y having the same meaning as defined in formula 1; α'(x, y) denotes the adjustment coefficient of pixel (x, y);

a1 and a2 denote the lower and upper limits of the horizontal neighborhood of pixel (x, y), and b1 and b2 denote the lower and upper limits of its vertical neighborhood;

wherein a1 = x − w/(2 × M), a2 = x + w/(2 × M), b1 = y − h/(2 × M), b2 = y + h/(2 × M); h denotes the image height, w denotes the image width, and M has the same meaning as in step four;

N = (a2 − a1 + 1) × (b2 − b1 + 1) denotes the number of pixels within the neighborhood;
in step eight, the brightness of each pixel is adjusted according to the following formula 5:

I''(x, y) = I'(x, y)^(α'(x, y))   (formula 5)

wherein I'(x, y) denotes the brightness of pixel (x, y) before adjustment, I''(x, y) denotes its brightness after adjustment, and α'(x, y) has the same meaning as in formula 4;

after the adjustment is finished, the enhanced image is output;
the image fusion processing blends the overlapping area of two adjacent images in a weighted-gradient manner; the color of each pixel in the overlap is determined by the colors of the two adjacent images at that position. Defining the adjacent images as a first image and a second image: within the overlap, the closer a pixel lies to the first image, the closer its color is to that of the first image, and the closer it lies to the second image, the closer its color is to that of the second image. Through position-weighted fusion the overlap transitions gradually from the first image to the second image, thereby fusing the two adjacent images.
The relevant content in the above technical solution is explained as follows:
1. in the above scheme, the first type of area is an area formed by individual images shot by each camera, that is, a part of the panoramic image without image overlap; the second type of area is an area formed by overlapping images captured by two adjacent cameras, i.e., a portion where images overlap in the panoramic image.
2. In the above solution, the object of the image enhancement processing is the portion of the panoramic image without image overlap, such as the reversing image from the rear camera or the blind-spot monitoring image from a side camera.
3. In the above scheme, the image enhancement processing can eliminate image display problems such as blur, low contrast, dim color, overexposure and dark pictures.
4. In the above scheme, the origin of coordinates of the image may be the upper left corner of the image, or may define other positions.
5. In the foregoing solution, the image fusion processing includes:

defining the closest distance from a point P(x, y) in the overlap region to the left boundary as DL(x, y), and the closest distance from P(x, y) to the right boundary as DR(x, y), wherein x and y respectively denote the horizontal and vertical distances from the pixel to the image's coordinate origin (for example the upper-left corner of the image);

the weight coefficient β_l(x, y) of that point is:

β_l(x, y) = DR(x, y) / (DL(x, y) + DR(x, y))

when the pixel values of the first image and the second image at point P are I_l(x, y) and I_r(x, y) respectively, the pixel value I_c(x, y) of the target fused image at that point is:

I_c(x, y) = β_l(x, y) × I_l(x, y) + (1 − β_l(x, y)) × I_r(x, y)

wherein c ∈ {R, G, B} denotes the red, green and blue color channel components, and β_l(x, y) is the weight coefficient.
6. In the above scheme, the image fusion processing is performed on the overlapped region of two adjacent images after registration. "Registration" here means the splicing and matching of the two images: since the two cameras acquire their images separately, the images must be spliced and matched before being displayed as one, so that identical content coincides.
The working principle and the advantages of the invention are as follows:
the invention is based on a vehicle-mounted image architecture with standard definition input and high definition output, and enhances the quality and definition of the image through image processing; the image processing method of the invention applies two different methods to the two region types: image enhancement processing for the portions without image overlap, and image fusion processing for the portions with image overlap;
the invention can enhance image contrast without causing image distortion and can improve the color expressiveness of the image; when an image is overexposed, the overexposed portion can be corrected without darkening the normal portion; when a dark area exists in an image, it can be brightened to bring out detail information while the normally lit portion is not over-brightened and no useful information is lost.
Drawings
FIG. 1 is a block flow diagram of the steps of an image enhancement process in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating calculation of an adjustment coefficient of an image pixel according to an embodiment of the present invention;
fig. 3 is a schematic diagram of image fusion processing according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples:
Embodiment: referring to figs. 1-3, a vehicle-mounted image processing method with standard definition input and high definition output is disclosed; the method is used for processing a panoramic image spliced and synthesized from images captured simultaneously by a plurality of vehicle-mounted standard definition cameras;
the synthesized panoramic image comprises two types of areas, wherein the first type of area is an area formed by single images shot by each camera, namely a part without image overlapping in the panoramic image; the second type of area is an area formed by overlapping images captured by two adjacent cameras, i.e., a portion where images overlap in the panoramic image.
The method comprises image enhancement processing and image fusion processing; wherein the image enhancement processing is for a first type of region; the image fusion process is directed to a second type of region;
as shown in fig. 1, the image enhancement processing includes:
step one, inputting an RGB color image, and performing the following processing on each of the R, G and B channel images separately;
step two, performing gray-stretch correction on the image;
step three, calculating the brightness coefficient and the adjustment coefficient of the whole image;
step four, dividing the image into M×M sub-block images, wherein M is a positive integer greater than or equal to 2;
step five, calculating the brightness coefficient and the adjustment coefficient of each sub-block image;
step six, correcting the adjustment coefficient of each sub-block image according to the overall image adjustment coefficient;
step seven, calculating an adjustment coefficient for each pixel;
step eight, adjusting the brightness of each pixel;
in step two, the gray-stretch correction is performed on the image according to the following formula 1:

I'(x, y) = (I(x, y) − min(I)) / (max(I) − min(I))   (formula 1)

wherein x and y respectively denote the horizontal and vertical distances from a pixel in the image to the image's coordinate origin; I(x, y) denotes the brightness value of pixel (x, y) in the image; I'(x, y) denotes the brightness value of pixel (x, y) after processing; max(I) and min(I) respectively denote the brightness values of the brightest and darkest pixels in the image;
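The gray stretch of formula 1 can be sketched in NumPy as follows; this is an illustrative implementation, not part of the patent, and the function name and the normalization to the [0, 1] range are assumptions consistent with the 0.5 mid-gray target used later in formula 3.

```python
import numpy as np

def gray_stretch(channel):
    """Formula 1 sketch: linearly stretch one channel so the darkest
    pixel maps to 0.0 and the brightest to 1.0 (illustrative name;
    the [0, 1] output range is an assumption)."""
    channel = channel.astype(np.float64)
    lo, hi = channel.min(), channel.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(channel)
    return (channel - lo) / (hi - lo)
```

Applied to each of the R, G and B channels separately, as step one requires.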
in step three, the overall brightness coefficient of the image is calculated according to the following formula 2:

L_total = (1 / (w × h)) × Σ_(x,y) I'(x, y)   (formula 2)

wherein L_total denotes the overall brightness coefficient of the image, w and h denote the image width and height, and I'(x, y) has the same meaning as defined in formula 1;
in step three, the overall image adjustment coefficient is calculated according to the following formula 3:

α_total = L_total / 0.5   (formula 3)

wherein α_total denotes the overall image adjustment coefficient, and L_total has the same meaning as defined in formula 2;
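Formulas 2 and 3 reduce to a mean and a division; a minimal sketch with an illustrative function name, assuming the stretched channel lies in [0, 1]:

```python
import numpy as np

def adjustment_coefficient(stretched):
    """Formulas 2 and 3 in one step: the brightness coefficient L_total
    is the mean of the gray-stretched channel (formula 2), and the
    adjustment coefficient is L_total / 0.5, i.e. the mean measured
    against a mid-gray target (formula 3)."""
    l_total = float(stretched.mean())   # formula 2: mean brightness
    return l_total / 0.5                # formula 3: alpha_total
```

The same pair of formulas is reused in step five on each sub-block to obtain α_piece.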
in step five, the brightness coefficient of each sub-block image is calculated according to formula 2, and the adjustment coefficient of each sub-block image is then calculated according to formula 3, giving the adjustment coefficient α_piece of each image sub-block;
in step six, the adjustment coefficient of each sub-block image is corrected according to the overall image adjustment coefficient, as follows:

when α_piece > 5 × α_total, then α'_piece = 5 × α_total; when α_piece < 0.2 × α_total, then α'_piece = 0.2 × α_total; otherwise α'_piece = α_piece;

wherein α_piece denotes the sub-block adjustment coefficient before correction, α'_piece denotes the sub-block adjustment coefficient after correction, and α_total denotes the overall image adjustment coefficient;
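The step-six correction can be sketched as a clamp of each sub-block coefficient into the band [0.2 × α_total, 5 × α_total]; the exact threshold conditions are embedded as images in the source patent, so the conditions below are a reconstruction and should be treated as an assumption:

```python
def correct_block_coefficient(alpha_piece, alpha_total):
    """Step six sketch: keep each sub-block adjustment coefficient
    within a factor of [0.2, 5] of the overall coefficient. The
    threshold conditions are ASSUMED (the originals are lost images);
    the clamp limits 5*alpha_total and 0.2*alpha_total come from the
    visible text."""
    if alpha_piece > 5 * alpha_total:
        return 5 * alpha_total
    if alpha_piece < 0.2 * alpha_total:
        return 0.2 * alpha_total
    return alpha_piece
```

The clamp prevents an extreme sub-block from being corrected in the opposite direction of the image as a whole.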
in step seven, the adjustment coefficient of each pixel is calculated from the corrected sub-block adjustment coefficients α'_piece according to the following formula 4:

α'(x, y) = (1 / N) × Σ_(i=a1..a2) Σ_(j=b1..b2) α(i, j)   (formula 4)

wherein α(x, y) equals the adjustment coefficient α'_piece of the sub-block to which pixel (x, y) belongs, x and y having the same meaning as defined in formula 1; α'(x, y) denotes the adjustment coefficient of pixel (x, y);

as shown in fig. 2, a1 and a2 denote the lower and upper limits of the horizontal neighborhood of pixel (x, y), and b1 and b2 denote the lower and upper limits of its vertical neighborhood;

wherein a1 = x − w/(2 × M), a2 = x + w/(2 × M), b1 = y − h/(2 × M), b2 = y + h/(2 × M); h denotes the image height, w denotes the image width, and M has the same meaning as in step four;

N = (a2 − a1 + 1) × (b2 − b1 + 1) denotes the number of pixels within the neighborhood;
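Formula 4 amounts to a uniform box filter over the per-pixel block-coefficient map, with a window of roughly one sub-block; the sketch below assumes h and w are divisible by M and uses edge padding at the borders (the patent does not specify the border handling, so that choice is an assumption):

```python
import numpy as np

def pixel_coefficients(block_alpha, h, w, M):
    """Formula 4 sketch: expand the M x M corrected sub-block
    coefficients to a per-pixel map alpha(x, y), then average each
    pixel over an odd-sized box of roughly one sub-block
    ((w/M) x (h/M)) to smooth block boundaries. Assumes h, w are
    divisible by M; edge padding at borders is an assumption."""
    bh, bw = h // M, w // M                          # sub-block size
    alpha = np.kron(block_alpha, np.ones((bh, bw)))  # alpha(x, y) per pixel
    ry, rx = bh // 2, bw // 2                        # half-window radii
    padded = np.pad(alpha, ((ry, ry), (rx, rx)), mode="edge")
    out = np.zeros_like(alpha)
    for dy in range(2 * ry + 1):                     # accumulate the box sum
        for dx in range(2 * rx + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / ((2 * ry + 1) * (2 * rx + 1))       # divide by N
```

Averaging across block boundaries is what makes the final per-pixel coefficient vary smoothly instead of jumping at sub-block edges.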
in step eight, the brightness of each pixel is adjusted according to the following formula 5:

I''(x, y) = I'(x, y)^(α'(x, y))   (formula 5)

wherein I'(x, y) denotes the brightness of pixel (x, y) before adjustment, I''(x, y) denotes its brightness after adjustment, and α'(x, y) has the same meaning as in formula 4;

and after the adjustment is finished, the enhanced image is output.
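Formula 5 itself is an image in the source; the sketch below ASSUMES a gamma-style adjustment, which is consistent with the surrounding definitions: a dark image (mean brightness below 0.5) yields α' < 1 and is brightened, a mid-gray image (α' = 1) is left unchanged, and the result stays within [0, 1].

```python
import numpy as np

def adjust_brightness(stretched, alpha_map):
    """Step eight sketch. The form I''(x, y) = I'(x, y)**alpha'(x, y)
    is an ASSUMPTION (formula 5 is a lost image); alpha' < 1 brightens,
    alpha' > 1 darkens, alpha' = 1 is the identity."""
    return np.power(stretched, alpha_map)
```

A per-pixel α' map from formula 4 thus brightens dark regions and darkens overexposed ones within the same frame.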
The object of the image enhancement processing is the portion of the panoramic image without image overlap, such as the reversing image from the rear camera or the blind-spot monitoring image from a side camera. Image display problems such as blur, low contrast, dim color, overexposure and dark pictures can be eliminated by the image enhancement processing.
The image coordinate origin may be the upper left corner of the image, or may define other positions.
As shown in fig. 3, the image fusion processing blends the overlapping area of two adjacent images in a weighted-gradient manner; the color of each pixel in the overlap is determined by the colors of the two adjacent images at that position. Defining the adjacent images as a first image and a second image: within the overlap, the closer a pixel lies to the first image, the closer its color is to that of the first image, and the closer it lies to the second image, the closer its color is to that of the second image. Through position-weighted fusion the overlap transitions gradually from the first image to the second image, thereby fusing the two adjacent images.
The image fusion processing includes:

defining the closest distance from a point P(x, y) in the overlap region to the left boundary as DL(x, y), and the closest distance from P(x, y) to the right boundary as DR(x, y), wherein x and y respectively denote the horizontal and vertical distances from the pixel to the image's coordinate origin (for example the upper-left corner of the image);

the weight coefficient β_l(x, y) of that point is:

β_l(x, y) = DR(x, y) / (DL(x, y) + DR(x, y))

when the pixel values of the first image and the second image at point P are I_l(x, y) and I_r(x, y) respectively, the pixel value I_c(x, y) of the target fused image at that point is:

I_c(x, y) = β_l(x, y) × I_l(x, y) + (1 − β_l(x, y)) × I_r(x, y)

wherein c ∈ {R, G, B} denotes the red, green and blue color channel components, and β_l(x, y) is the weight coefficient.
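The weighted-gradient fusion can be sketched as follows; the weight β_l = DR/(DL + DR) is a reconstruction (the source formula is a lost image) chosen so that the weight of the first image is 1 at the boundary nearest to it, and the DL/DR distance maps are assumed to be available from registration:

```python
import numpy as np

def fuse_overlap(left, right, dl, dr):
    """Fusion sketch for the overlap region: beta_l = DR / (DL + DR)
    (reconstructed weight), applied identically to each of the R, G, B
    channels. `left`/`right` are (H, W, 3) arrays for the first and
    second images; `dl`/`dr` are (H, W) distance maps to the left and
    right overlap boundaries (assumed precomputed)."""
    beta = dr / (dl + dr)                 # weight of the first image
    beta = beta[..., None]                # broadcast over color channels
    return beta * left + (1.0 - beta) * right
```

At DL = 0 the output is purely the first image, and the blend shifts linearly toward the second image as the pixel approaches the right boundary.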
The image fusion processing is performed on the overlapped region of two adjacent images after registration. "Registration" here means the splicing and matching of the two images: since the two cameras acquire their images separately, the images must be spliced and matched before being displayed as one, so that identical content coincides.
The invention can enhance image contrast without causing image distortion and can improve the color expressiveness of the image; when an image is overexposed, the overexposed portion can be corrected without darkening the normal portion; when a dark area exists in an image, it can be brightened to bring out detail information while the normally lit portion is not over-brightened and no useful information is lost.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (2)
1. A vehicle-mounted image processing method with standard definition input and high definition output, used for processing a panoramic image spliced and synthesized from images captured simultaneously by a plurality of vehicle-mounted standard definition cameras, characterized in that:
the synthesized panoramic image comprises two types of areas, wherein the first type of area is an area formed by single images shot by each camera; the second type of area is an area formed by overlapping images shot by two adjacent cameras;
the method comprises image enhancement processing and image fusion processing; wherein the image enhancement processing is for a first type of region; the image fusion process is directed to a second type of region;
wherein the image enhancement processing includes:
step one, inputting an RGB color image, and performing the following processing on each of the R, G and B channel images separately;
step two, performing gray-stretch correction on the image;
step three, calculating the brightness coefficient and the adjustment coefficient of the whole image;
step four, dividing the image into M×M sub-block images, wherein M is a positive integer greater than or equal to 2;
step five, calculating the brightness coefficient and the adjustment coefficient of each sub-block image;
step six, correcting the adjustment coefficient of each sub-block image according to the overall image adjustment coefficient;
step seven, calculating an adjustment coefficient for each pixel;
step eight, adjusting the brightness of each pixel;
in step two, the gray-stretch correction is performed on the image according to the following formula 1:

I'(x, y) = (I(x, y) − min(I)) / (max(I) − min(I))   (formula 1)

wherein x and y respectively denote the horizontal and vertical distances from a pixel in the image to the image's coordinate origin; I(x, y) denotes the brightness value of pixel (x, y) in the image; I'(x, y) denotes the brightness value of pixel (x, y) after being processed by formula 1; max(I) and min(I) respectively denote the brightness values of the brightest and darkest pixels in the image;
in the third step, the overall brightness coefficient of the image is calculated according to the following formula 2:

L_total = (1 / (w × h)) × Σx Σy I'(x, y)    (formula 2)

in formula 2, L_total represents the overall brightness coefficient of the image, the summation runs over all pixel points, w and h represent the image width and height, and I'(x, y) has the same meaning as in formula 1;
in the third step, the overall image adjustment coefficient is calculated according to the following formula 3:

α_total = L_total / 0.5    (formula 3)

in formula 3, α_total represents the overall image adjustment coefficient, and L_total has the same meaning as in formula 2;
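Formulas 2 and 3 reduce to a mean and a ratio against mid-gray; a sketch under the assumption that the brightness coefficient is the mean of the stretched channel (function names are illustrative):

```python
import numpy as np

def brightness_coefficient(stretched):
    """Formula 2: mean brightness of a channel already stretched to [0, 1]."""
    return float(stretched.mean())

def adjustment_coefficient(l_total):
    """Formula 3: ratio of the brightness coefficient to mid-gray 0.5."""
    return l_total / 0.5
```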
in the fifth step, the brightness coefficient of each sub-block image is calculated according to formula 2, and the adjustment coefficient of each sub-block image is then calculated according to formula 3, giving the adjustment coefficient α_piece of each image sub-block;
in the sixth step, the adjustment coefficient of each sub-block image is corrected according to the overall image adjustment coefficient in the following manner:

when α_piece > 5 × α_total, then α'_piece = 5 × α_total; when α_piece < 0.2 × α_total, then α'_piece = 0.2 × α_total; otherwise α'_piece = α_piece;

wherein α_piece denotes the sub-block adjustment coefficient before correction, α'_piece denotes the sub-block adjustment coefficient after correction, and α_total represents the overall image adjustment coefficient;
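The step-six correction acts as a clamp of each sub-block gain to a band around the global gain; a sketch assuming the "otherwise" branch keeps the sub-block's own coefficient (the garbled original makes that branch an assumption):

```python
def correct_subblock(alpha_piece, alpha_total):
    """Step six: clamp a sub-block gain to [0.2, 5] times the global gain."""
    return min(max(alpha_piece, 0.2 * alpha_total), 5.0 * alpha_total)
```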
in the seventh step, the adjustment coefficient of each pixel point is calculated from the corrected sub-block adjustment coefficients α'_piece according to the following formula 4:

α'(x, y) = (1 / N) × Σ (i = a1 to a2) Σ (j = b1 to b2) α(i, j)    (formula 4)

wherein α(x, y) equals the corrected adjustment coefficient α'_piece of the sub-block to which the (x, y) pixel point belongs, and x, y have the same meaning as in formula 1; α'(x, y) represents the adjustment coefficient of the (x, y) pixel point;

a1 and a2 represent the lower and upper limits of the horizontal neighbourhood of pixel (x, y), and b1 and b2 represent the lower and upper limits of the vertical neighbourhood of pixel (x, y);

wherein a1 = x − w/(2 × M), a2 = x + w/(2 × M), b1 = y − h/(2 × M), b2 = y + h/(2 × M); h represents the image height, w represents the image width, and M has the same meaning as in step four;

N = (a2 − a1 + 1) × (b2 − b1 + 1) represents the number of pixel points within the neighbourhood;
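Formula 4 is a sliding-window average of the per-pixel sub-block coefficient map; a direct, unoptimized sketch, with windows truncated at the image border (a border policy the patent text does not spell out):

```python
import numpy as np

def pixel_coefficients(alpha_map, M):
    """Formula 4: average the coefficient map over a (w/M) x (h/M)
    neighbourhood centred on each pixel. alpha_map is indexed [y, x]."""
    h, w = alpha_map.shape
    rh, rw = max(h // (2 * M), 1), max(w // (2 * M), 1)
    out = np.empty_like(alpha_map, dtype=np.float64)
    for y in range(h):
        b1, b2 = max(y - rh, 0), min(y + rh, h - 1)
        for x in range(w):
            a1, a2 = max(x - rw, 0), min(x + rw, w - 1)
            out[y, x] = alpha_map[b1:b2 + 1, a1:a2 + 1].mean()
    return out
```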
in the eighth step, the brightness of each pixel point is adjusted according to the following formula 5:

I''(x, y) = I'(x, y) / α'(x, y)    (formula 5)

wherein I'(x, y) represents the brightness value of the (x, y) pixel point after processing by formula 1, I''(x, y) represents the brightness value of the (x, y) pixel point after processing by formula 5, and α'(x, y) has the same meaning as in formula 4;
after the adjustment is finished, outputting an image after the enhancement processing;
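The enhancement stage (steps two to eight) for a single channel can be sketched end to end. The exact form of formula 5 is not recoverable from this text, so applying α'(x, y) as a divisor of the stretched brightness, which pulls local mean brightness toward mid-gray 0.5, is an assumption, as is the even M × M block split:

```python
import numpy as np

def enhance_channel(channel, M=4):
    """Steps two to eight for one color channel (formula 5 form assumed)."""
    c = channel.astype(np.float64)
    lo, hi = c.min(), c.max()
    stretched = np.zeros_like(c) if hi == lo else (c - lo) / (hi - lo)   # formula 1

    h, w = stretched.shape
    a_total = stretched.mean() / 0.5                                     # formulas 2-3

    # Formulas 2-3 per sub-block, then the step-six clamp to [0.2, 5] x global.
    alpha_map = np.empty_like(stretched)
    ys = np.linspace(0, h, M + 1, dtype=int)
    xs = np.linspace(0, w, M + 1, dtype=int)
    for i in range(M):
        for j in range(M):
            block = stretched[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            a = block.mean() / 0.5
            alpha_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = \
                min(max(a, 0.2 * a_total), 5.0 * a_total)

    # Formula 4: average alpha over a (w/M) x (h/M) window around each pixel.
    rh, rw = max(h // (2 * M), 1), max(w // (2 * M), 1)
    alpha_px = np.empty_like(alpha_map)
    for y in range(h):
        for x in range(w):
            win = alpha_map[max(y - rh, 0):y + rh + 1, max(x - rw, 0):x + rw + 1]
            alpha_px[y, x] = win.mean()

    return stretched / np.maximum(alpha_px, 1e-6)                        # formula 5 (assumed)
```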
the image fusion processing is performed on the overlapping area of two adjacent images in a position-weighted, gradual-transition manner; the color of each pixel in the overlapping area is determined by the colors of the two adjacent images at that point; defining the two adjacent images as a first image and a second image, a pixel in the overlapping area that is closer to the first image takes a color closer to that of the first image, and a pixel closer to the second image takes a color closer to that of the second image; the overlapping area thus transitions gradually from the first image to the second image through position-weighted fusion, realizing the fusion of the two adjacent images.
2. The image processing method according to claim 1, characterized in that the image fusion processing comprises:
defining the closest distance from a point P(x, y) in the overlapping region to the left boundary as DL(x, y), and the closest distance to the right boundary as DR(x, y), wherein x and y respectively represent the horizontal and vertical distances from the pixel point to the origin of the image coordinates;
the weight coefficient β_l(x, y) of that point is:

β_l(x, y) = DR(x, y) / (DL(x, y) + DR(x, y))
when the pixel values of the first image and the second image at point P are I_l(x, y) and I_r(x, y) respectively, the pixel value I_c(x, y) of the target fused image at that point is:
I_c(x, y) = β_l(x, y) × I_l(x, y) + (1 − β_l(x, y)) × I_r(x, y)
wherein c ∈ {R, G, B} denotes the red, green and blue color channel components, and β_l(x, y) is the weight coefficient.
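A sketch of the claim-2 blend over the whole overlap region; the weight β_l = DR/(DL + DR) is reconstructed from the claim text so that a pixel on the left boundary (DL = 0) keeps the first image's color, and the array shapes are assumptions:

```python
import numpy as np

def fuse_overlap(left, right, dl, dr):
    """Position-weighted blend of the overlap region.
    left, right: (H, W, 3) pixel arrays; dl, dr: (H, W) boundary distances."""
    beta = dr / (dl + dr)                          # per-pixel weight of first image
    return beta[..., None] * left + (1.0 - beta[..., None]) * right
```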
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810112289.4A CN108364263B (en) | 2018-02-05 | 2018-02-05 | Vehicle-mounted image processing method for standard definition input and high definition output |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108364263A CN108364263A (en) | 2018-08-03 |
CN108364263B true CN108364263B (en) | 2022-01-07 |
Family
ID=63004612
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810112289.4A Active CN108364263B (en) | 2018-02-05 | 2018-02-05 | Vehicle-mounted image processing method for standard definition input and high definition output |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108364263B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110099268B (en) * | 2019-05-28 | 2021-03-02 | 吉林大学 | Blind area perspective display method with natural color matching and natural display area fusion |
CN110880003B (en) * | 2019-10-12 | 2023-01-17 | 中国第一汽车股份有限公司 | Image matching method and device, storage medium and automobile |
CN110753217B (en) * | 2019-10-28 | 2022-03-01 | 黑芝麻智能科技(上海)有限公司 | Color balance method and device, vehicle-mounted equipment and storage medium |
CN111080519A (en) * | 2019-11-28 | 2020-04-28 | 常州新途软件有限公司 | Automobile panoramic all-around view image fusion method |
CN115514886A (en) * | 2022-09-02 | 2022-12-23 | 扬州航盛科技有限公司 | Vehicle-mounted low-cost reversing image adjusting system and method |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101510305A (en) * | 2008-12-15 | 2009-08-19 | 四川虹微技术有限公司 | Improved self-adapting histogram equilibrium method |
CN102881016A (en) * | 2012-09-19 | 2013-01-16 | 中科院微电子研究所昆山分所 | Vehicle 360-degree surrounding reconstruction method based on internet of vehicles |
CN103988499A (en) * | 2011-09-27 | 2014-08-13 | 爱信精机株式会社 | Vehicle surroundings monitoring device |
CN104156921A (en) * | 2014-08-08 | 2014-11-19 | 大连理工大学 | Self-adaptive low-illuminance or non-uniform-brightness image enhancement method |
CN105069749A (en) * | 2015-07-22 | 2015-11-18 | 广东工业大学 | Splicing method for tire mold images |
CN105245785A (en) * | 2015-08-10 | 2016-01-13 | 深圳市达程科技开发有限公司 | Brightness balance adjustment method of vehicle panoramic camera |
US9479706B2 (en) * | 2012-02-15 | 2016-10-25 | Harman Becker Automotive Systems Gmbh | Brightness adjustment system |
CN106940892A (en) * | 2017-03-21 | 2017-07-11 | 深圳智达机械技术有限公司 | A kind of intelligent vehicle control loop based on image registration |
CN107169938A (en) * | 2017-05-24 | 2017-09-15 | 深圳市华星光电技术有限公司 | Brightness control system |
CN107330872A (en) * | 2017-06-29 | 2017-11-07 | 无锡维森智能传感技术有限公司 | Luminance proportion method and apparatus for vehicle-mounted viewing system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080253685A1 (en) * | 2007-02-23 | 2008-10-16 | Intellivision Technologies Corporation | Image and video stitching and viewing method and system |
Non-Patent Citations (2)
Title |
---|
A Survey of Enhancement Methods for Unevenly Illuminated Images; Liang Lin et al.; Application Research of Computers; May 2010; full text * |
Image Enhancement Algorithm Based on Histogram Stretching and Its Implementation; Zhu Zhongqiu et al.; Information Technology; May 2009; full text * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||