CN102196242A - Self-adaptive scene image auxiliary system with image enhancing function - Google Patents

Self-adaptive scene image auxiliary system with image enhancing function

Info

Publication number
CN102196242A
Authority
CN
China
Prior art keywords
image
scene
image sensor
unit
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100387557A
Other languages
Chinese (zh)
Inventor
王炳立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN2011100387557A priority Critical patent/CN102196242A/en
Publication of CN102196242A publication Critical patent/CN102196242A/en
Pending legal-status Critical Current

Abstract

The invention discloses a self-adaptive scene image auxiliary system with an image enhancing function, comprising an image sensor, a scene decision unit, a calibration unit, an image processing unit and an output unit. The image sensor is installed on a monitored object and is used for acquiring image information of the surrounding environment of the monitored object; the scene decision unit is used for generating scene configuration parameters according to movement information of the monitored object and/or operation intention information of the operator; the calibration unit is used for calibrating the image sensor and generating calibration parameters of the image sensor; the image processing unit is connected with the image sensor, the scene decision unit and the calibration unit respectively and is used for receiving and processing the image information transmitted by the image sensor; and the output unit is connected with the image processing unit. The self-adaptive scene image auxiliary system with the image enhancing function disclosed by the invention can provide self-adaptive scene changes and enhance the images according to the movement tendency of the monitored object, so as to provide more effective information.

Description

Self-adaptive scene image auxiliary system with image enhancement function
Technical field
The present invention relates to a self-adaptive scene image auxiliary system with an image enhancement function for use in mobile machinery systems.
Background technology
In many mobile-machinery operations, such as motor vehicles travelling, ships entering port, or loading machines such as tower cranes grasping, moving or arranging goods, the operator must pay close attention to the surrounding environment, especially when the permitted range of movement is narrow, to prevent the moving object from touching, colliding with or scraping against its surroundings.
A common example is the driver, especially the driver of an oversize vehicle: the car body creates a series of blind areas, and the limited viewing angles of the rear-view and side mirrors leave further viewing blind zones. Because of these blind areas, high demands are placed on the driver's sense of the space around the car body; when driving through narrow areas, or when parking or reversing, the driver can easily cause scratches or collisions, and thus danger, because the blind areas prevent comprehensive observation.
A common solution is the combination of ultrasonic reversing radar and a reversing camera. The reversing radar can detect the distance to obstacles in the blind area reasonably well and indicate the detection results by sound and graphics, giving the driver limited prompts. However, ultrasonic radar is clearly not intuitive, and because the number of ranging probes is limited it cannot perform reliable detection at all angles; moreover, since ranging carries a certain precision error, this kind of ranging method cannot provide accurate and reliable distances.
Another solution is the reversing camera: a wide-angle camera installed at the tail of the vehicle captures and displays the rear view, and can also work together with ranging radar to collect and display further images. But this method has an obvious drawback: it only captures and displays the image behind the vehicle in real time and cannot provide a panoramic image, so it fails when the vehicle is not in a reversing state.
With the development of image processing technology, panoramic reversing systems have appeared: such a system corrects the distortion of the images from several cameras, stitches them together and displays the result to the driver, helping the driver assess the surrounding environment. This method effectively solves the problem of the field of view around the vehicle, as in the technical scheme disclosed by Chinese patent 200710106237.8.
However, the technical scheme of the above patent still leaves room for improvement: it uses only three channels of image acquisition, so when images of a large vehicle or a trailer are synthesized, the field-of-view and resolution requirements on the optical modules of the image sensors increase; the patent does not mention image distortion correction, and when wide-angle cameras are used, uncorrected distorted images degrade the spatial relationships observable by the viewer; furthermore, the above patent cannot highlight the direction and speed of travel or the driver's steering tendency, nor does it enhance the image in fog, rain or poor lighting conditions, so that the operator cannot adaptively obtain a clear and effective image.
Summary of the invention
The object of the present invention is to remedy the above defects of the prior art by providing a self-adaptive scene image auxiliary system with an image enhancement function, so as to provide all-round peripheral image information; the system can adaptively change the scene according to the movement tendency of the monitored object and enhance the images, so as to provide more effective information.
The object of the invention is achieved through the following technical solution. The self-adaptive scene image auxiliary system with an image enhancement function comprises the following units: image sensors, installed on the monitored object and used to obtain image information of the environment surrounding the monitored object, any two adjacent image sensors having overlapping fields of view, the monitored object being the equipment whose surrounding environment the system is required to monitor; a scene decision unit, which generates scene configuration parameters according to the movement information of the monitored object and/or the operation intention information of the operator; a calibration unit, used to calibrate the image sensors and generate their calibration parameters; an image processing unit, connected to the image sensors, the scene decision unit and the calibration unit respectively, which receives the image information transmitted by the image sensors and processes it according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters transmitted by the calibration unit; and an output unit, connected to the image processing unit and used to output the image signal transmitted by the image processing unit.
In the above self-adaptive scene image auxiliary system with an image enhancement function, the movement information of the monitored object and/or the operation intention information of the operator comprises a speed signal, a gear signal, a steering tendency signal, an acceleration signal and a turn indicator signal.
In the above system, the scene configuration parameters comprise an optimal scene viewpoint and an image scale.
In the above system, the scene decision unit chooses the optimal scene viewpoint adaptively: the viewpoint direction is determined from the direction of the movement tendency and the viewpoint distance from the movement speed, and the viewpoint direction and viewpoint distance together constitute an adaptive viewing angle.
In the above system, the scene decision unit presets several fixed positions as optimal scene viewpoints and selects one of them according to the direction of motion and the steering tendency of the monitored object.
In the above system, the image scale is either a constant or a function of distance.
In the above system, the image processing unit comprises: an image acquisition unit, connected to the image sensors, which collects the image information of the surroundings of the monitored object obtained by the image sensors; an image distortion correction unit, connected to the image acquisition unit and the calibration unit, which corrects the distortion of the image information from the image sensors according to the calibration parameters transmitted by the calibration unit; an image anti-shake and enhancement unit, connected to the image distortion correction unit, which applies electronic anti-shake and image enhancement algorithms to the image information from the image distortion correction unit, so as to improve the quality of the synthesized image; a view transformation unit, connected to the image anti-shake and enhancement unit, the scene decision unit and the calibration unit respectively, which processes the image information from the image anti-shake and enhancement unit according to the scene configuration parameters from the scene decision unit and the calibration parameters from the calibration unit, so as to form image information for a given viewing angle; and an image fusion unit, connected to the view transformation unit, which fuses the images obtained by the image sensors with the image of the automobile, forming a composite image of the automobile and its surroundings.
In the above system, the image anti-shake and enhancement unit comprises an image anti-shake module and an image enhancement module; the image anti-shake module is connected to the image distortion correction unit and uses an electronic anti-shake algorithm to eliminate inter-frame image blur; the image enhancement module is connected to the image anti-shake module and uses image enhancement algorithms to improve image quality.
In the above system, the image anti-shake module comprises a motion detection module, a motion estimation module and a motion compensation module; the motion detection module is connected to the image distortion correction unit and uses a motion detection algorithm to obtain the positional shift of the image between frames; the motion estimation module is connected to the motion detection module and estimates the effective motion; the motion compensation module is connected to the motion estimation module and compensates the original image according to the calculation result of the motion estimation module, so as to obtain an image from which random jitter has been eliminated.
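The patent names the three modules but prescribes no particular algorithms. As a minimal sketch under that freedom, phase correlation can serve as the motion detection and estimation steps, and a circular shift as the compensation step; the use of NumPy FFTs and integer-only shifts are assumptions of this sketch, not the patent's method:

```python
import numpy as np

def detect_shift(prev, curr):
    # Motion detection/estimation: phase correlation finds the dominant
    # integer translation between two consecutive frames.
    f_prev = np.fft.fft2(prev)
    f_curr = np.fft.fft2(curr)
    cross = f_curr * np.conj(f_prev)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                         # map peak location to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def compensate(curr, dy, dx):
    # Motion compensation: undo the detected jitter shift.
    return np.roll(curr, (-dy, -dx), axis=(0, 1))
```

For a frame jittered by a circular shift of (3, 5) pixels, `detect_shift` recovers (3, 5) and `compensate` restores the previous frame exactly.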
In the above system, the image enhancement module comprises an image brightness adjustment module, an image contrast enhancement module and an image outlining module; the image brightness adjustment module is connected to the motion compensation module and corrects the brightness differences between images caused by the inconsistent photometric characteristics of the lenses of the image sensors; the image contrast enhancement module is connected to the image brightness adjustment module and enhances the contrast of the obtained images; the image outlining module is connected to the image contrast enhancement module and outlines image edges and highlights obstacles, so that the image is easier to interpret.
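The three enhancement sub-modules map naturally onto elementary point and gradient operations. The following is one possible realization; the gradient threshold and the white outline color are illustrative choices, not taken from the patent:

```python
import numpy as np

def adjust_brightness(img, target_mean):
    # Brightness adjustment: shift each sensor's image so its mean matches
    # a common target, compensating lens-to-lens photometric differences.
    return np.clip(img + (target_mean - img.mean()), 0.0, 255.0)

def stretch_contrast(img):
    # Contrast enhancement: linear stretch to the full 0..255 range.
    lo, hi = img.min(), img.max()
    return (img - lo) * 255.0 / max(hi - lo, 1e-6)

def outline_edges(img, thresh=30.0):
    # Edge outlining: mark pixels with large gradient magnitude in white,
    # so that obstacle contours stand out in the fused image.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    out = img.astype(float).copy()
    out[mag > thresh] = 255.0
    return out
```

Applied in that order, a frame is first brightness-matched to its neighbors, then stretched, then overlaid with its outlines.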
In the above system, the calibration parameters of an image sensor comprise the lens optical parameters and the distortion parameters of the image sensor, and the attitude and position parameters of the image sensor.
In the self-adaptive scene image auxiliary system with an image enhancement function of the present invention, several image sensors are installed on the monitored object, any two adjacent image sensors having overlapping fields of view, which guarantees that comprehensive image information of the surroundings of the monitored object is obtained without blind areas; the image processing unit processes the image information obtained by each image sensor according to the parameters transmitted by the scene decision unit and the calibration unit, so as to provide effective, intuitive and concrete information, which is output through the output unit.
Description of drawings
The present invention is further described below with reference to the drawings and specific embodiments.
Fig. 1 is a structural diagram of the self-adaptive scene image auxiliary system with an image enhancement function of the present invention.
Fig. 2 shows the position of the optimal scene viewpoint under adaptive scene decision in the present invention.
Fig. 3 shows the positions of the optimal scene viewpoints under non-adaptive scene decision in the present invention.
Fig. 4A~Fig. 4D are schematic diagrams of the fusion regions of the obtained images in the present invention.
Fig. 5A~Fig. 5D are schematic diagrams of calibration templates in the present invention.
Fig. 6 is a schematic diagram of a first embodiment of calibrating the image sensors in the present invention.
Fig. 7 is a schematic diagram of a second embodiment of calibrating the image sensors in the present invention.
Fig. 8 is a structural diagram of the image anti-shake and enhancement unit in the present invention.
Fig. 9 is a structural diagram of the image anti-shake module and the image enhancement module in the present invention.
Embodiment
Specific embodiments of the invention are described in detail below with reference to Fig. 1~Fig. 9, so as to give a further understanding of the content of the present invention.
The self-adaptive scene image auxiliary system with an image enhancement function of the present invention comprises:
Image sensors, installed on the monitored object and used to obtain image information of the environment surrounding the monitored object, any two adjacent image sensors having overlapping fields of view, the monitored object being the equipment whose surrounding environment the system is required to monitor, for example an automobile or a ship;
A scene decision unit, which generates scene configuration parameters according to the movement information of the monitored object and/or the operation intention information of the operator;
A calibration unit, used to calibrate the image sensors and generate their calibration parameters;
An image processing unit, connected to the image sensors, the scene decision unit and the calibration unit respectively, which receives the image information transmitted by the image sensors and processes it according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters transmitted by the calibration unit;
An output unit, connected to the image processing unit and used to output the image signal transmitted by the image processing unit.
In the self-adaptive scene image auxiliary system with an image enhancement function of the present invention, several image sensors are installed on the monitored object, any two adjacent image sensors having overlapping fields of view, which guarantees that comprehensive image information of the surroundings of the monitored object is obtained without blind areas; the image processing unit processes the image information obtained by each image sensor according to the parameters transmitted by the scene decision unit and the calibration unit, so as to provide effective, intuitive and concrete information, which is output through the output unit.
Taking an automobile as the monitored object, the self-adaptive scene image auxiliary system with an image enhancement function of the present invention is described in detail below:
Referring to Fig. 1, the self-adaptive scene image auxiliary system with an image enhancement function of the present embodiment comprises: a plurality of image sensors 1, a scene decision unit 3, a calibration unit 5, an image processing unit 2 and an output unit 4.
The plurality of image sensors 1 are installed on the outer surface of the car body, around its whole perimeter; the image sensors 1 are used to obtain image information of the environment around the vehicle, and any two adjacent image sensors 1 have overlapping fields of view;
The number of image sensors 1 should guarantee that comprehensive image information of the vehicle's surroundings is obtained, without blind areas;
When the image sensors 1 are installed, their fields of view may cover only the environment around the vehicle, or may cover both the environment and part of the car body, i.e. part of the car body is photographed, so as to guarantee that the positional relation between the car body and obstacles can be shown when the images are stitched;
An image sensor 1 may be a color image sensor, a black-and-white image sensor or an infrared image sensor, and these may be combined, for example black-and-white with color, black-and-white with infrared, color with infrared, or black-and-white with color and infrared.
The scene decision unit 3 is connected to the image processing unit 2; the scene decision unit 3 generates scene configuration parameters according to the movement information of the automobile and/or the operation intention information of the operator, and transmits the scene configuration parameters to the image processing unit 2;
The movement information and/or the operation intention information comprises the automobile's speed signal, gear signal, steering tendency signal, acceleration signal and turn indicator signal;
The scene configuration parameters comprise the optimal scene viewpoint and the image scale;
If the automobile is equipped with a speedometer and a steering angle sensor, the scene decision unit 3 can choose the optimal scene viewpoint adaptively from the movement information they transmit: the viewpoint direction is determined from the direction of the movement tendency and the viewpoint distance from the movement speed, and together they constitute an adaptive viewing angle. Suppose the speed detected at some moment is v; the optimal scene viewpoint is then located at distance L along the direction of motion, where L = f(v·Ts), Ts being the display time and f(x) a function of the independent variable x; depending on the needs of the actual system, f may be a constant, a piecewise function or an analytic function. L must not exceed the maximum field of view of the image sensors 1 along the direction of motion, and the choice of L is also related to whether obstacles exist within the scene, as shown in Fig. 2;
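The relation L = f(v·Ts) leaves f open. A minimal sketch, assuming a simple piecewise f and illustrative values for the display time Ts and the field-of-view limit L_MAX (both hypothetical, not from the patent):

```python
T_S = 2.0     # display time Ts in seconds (assumed value)
L_MAX = 15.0  # maximum sensor field of view along the motion direction, in meters (assumed)

def viewpoint_distance(v, f=None):
    """Distance L = f(v*Ts) of the optimal scene viewpoint, clamped to L_MAX."""
    x = v * T_S
    if f is None:
        # one piecewise choice of f: never closer than 2 m, otherwise x itself
        fx = 2.0 if x < 2.0 else x
    else:
        fx = f(x)
    return min(fx, L_MAX)   # L must not exceed the sensors' field of view
```

At low speed the viewpoint stays at the 2 m floor, at moderate speed it scales with v·Ts, and at high speed it saturates at the field-of-view limit.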
The scene decision unit 3 may instead select among several fixed points as the optimal scene viewpoint, as shown in Fig. 3: the optimal scene viewpoint is chosen from the fixed points P0~P5 in Fig. 3 according to the direction of motion and the steering tendency of the automobile: when the automobile has no steering tendency and is moving forward, P0 is selected; when it has no steering tendency and is reversing, P4; when it is moving forward and tends to turn right, P2; when it is moving forward and tends to turn left, P1; when it is reversing and tends to turn right, P5; when it is reversing and tends to turn left, P3;
If the automobile has no speed sensor, the gear sensor can determine the current direction of travel: when a forward gear is engaged, the vehicle can be judged to be moving forward, or the operator to intend to move forward; when reverse gear is engaged, the vehicle can be judged to be reversing, or the current operator to intend to reverse;
If the automobile has no steering-wheel angle sensor, the steering tendency can be judged from the state of the turn indicators: when the left indicator is on, the automobile is judged to tend to turn left; when the right indicator is on, the automobile is judged to tend to turn right;
If the automobile is equipped with an acceleration sensor whose sensitive axis is perpendicular to the longitudinal axis of the car body and lies in the horizontal plane, the steering tendency can be judged from the output of the acceleration sensor: the acceleration signal is first filtered to remove noise; when the filtered signal shows an acceleration to the left, the automobile is judged to tend to turn left; when it shows an acceleration to the right, the automobile is judged to tend to turn right; when the filtered signal is below a certain threshold, the automobile is judged to have no effective steering tendency;
If the automobile has several steering-tendency sensors, for example a turn indicator sensor, a steering-wheel angle sensor and an acceleration sensor at the same time, the steering tendency can be judged by combining the sensor information, or by a priority order; one usable priority is: the turn indicator signal takes precedence over the steering-wheel angle sensor information, which in turn takes precedence over the acceleration sensor;
If no information at all is available for the decision, the scene decision unit 3 selects the top viewpoint; PT in Fig. 3 is the default top viewpoint, located directly above the center of the automobile;
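The viewpoint table and the sensor priority described above can be sketched as follows; the sign conventions (e.g. a positive wheel angle meaning a left turn) and the threshold values are assumptions of this sketch, not specified by the patent:

```python
def steering_tendency(indicator=None, wheel_angle=None, accel=None,
                      angle_thresh=5.0, accel_thresh=0.5):
    # Priority: turn indicator > steering-wheel angle > lateral acceleration.
    if indicator in ("left", "right"):
        return indicator
    if wheel_angle is not None and abs(wheel_angle) > angle_thresh:
        return "left" if wheel_angle > 0 else "right"
    if accel is not None and abs(accel) > accel_thresh:
        return "left" if accel > 0 else "right"
    return None   # below threshold: no effective steering tendency

def select_viewpoint(direction, turn):
    # direction: "forward", "backward" or None; turn: "left", "right" or None.
    table = {("forward", None): "P0", ("backward", None): "P4",
             ("forward", "right"): "P2", ("forward", "left"): "P1",
             ("backward", "right"): "P5", ("backward", "left"): "P3"}
    return table.get((direction, turn), "PT")   # PT: default top viewpoint
```

When no sensor yields usable information, the lookup falls through to PT, matching the default top viewpoint of Fig. 3.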
Besides adaptively switching scenes, the scene decision unit 3 can also switch and select scenes manually according to the needs of the operator (in this embodiment, the driver);
Besides adjusting the scene, the scene decision unit 3 can also change the image scale information. If the image scale is a constant, the scales of the fusion regions A, B, C and D in the fused image are all the same, as shown in Fig. 4A;
When the field of view is large, because the size of the output unit 4 is limited, the image scale may be chosen not as a constant but as a function of distance, so that regions farther from the automobile are compressed more strongly while regions near the automobile are compressed less, as shown in Fig. 4B~Fig. 4D; the thick arrows in Fig. 4B~Fig. 4D indicate the viewing direction, and the thin arrows indicate the movement tendency of the automobile.
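As a sketch of a distance-dependent scale of the kind described, one hyperbolic choice (the coefficients s0 and k are illustrative, not from the patent):

```python
def image_scale(d, s0=1.0, k=0.15, constant=None):
    """Scale applied at distance d from the automobile.

    With constant set, every fusion region uses the same scale (Fig. 4A);
    otherwise the scale decays with distance, so far regions are compressed
    more strongly than near ones (Fig. 4B-4D).
    """
    if constant is not None:
        return constant
    return s0 / (1.0 + k * d)
```

The function is monotonically decreasing in d, so pixels near the vehicle keep close to full scale while distant regions shrink to fit the output unit.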
The calibration unit 5 is connected to the image processing unit 2 and transmits the calibration parameters of the image sensors 1 to the image processing unit 2;
The calibration parameters of an image sensor 1 comprise its lens optical parameters and distortion parameters, and its attitude and position parameters; in the field of machine vision, the lens optical parameters and distortion parameters of an image sensor are known as intrinsic parameters, and its attitude and position parameters are known as extrinsic parameters;
If the number of image sensors 1 installed on the automobile is greater than 1, every image sensor 1 must be calibrated;
After the image sensors 1 are installed, and before the system is used to monitor the movement of the automobile, each image sensor 1 is first calibrated to obtain its lens optical parameters, its distortion parameters, and its attitude and position parameters;
Calibrating an image sensor 1 comprises the following steps:
Step 1: calculate the lens optical parameters and the distortion parameters of the image sensor 1;
Step 1.1: the image sensor 1 obtains image information of the calibration template from different orientations and attitudes;
The calibration template is a rectangular plate whose surface bears a planar, dot or line pattern. The pattern on the surface of the calibration template may consist of polygons, for example discrete squares, as shown in Fig. 5A and Fig. 5B; of straight-line or curve features, for example a checkerboard or a grid, as shown in Fig. 5C; or of corner-point features, for example round dots, as shown in Fig. 5D;
The pattern on the surface of the calibration template and the size of the calibration template are preset;
Image information of the calibration template is obtained from at least 4 different orientations and attitudes;
Step 1.2: from the obtained image information of the calibration template, the lens optical parameters and the distortion parameters of the image sensor 1 are calculated with a camera calibration method;
The camera calibration method may be a nonlinear-model or a linear-model camera calibration method;
For example, the nonlinear camera calibration method based on the radial alignment constraint (RAC); reference: Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pages 323-344;
Or Zhang Zhengyou's camera calibration algorithm based on a 2D target; reference: Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330-1334, 2000;
Or the calibration method for catadioptric and fisheye cameras; reference: D. Scaramuzza, "Omnidirectional Vision: from Calibration to Robot Motion Estimation", ETH Zurich, Thesis no. 17635, 2008;
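The cited methods estimate, among the intrinsic parameters, radial distortion coefficients. A minimal sketch of the polynomial radial model commonly used with such calibrations, with the inverse needed for distortion correction computed by fixed-point iteration (the coefficient values k1, k2 are hypothetical):

```python
def distort(x, y, k1=-0.28, k2=0.08):
    # Normalized image coordinates -> distorted coordinates under the
    # two-term polynomial radial distortion model.
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * s, y * s

def undistort(xd, yd, k1=-0.28, k2=0.08, iters=50):
    # Invert the model by fixed-point iteration (there is no closed form):
    # repeatedly divide the distorted point by the scale of the current guess.
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / s, yd / s
    return x, y
```

For moderate distortion the iteration converges quickly, so a distorted point round-trips back to its original normalized coordinates.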
Step 2: calculate the attitude and position parameters of the image sensor 1;
Step 2.1: determine the calibration area;
One calibration area corresponds to one image sensor 1;
A reference parking position is set within the calibration area; the reference parking position is delimited by several reference parking lines whose shape and coordinates are preset; during calibration the automobile is parked at this reference parking position, whose shape matches the outline of the automobile so that the automobile can easily be parked accurately;
Step 2.2: park the automobile at the reference parking position;
During calibration the automobile is parked at the reference parking position and its position is kept unchanged;
Step 2.3: place the calibration template in the calibration area; the image sensor 1 obtains image information of the calibration template;
The calibration template is placed with a preset attitude and position, e.g. horizontally, vertically, tilted, or in a combination of these attitudes;
Step 2.4 according to the image information of the described calibrating template that obtains, adopts described camera calibration method to calculate the attitude and the location parameter of described imageing sensor 1;
Promptly adopt described camera calibration method to calculate attitude rotation amount and the displacement of each described imageing sensor 1 with respect to described calibrating template;
Because described calibrating template all is known in described position and attitude of demarcating in the zone, the dimension information of automobile can be measured in advance, the position of each described imageing sensor 1 and attitude all can be calculated like this, so just can obtain the attitude parameter and the displacement parameter of each described imageing sensor 1;
Do not stop reference line if described benchmark is set in the demarcation zone, described demarcating module is placed in the overlapping region of adjacent two described imageing sensors 1, make described two imageing sensors 1 can both photograph the pattern of this calibrating template simultaneously, equally, the attitude of described calibrating template and position all are predefined, timing signal carries out the location in twos of each described imageing sensor 1, then can calculate the attitude parameter and the displacement parameter of each described imageing sensor successively;
Figure 6 shows a first example of the calibration area. The reference parking lines 61 delimit the reference parking position 62 (in Fig. 6 the intersection point 62 of two reference parking lines is called the reference point). The reference parking position lies within the calibration area 6; the automobile is parked in the reference parking position, and one calibration module A1, A2, A3, A4 is placed in each of the four directions in front of, behind, to the left of and to the right of the automobile. The calibration modules A1, A2, A3, A4 are placed horizontally, and during calibration the image sensors 1 photograph the horizontally placed calibration templates;
Figure 7 shows a second example of the calibration area. The automobile is parked in the reference parking position, and one calibration module A1, A2, A3, A4 is placed in each of the four directions in front of, behind, to the left of and to the right of the automobile. The calibration modules A1, A2, A3, A4 are placed vertically, and during calibration the image sensors 1 photograph the vertically placed calibration templates;
In practice, the calibration templates may be placed vertically, horizontally, or in a combination of specific attitudes, as required.
Continuing to refer to Fig. 1, the image processing unit 2 comprises:
An image acquisition unit 21, connected with the image sensors 1, which acquires the image information of the vehicle's surrounding environment obtained by the image sensors 1;
An image distortion correction unit 22, connected with the image acquisition unit 21 and the calibration unit 5, which performs distortion correction on the image information transmitted by the image sensors 1 according to the lens optical parameters and distortion parameters of the image sensors 1, as well as their attitude and position parameters, transmitted by the calibration unit 5;
The distortion is produced by lens distortion and by imperfections of the image sensor 1 itself; it includes tangential distortion, radial distortion, thin prism distortion and decentering distortion, and is described by the lens optical parameters of the image sensor 1 (which include the distortion parameters caused by imperfections of the optical system) and the distortion parameters of the image sensor;
The image distortion correction unit 22 uses interpolation or convolution methods to perform distortion correction on the image information transmitted by the image sensors 1;
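Distortion correction by interpolation can be sketched as follows in pure NumPy. The single-coefficient radial model and all parameter values here are assumptions for illustration, not the patent's actual model: for each pixel of the corrected image, the sketch finds where that pixel came from in the distorted image and samples it bilinearly:

```python
import numpy as np

def undistort(img, k1, cx, cy):
    # Inverse mapping: for every corrected pixel, evaluate the forward
    # distortion model (here one radial coefficient k1 about center (cx, cy))
    # to find its source position, then sample bilinearly.  Grayscale only.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x, y = xs - cx, ys - cy
    r2 = x * x + y * y
    xd = x * (1.0 + k1 * r2) + cx  # forward model: distorted = ideal*(1+k1*r^2)
    yd = y * (1.0 + k1 * r2) + cy
    # Bilinear interpolation at the (generally fractional) source positions.
    x0 = np.clip(np.floor(xd).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(yd).astype(int), 0, h - 2)
    fx = np.clip(xd - x0, 0.0, 1.0)
    fy = np.clip(yd - y0, 0.0, 1.0)
    return (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy
            + img[y0 + 1, x0 + 1] * fx * fy)
```

With k1 = 0 the mapping is the identity, which gives a simple sanity check; real lenses need the full tangential/radial/thin-prism parameter set the text describes.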
An image anti-shake and image enhancement unit 23, connected with the image distortion correction unit 22, which processes the image information transmitted by the image distortion correction unit 22 using electronic anti-shake and image enhancement algorithms, so as to improve the quality of the composite image;
A view transformation unit 24, connected respectively with the image anti-shake and image enhancement unit 23, the scene decision unit 3 and the calibration unit 5, which processes the image information transmitted by the image anti-shake and image enhancement unit 23 according to the scene configuration parameters transmitted by the scene decision unit 3 and the attitude and position parameters of the image sensors 1 transmitted by the calibration unit 5, so as to form image information for a certain viewing angle;
Specifically, the view transformation unit 24 performs view transformation and image scaling on the image information transmitted by the image anti-shake and image enhancement unit 23, according to the best scene viewpoint and image scale transmitted by the scene decision unit 3 and the attitude and position parameters of the image sensors 1 transmitted by the calibration unit 5, so as to form the image information for a certain viewing angle;
Taking the linear camera model (the pinhole imaging model) as an example, the view transformation is described as follows:

    [u' v' 1]^T = A · R2' · R1'^(-1) · A^(-1) · [u v 1]^T

In this formula, [u v]^T is a pixel before the view transformation, i.e. in the image at the original viewing angle, and [u' v']^T is the corresponding pixel after the view transformation; R1' = [r1 r2 t] is the rotation homography matrix of the original viewpoint, formed from the rotation matrix and the translation vector t of the image sensor position, r1 and r2 being the first and second column vectors of the rotation matrix; R2' is the rotation homography matrix of the new viewpoint, likewise formed from the rotation matrix and translation vector of the new viewpoint position;
A is the internal parameter model of the camera, defined as

    A = | αx  γ   u0 |
        | 0   αy  v0 |
        | 0   0   1  |

αx, αy, u0, v0 and γ are the internal parameters of the linear model:
αx and αy are the scale factors of the u and v axes respectively, also called the effective focal lengths: αx = f/dx, αy = f/dy, where dx and dy are the pixel pitches in the horizontal and vertical directions respectively;
(u0, v0) is the optical center;
γ is the skew factor describing the non-perpendicularity of the u and v axes; in many cases γ = 0;
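The view-transformation formula above can be sketched in NumPy as follows. The intrinsic matrix and poses used in the demo are made-up values (not from the patent), and the [r1 r2 t] homography is exact only for points on the calibration/ground plane z = 0:

```python
import numpy as np

def view_homography(A, R1, t1, R2, t2):
    # H = A * H2 * H1^-1 * A^-1, with Hi = [r1 r2 t] built from the first
    # two rotation columns and the translation; valid for the z = 0 plane.
    H1 = np.column_stack([R1[:, 0], R1[:, 1], t1])
    H2 = np.column_stack([R2[:, 0], R2[:, 1], t2])
    return A @ H2 @ np.linalg.inv(H1) @ np.linalg.inv(A)

def warp_point(H, u, v):
    # Apply the homography to one pixel and dehomogenize.
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Made-up intrinsics and poses for illustration:
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t1 = np.array([0.0, 0.0, 2.0])
t2 = np.array([1.0, 0.0, 2.0])  # new viewpoint shifted 1 unit along x
H = view_homography(A, R, t1, R, t2)
```

A handy sanity check: with identical old and new poses the homography reduces to the identity, leaving every pixel in place.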
An image fusion unit 25, connected with the view transformation unit 24, which fuses the images obtained by each image sensor 1 with the image of the automobile, forming a composite image of the automobile and its surroundings;
The fusion method adopted by the image fusion unit 25 may be direct splicing of the images from two image sensors 1 at a registration position;
The registration position, i.e. the splicing-line position, is determined according to the fields of view and the sharpness ranges of the two image sensors 1, so that the image sharpness from the two image sensors 1 is consistent on both sides of the splicing line; the splicing line may be an arc or a straight line. During image fusion, the fused image may be formed directly from the original images on either side of the splicing line; the splicing line may be displayed in a special color and gray level to help the supervisor locate it, or may be left undisplayed so as not to break up the image;
The fusion method adopted by the image fusion unit 25 may also be a weighted average, at each fusion point, of the images from the two adjacent regions; in this way there is no splicing line in the splicing region and the images correspond better;
To make the fusion position more accurate, the images in the fusion region need to be registered. The registration method is: search for image features near the splicing region, such as corners, contours and edges; use these features to match the images of the two image sensors and find the best matching line; then fuse the images along this best matching line, to avoid ghosting and similar artifacts. Variable-weight fusion may be adopted: within the splicing region, for any point of the fused image, the farther the point is from one image sensor, the smaller the weight of that sensor's image in the fused image, and correspondingly the larger the weight of the image from the other image sensor at the same position;
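The variable-weight rule above can be sketched as a linear weight ramp across the overlap strip. This is a minimal NumPy illustration; a real system would weight by actual distance to each sensor rather than by column index:

```python
import numpy as np

def blend_overlap(left, right):
    # Variable-weight fusion of two same-size overlap strips: the weight of
    # `left` falls linearly from 1 to 0 across the strip, so each output
    # pixel favors the sensor whose image it is nearer to.
    h, w = left.shape
    wgt = np.linspace(1.0, 0.0, w)  # per-column weight of `left`
    return left * wgt + right * (1.0 - wgt)
```

At the left edge of the strip the output equals the left sensor's image, at the right edge the right sensor's, and in between the two cross-fade with no visible splicing line.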
In the fused scene image of the image fusion unit 25, a top plan view of the automobile needs to be added, or a top-down three-dimensional view, or a three-dimensional perspective image consistent with the viewing angle output by the decision unit 3. This image has a certain transparency, so as to show the peripheral image information occluded by the equipment; the transparency of the monitored object can be set manually, and the color of the monitored-object image and the like can also be set;
Besides fusing the image information of each image sensor 1, the image fusion unit 25 can also add auxiliary information such as obstacle distances; signals from external sensors, such as distance information and obstacle information, can be fused into the final panoramic scene image;
The output unit 4 is connected with the image fusion unit 25, and is used to output the image information transmitted by the image fusion unit 25.
Referring to Fig. 8, the image anti-shake and image enhancement unit 23 comprises an image anti-shake module 231 and an image enhancement module 232;
The image anti-shake module 231 is connected with the image distortion correction unit 22, and uses an electronic anti-shake algorithm to eliminate inter-frame image blur;
Because the automobile inevitably vibrates, i.e. the image sensors 1 are constantly shaken, the images taken by an image sensor 1 may shift in position between frames; without image anti-shake this would cause image blur and block errors. The image anti-shake module 231 performs shake detection and elimination, so as to improve the quality of image fusion and splicing;
The image enhancement module 232 is connected with the image anti-shake module 231, and uses image enhancement algorithms to improve image quality.
Referring to Fig. 9, the image anti-shake module 231 comprises a motion detection module 2311, a motion estimation module 2312 and a motion compensation module 2313;
The motion detection module 2311 is connected with the image distortion correction unit 22, and uses a motion detection algorithm to obtain the positional shift of the image between frames;
Commonly used motion detection algorithms include: the projection algorithm (PA), representative point matching (RPM) and bit-plane matching (BPM); with these methods, the translation and rotation, or the zoom level, of the image between frames can be estimated;
The motion estimation module 2312 is connected with the motion detection module 2311, and performs effective motion estimation;
The motion estimation module 2312 estimates the motion parameters between adjacent frames, filters out the influence of random motion to obtain an effective global motion vector, obtains the motion trend of the frame sequence, and derives the actual image shift value, rotation value, scale value and so on;
The motion compensation module 2313 is connected with the motion estimation module 2312, and compensates the original image according to the image shift, rotation and scaling calculated by the motion estimation module 2312, so as to obtain an image with the random shake eliminated;
When the vibration of each image sensor 1 is negligible, the image anti-shake module 231 can be omitted.
Continuing to refer to Fig. 9, the image enhancement module 232 comprises an image brightness adjustment module 2321, an image contrast enhancement module 2322 and an image outlining module 2323;
The image brightness adjustment module 2321 is connected with the motion compensation module 2313, and is used to correct the image brightness differences caused by the inconsistent photometric characteristics of the lenses of the image sensors 1; without correction, these differences would produce regions of obviously inconsistent brightness in the fused image;
The brightness adjustment coefficients can be obtained by measuring the brightness of the calibration scene when the system is used for the first time, or calculated in real time while the system is running; after brightness adjustment, no regions of differing brightness appear in the spliced image;
The image contrast enhancement module 2322 is connected with the image brightness adjustment module 2321, and is used to perform contrast enhancement on the acquired images;
Rain, fog or poor lighting conditions can cause the image sensors 1 to capture blurred, degraded images; the image contrast enhancement module 2322 handles image degradation caused by such conditions, so as to improve the visual effect of the image;
Common image contrast enhancement algorithms include: histogram equalization (HE), adaptive (local) histogram equalization (AHE), partially overlapped sub-block histogram equalization (POSHE), interpolated adaptive histogram equalization, and generalized histogram equalization. These algorithms can adaptively enhance the contrast of local image information, so as to increase image contrast, improve the naked-eye recognizability of degraded images, and improve the visual effect of the image;
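Of the algorithms listed, plain histogram equalization (HE) is the simplest; a minimal NumPy sketch for 8-bit grayscale images:

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization: the cumulative histogram, rescaled to
    # [0, 255], becomes a lookup table that spreads a narrow gray-level
    # range over the full range.  Assumes a non-constant 8-bit image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied bin
    lut = np.clip(np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0),
                  0, 255)
    return lut.astype(np.uint8)[img]
```

An image whose gray levels already span the full range is left essentially unchanged, while a low-contrast image is stretched to cover 0-255; the adaptive variants (AHE, POSHE) apply the same idea per local block.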
The image outlining module 2323 is connected with the image contrast enhancement module 2322, and is used to outline image edges and to highlight and delineate obstacles, so that the image is more recognizable and effectively guides the observer's attention;
During monitoring operation, the operator sometimes needs to observe the system with peripheral vision, so appropriate methods are needed to make the image stand out. Various image edge operators can be used to extract the edge features of the image, and feature detection can be performed on specific patterns according to the characteristics of obstacles, so as to detect specific objects. After an obstacle is detected, according to the position of its edges, the edges are outlined, or a special shape (circle, rectangle, etc.) is used to mark the obstacle in the image. Whether image outlining is enabled, and the color and thickness of the outlines, can be controlled by the operator; special areas can also be delineated according to external sensor data such as obstacle distances;
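Edge outlining as described can start from any standard edge operator; the NumPy sketch below uses the Sobel operator with a hypothetical threshold and returns the mask of pixels the module would redraw in the chosen outline color and thickness:

```python
import numpy as np

def sobel_edges(img, thresh):
    # Gradient-magnitude edge mask with the 3x3 Sobel operator; border
    # pixels are left unmarked.  Grayscale float input.
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):          # accumulate the correlation sums
        for dx in range(3):
            patch = img[dy:h - 2 + dy, dx:w - 2 + dx]
            gx[1:h - 1, 1:w - 1] += kx[dy, dx] * patch
            gy[1:h - 1, 1:w - 1] += ky[dy, dx] * patch
    return np.hypot(gx, gy) > thresh
```

On a vertical brightness step the mask fires on the two columns straddling the step and nowhere else, which is exactly the thin outline the module would then thicken and colorize.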
The image enhancement module 232 can omit some of its sub-modules as required; for example, when edge outlining of the image is not needed, the image outlining module 2323 can be omitted; similarly, the brightness adjustment module 2321 and the image contrast enhancement module 2322 can also be selectively omitted.
The self-adaptive scene image auxiliary system with image enhancement function provided by the invention adaptively decides on the scene around the monitored object, so that the operator can see as much intuitive, effective information as possible, effectively improving safety; in various mobile machinery fields it likewise provides the operator with as much effective information as possible and improves productivity.
The above shows and describes the basic principles, principal features and advantages of the present invention. Those skilled in the art should understand that the present invention is not restricted to the above embodiments, which, together with the description, merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the present invention, and all such changes and improvements fall within the claimed scope of the invention.

Claims (11)

1. A self-adaptive scene image auxiliary system with image enhancement function, characterized by comprising:
image sensors, installed on a monitored object and used to obtain image information of the surroundings of the monitored object, any two adjacent image sensors having mutually overlapping fields of view, wherein the monitored object is the equipment whose surroundings are to be monitored by the system;
a scene decision unit, which generates scene configuration parameters according to motion information of the monitored object and/or operation intent information of an operator;
a calibration unit, used to calibrate the image sensors and generate calibration parameters of the image sensors;
an image processing unit, connected respectively with the image sensors, the scene decision unit and the calibration unit, which receives the image information transmitted by the image sensors, and processes it according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters of the image sensors transmitted by the calibration unit;
an output unit, connected with the image processing unit and used to output the image signal transmitted by the image processing unit.
2. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 1, characterized in that the motion information of the monitored object and/or the operation intent information of the operator comprises a speed signal, a gear signal, a steering trend signal, an acceleration signal and a turn indicator signal.
3. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 1, characterized in that the scene configuration parameters comprise a best scene viewpoint and an image scale.
4. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 3, characterized in that the scene decision unit selects the best scene viewpoint adaptively: the viewpoint direction is determined according to the direction of the motion trend, the viewpoint distance is determined according to the motion speed, and the viewpoint direction and viewpoint distance together constitute the adaptive viewing angle.
5. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 3, characterized in that the scene decision unit presets several fixed positions as best scene viewpoints, and selects one of them as the best scene viewpoint according to the motion direction and steering trend of the monitored object.
6. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 3, characterized in that the image scale is a constant value or a function related to distance.
7. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 1, characterized in that the image processing unit comprises:
an image acquisition unit, connected with the image sensors, which acquires the image information of the surroundings of the monitored object obtained by the image sensors;
an image distortion correction unit, connected with the image acquisition unit and the calibration unit, which performs distortion correction on the image information transmitted by the image sensors according to the calibration parameters of the image sensors transmitted by the calibration unit;
an image anti-shake and image enhancement unit, connected with the image distortion correction unit, which processes the image information transmitted by the image distortion correction unit using electronic anti-shake and image enhancement algorithms, so as to improve the quality of the composite image;
a view transformation unit, connected respectively with the image anti-shake and image enhancement unit, the scene decision unit and the calibration unit, which processes the image information transmitted by the image anti-shake and image enhancement unit according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters of the image sensors transmitted by the calibration unit, so as to form image information for a certain viewing angle;
an image fusion unit, connected with the view transformation unit, which fuses the images obtained by each image sensor with the image of the automobile, forming a composite image of the automobile and its surroundings.
8. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 7, characterized in that the image anti-shake and image enhancement unit comprises an image anti-shake module and an image enhancement module;
the image anti-shake module is connected with the image distortion correction unit, and uses an electronic anti-shake algorithm to eliminate inter-frame image blur;
the image enhancement module is connected with the image anti-shake module, and uses image enhancement algorithms to improve image quality.
9. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 8, characterized in that the image anti-shake module comprises a motion detection module, a motion estimation module and a motion compensation module;
the motion detection module is connected with the image distortion correction unit, and uses a motion detection algorithm to obtain the positional shift of the image between frames;
the motion estimation module is connected with the motion detection module, and performs effective motion estimation;
the motion compensation module is connected with the motion estimation module, and compensates the original image according to the calculation results of the motion estimation module, so as to obtain an image with the random shake eliminated.
10. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 8, characterized in that the image enhancement module comprises an image brightness adjustment module, an image contrast enhancement module and an image outlining module;
the image brightness adjustment module is connected with the motion compensation module, and is used to correct the image brightness differences caused by the inconsistent photometric characteristics of the lenses of the image sensors;
the image contrast enhancement module is connected with the image brightness adjustment module, and is used to perform contrast enhancement on the acquired images;
the image outlining module is connected with the image contrast enhancement module, and is used to outline image edges and to highlight and delineate obstacles, so that the image is more recognizable.
11. The self-adaptive scene image auxiliary system with image enhancement function as claimed in claim 1 or 7, characterized in that the calibration parameters of the image sensors comprise the lens optical parameters of the image sensors, the distortion parameters of the image sensors, and the attitude and position parameters of the image sensors.
CN2011100387557A 2010-02-12 2011-02-11 Self-adaptive scene image auxiliary system with image enhancing function Pending CN102196242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100387557A CN102196242A (en) 2010-02-12 2011-02-11 Self-adaptive scene image auxiliary system with image enhancing function

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201020114851.6 2010-02-12
CN2011100387557A CN102196242A (en) 2010-02-12 2011-02-11 Self-adaptive scene image auxiliary system with image enhancing function

Publications (1)

Publication Number Publication Date
CN102196242A true CN102196242A (en) 2011-09-21

Family

ID=44603532

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2011200394965U Expired - Fee Related CN202035096U (en) 2010-02-12 2011-02-11 Mobile operation monitoring system for mobile machine
CN2011100387557A Pending CN102196242A (en) 2010-02-12 2011-02-11 Self-adaptive scene image auxiliary system with image enhancing function

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN2011200394965U Expired - Fee Related CN202035096U (en) 2010-02-12 2011-02-11 Mobile operation monitoring system for mobile machine

Country Status (1)

Country Link
CN (2) CN202035096U (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103562675A (en) * 2011-05-19 2014-02-05 赫克斯冈技术中心 Optical measurement method and measurement system for determining 3d coordinates on a measurement object surface
CN104065926A (en) * 2014-06-25 2014-09-24 中国移动通信集团广东有限公司 Image enhancement method and system based on wireless high-definition video monitor system
CN105216715A (en) * 2015-10-13 2016-01-06 湖南七迪视觉科技有限公司 A kind of motorist vision assists enhancing system
CN105721793A (en) * 2016-05-05 2016-06-29 深圳市歌美迪电子技术发展有限公司 Driving distance correction method and device
CN105894511A (en) * 2016-03-31 2016-08-24 乐视控股(北京)有限公司 Calibration target setting method and device and parking auxiliary system
CN107115679A (en) * 2017-05-29 2017-09-01 深圳市七布创新科技有限公司 A kind of autonomous system and its control method
CN109190527A (en) * 2018-08-20 2019-01-11 合肥智圣新创信息技术有限公司 A kind of garden personnel track portrait system monitored based on block chain and screen
CN109685746A (en) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Brightness of image method of adjustment, device, storage medium and terminal
CN109767473A (en) * 2018-12-30 2019-05-17 惠州华阳通用电子有限公司 A kind of panorama parking apparatus scaling method and device
CN110519529A (en) * 2019-05-14 2019-11-29 南开大学 A kind of same viewpoint based on optic splice looks around imaging system and imaging method
CN111008985A (en) * 2019-11-07 2020-04-14 贝壳技术有限公司 Panorama picture seam detection method and device, readable storage medium and electronic equipment
CN111435972A (en) * 2019-01-15 2020-07-21 杭州海康威视数字技术股份有限公司 Image processing method and device
CN113506362A (en) * 2021-06-02 2021-10-15 湖南大学 Method for synthesizing new view of single-view transparent object based on coding and decoding network

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105459904B (en) * 2015-11-30 2017-12-22 深圳市灵动飞扬科技有限公司 Turn inside diameter display methods and system
CN109084509A (en) * 2018-06-29 2018-12-25 昆明金域医学检验所有限公司 A kind of medical test stained specimens preservation refrigerator-freezer

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103562675A (en) * 2011-05-19 2014-02-05 赫克斯冈技术中心 Optical measurement method and measurement system for determining 3d coordinates on a measurement object surface
US9628779B2 (en) 2011-05-19 2017-04-18 Hexagon Technology Center Gmbh Optical measurement method and measurement system for determining 3D coordinates on a measurement object surface
CN103562675B (en) * 2011-05-19 2017-04-26 赫克斯冈技术中心 Optical measurement method and measurement system for determining 3d coordinates on a measurement object surface
CN104065926A (en) * 2014-06-25 2014-09-24 中国移动通信集团广东有限公司 Image enhancement method and system based on wireless high-definition video monitor system
CN105216715A (en) * 2015-10-13 2016-01-06 湖南七迪视觉科技有限公司 A kind of motorist vision assists enhancing system
CN105894511B (en) * 2016-03-31 2019-02-12 恒大法拉第未来智能汽车(广东)有限公司 Demarcate target setting method, device and parking assistance system
CN105894511A (en) * 2016-03-31 2016-08-24 乐视控股(北京)有限公司 Calibration target setting method and device and parking auxiliary system
CN105721793B (en) * 2016-05-05 2019-03-12 深圳市歌美迪电子技术发展有限公司 A kind of driving distance bearing calibration and device
CN105721793A (en) * 2016-05-05 2016-06-29 深圳市歌美迪电子技术发展有限公司 Driving distance correction method and device
CN107115679A (en) * 2017-05-29 2017-09-01 深圳市七布创新科技有限公司 A kind of autonomous system and its control method
CN109190527A (en) * 2018-08-20 2019-01-11 合肥智圣新创信息技术有限公司 A kind of garden personnel track portrait system monitored based on block chain and screen
CN109767473A (en) * 2018-12-30 2019-05-17 惠州华阳通用电子有限公司 A kind of panorama parking apparatus scaling method and device
CN109767473B (en) * 2018-12-30 2022-10-28 惠州华阳通用电子有限公司 Panoramic parking device calibration method and device
CN109685746A (en) * 2019-01-04 2019-04-26 Oppo广东移动通信有限公司 Brightness of image method of adjustment, device, storage medium and terminal
CN111435972B (en) * 2019-01-15 2021-03-23 杭州海康威视数字技术股份有限公司 Image processing method and device
CN111435972A (en) * 2019-01-15 2020-07-21 杭州海康威视数字技术股份有限公司 Image processing method and device
CN110519529B (en) * 2019-05-14 2021-08-06 南开大学 Optical splicing-based same-viewpoint all-round-looking imaging system and imaging method
CN110519529A (en) * 2019-05-14 2019-11-29 南开大学 A kind of same viewpoint based on optic splice looks around imaging system and imaging method
CN111008985A (en) * 2019-11-07 2020-04-14 贝壳技术有限公司 Panorama picture seam detection method and device, readable storage medium and electronic equipment
CN113506362A (en) * 2021-06-02 2021-10-15 湖南大学 Method for synthesizing new views of a single-view transparent object based on an encoder-decoder network
CN113506362B (en) * 2021-06-02 2024-03-19 湖南大学 Method for synthesizing new views of a single-view transparent object based on an encoder-decoder network

Also Published As

Publication number Publication date
CN202035096U (en) 2011-11-09

Similar Documents

Publication Publication Date Title
CN202035096U (en) Mobile operation monitoring system for mobile machine
CN102158684A (en) Self-adaptive scene image auxiliary system with image enhancement function
CN102163331A (en) Image-assisting system using calibration method
JP7245295B2 (en) Method and device for displaying the surrounding scene of a vehicle-trailer combination
US9738223B2 (en) Dynamic guideline overlay with image cropping
JP4695167B2 (en) Method and apparatus for correcting distortion and enhancing an image in a vehicle rear view system
US9280824B2 (en) Vehicle-surroundings monitoring device
CN111046743B (en) Barrier information labeling method and device, electronic equipment and storage medium
US10183621B2 (en) Vehicular image processing apparatus and vehicular image processing system
US8842181B2 (en) Camera calibration apparatus
US8199975B2 (en) System and method for side vision detection of obstacles for vehicles
US20150042799A1 (en) Object highlighting and sensing in vehicle image display systems
CN101763640B (en) Online calibration processing method for vehicle-mounted multi-view camera viewing system
US20150109444A1 (en) Vision-based object sensing and highlighting in vehicle image display systems
US20110169957A1 (en) Vehicle Image Processing Method
US20140114534A1 (en) Dynamic rearview mirror display features
US20100245573A1 (en) Image processing method and image processing apparatus
US20100259372A1 (en) System for displaying views of vehicle and its surroundings
JP2018531530A5 (en)
CN103728727A (en) Information display system capable of automatically adjusting visual range and display method of information display system
JP7247173B2 (en) Image processing method and apparatus
US8477191B2 (en) On-vehicle image pickup apparatus
US20090179916A1 (en) Method and apparatus for calibrating a video display overlay
CN111739101B (en) Device and method for eliminating the blind zone of a vehicle A-pillar
US20180056873A1 (en) Apparatus and method of generating top-view image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
DD01 Delivery of document by public notice

Addressee: Wang Bingli

Document name: Notification before Expiration of the Time Limit for Request for Substantive Examination

DD01 Delivery of document by public notice

Addressee: Wang Bingli

Document name: Notification that Application Deemed to be Withdrawn

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110921