CN202035096U - Mobile operation monitoring system for mobile machine - Google Patents
- Publication number: CN202035096U
- Authority: CN
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Abstract
A mobile operation monitoring system for a mobile machine provided in the utility model comprises: an image sensor arranged on a monitored object and used for obtaining image information of the environment surrounding the monitored object; a scene decision unit used for generating a scene configuration parameter according to the movement information of the monitored object and/or the operation intention information of an operator; a calibration unit used for calibrating the image sensor and generating calibration parameters of the image sensor; an image processing unit respectively connected with the image sensor, the scene decision unit and the calibration unit, and used for receiving and processing the image information transmitted by the image sensor; and an output unit connected with the image processing unit. The system can adapt the displayed scene to the movement trend of the monitored object and carry out image enhancement processing, so as to provide more effective information.
Description
Technical field
The utility model relates to a mobile operation monitoring system for mobile machinery used in mobile machinery systems.
Background technology
In many fields of mobile machinery operation, such as motor vehicle driving, ship berthing, and the grasping, moving or arranging of goods by loading machines such as tower cranes, operators must consider the surrounding environment very carefully during movement, especially when the permitted moving range of the moving object is narrow, to prevent touching, collision and scraping while the object moves.
A common example is the driver, especially the driver of an oversized vehicle. A series of blind areas appear because the vehicle body blocks the view, and the limited viewing angles of the rearview and side mirrors also leave viewing blind zones. Because of such blind areas, high demands are placed on the driver's sense of the space around the vehicle body; in particular, when driving through narrow areas, parking or reversing, the driver is prone to scraping and collision, and hence danger, because the blind areas prevent comprehensive observation.
A common solution is the ultrasonic reversing radar together with the reversing camera. The reversing radar can detect the distance to obstacles in the blind area fairly well, and the detection result is indicated to the driver by sound and graphics, which gives limited prompting. However, ultrasonic radar has the obvious problem of not being intuitive; moreover, because the number of ranging probes is limited, reliable detection at all angles in all directions cannot be achieved. In addition, ranging has a certain precision error, so this kind of ranging method cannot provide accurate and reliable distance measurement.
Another solution is the reversing camera. In this method a wide-angle camera installed at the rear of the vehicle captures and displays the rear view, and it can be combined with ranging radar to collect and display further images. But this method has an obvious drawback: it only collects and displays the image behind the vehicle in real time and cannot provide a panoramic image, so it fails when the vehicle is not in a reversing state.
With the development of image processing technology, a panoramic reversing system has recently appeared. This system corrects the distortion of the images from multiple cameras after calibration, splices them together, and displays the result to the driver, to help the driver evaluate the surrounding environment. This method effectively solves the problem of the field of view around the vehicle, as in the technical scheme disclosed in Chinese patent 200710106237.8.
But the technical scheme disclosed in the above patent still leaves room for improvement. It adopts only 3 channels of image acquisition, so when images of an oversized vehicle or a trailer are synthesized, the field-of-view and resolution requirements on the optical modules of the image sensors rise. The patent also does not address image distortion correction; when wide-angle cameras are adopted, uncorrected distorted images worsen the spatial relationships observable by the viewer. Furthermore, the scheme cannot highlight the driving direction, speed and the driver's steering trend in a targeted way, and it has no function for enhancing the image in fog, rain or poor lighting conditions, so the operator cannot adaptively obtain a clear and effective image.
The utility model content
The purpose of the utility model is to improve on the defects of the above prior art and to provide a mobile operation monitoring system for mobile machinery that supplies all-round peripheral image information. The system can adapt the displayed scene to the movement trend of the monitored object and carry out image enhancement processing, so as to provide more effective information.
The purpose of the utility model is achieved through the following technical solution. The mobile operation monitoring system for mobile machinery comprises the following units: image sensors, installed on the monitored object and used to obtain image information of the environment surrounding the monitored object, any two adjacent image sensors having overlapping fields of view, where the monitored object refers to the equipment whose surrounding environment needs to be monitored by this system; a scene decision unit, which generates a scene configuration parameter according to the movement information of the monitored object and/or the operator's operation intention information; a calibration unit, used to calibrate the image sensors and generate their calibration parameters; an image processing unit, respectively connected with the image sensors, the scene decision unit and the calibration unit, which receives the image information transmitted by the image sensors and processes it according to the scene configuration parameter transmitted by the scene decision unit and the calibration parameters transmitted by the calibration unit; and an output unit, connected with the image processing unit and used to output the image signal transmitted by the image processing unit.
In the above mobile operation monitoring system for mobile machinery, the image processing unit comprises: an image acquisition unit, connected with the image sensors, which collects the image information of the environment surrounding the monitored object obtained by the image sensors; an image distortion correction unit, connected with the image acquisition unit and the calibration unit, which corrects distortion in the image information transmitted by the image sensors according to the calibration parameters transmitted by the calibration unit; an image anti-shake and image enhancement unit, connected with the image distortion correction unit, which processes the image information transmitted by the image distortion correction unit with electronic anti-shake and image enhancement algorithms to improve the quality of the synthesized image; a view transformation unit, respectively connected with the image anti-shake and image enhancement unit, the scene decision unit and the calibration unit, which processes the image information transmitted by the image anti-shake and image enhancement unit according to the scene configuration parameter transmitted by the scene decision unit and the calibration parameters transmitted by the calibration unit, to form image information at a given viewing angle; and an image fusion unit, connected with the view transformation unit, which fuses the images obtained by each image sensor with an image of the automobile to form a composite image of the automobile and its surrounding environment.
In the above system, the image anti-shake and image enhancement unit comprises an image anti-shake module and an image enhancement module. The image anti-shake module is connected with the image distortion correction unit and adopts an electronic anti-shake algorithm to eliminate inter-frame image blur; the image enhancement module is connected with the image anti-shake module and adopts image enhancement algorithms to improve image quality.
In the above system, the image anti-shake module comprises a motion detection module, a motion estimation module and a motion compensation module. The motion detection module is connected with the image distortion correction unit and adopts a motion detection algorithm to obtain the positional displacement of the image between different frames; the motion estimation module is connected with the motion detection module and estimates the effective motion; the motion compensation module is connected with the motion estimation module and compensates the original image according to the calculation result of the motion estimation module, to obtain an image with random jitter eliminated.
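One classic way to implement the motion-detection step of such an electronic anti-shake pipeline is phase correlation between consecutive frames. The numpy sketch below handles integer translations only; a real system would also estimate sub-pixel shifts and distinguish intentional camera motion from random jitter, neither of which the patent specifies:

```python
import numpy as np

def estimate_translation(frame_a, frame_b):
    """Estimate the integer (dy, dx) shift such that frame_b equals frame_a
    circularly shifted by (dy, dx), using phase correlation."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                         # map wrap-around peaks to
        dy -= h                             # signed shifts
    if dx > w // 2:
        dx -= w
    return dy, dx

def compensate(frame, dy, dx):
    """Circularly shift the frame by (dy, dx); pass the negated estimate
    to cancel the detected inter-frame motion."""
    return np.roll(frame, (dy, dx), axis=(0, 1))
```

Passing the negated estimate to `compensate` realigns the jittered frame with its predecessor, which is the role of the motion compensation module above.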
In the above system, the image enhancement module comprises an image brightness adjustment module, an image contrast enhancement module and an image outlining module. The image brightness adjustment module is connected with the motion compensation module and is used to correct image brightness differences caused by inconsistent optical characteristics of the lenses of the image sensors; the image contrast enhancement module is connected with the image brightness adjustment module and enhances the contrast of the obtained image; the image outlining module is connected with the image contrast enhancement module and outlines image edges and highlights and delineates obstacles, so that the image is more recognizable.
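The three enhancement modules can be sketched with elementary numpy operations. The stand-ins below (mean-brightness matching, a linear contrast stretch, and gradient-threshold edge outlining) are illustrative choices under stated assumptions, not the patent's own algorithms; images are assumed to be float arrays in [0, 1]:

```python
import numpy as np

def adjust_brightness(img, target_mean):
    """Shift the image so its mean brightness matches target_mean (one way to
    even out brightness differences between sensors)."""
    return np.clip(img + (target_mean - img.mean()), 0.0, 1.0)

def stretch_contrast(img):
    """Linear contrast stretch to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    if hi - lo < 1e-12:
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)

def outline_edges(img, threshold=0.25):
    """Crude gradient-magnitude edge outline, a stand-in for the outlining
    module; the threshold is an illustrative assumption."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(float)
```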
In the mobile operation monitoring system of the utility model, a plurality of image sensors are installed on the monitored object, any two adjacent image sensors having overlapping fields of view, to guarantee that image information of the environment surrounding the monitored object is obtained comprehensively and without blind areas. The image processing unit processes the image information obtained by each image sensor according to the parameters transmitted by the scene decision unit and the calibration unit, to provide effective, intuitive and concrete information, which is output through the output unit.
Description of drawings
The utility model is further described below in conjunction with the drawings and specific embodiments.
Fig. 1 is a structural representation of the mobile operation monitoring system for mobile machinery of the utility model.
Fig. 2 is a position view of the best scene viewpoint under adaptive scene decision in the utility model.
Fig. 3 is a position view of the best scene viewpoint under non-adaptive scene decision in the utility model.
Fig. 4A~Fig. 4D are schematic diagrams of the fusion regions of the obtained image in the utility model.
Fig. 5A~Fig. 5D are schematic diagrams of calibration templates in the utility model.
Fig. 6 is a schematic diagram of embodiment one of calibrating the image sensors in the utility model.
Fig. 7 is a schematic diagram of embodiment two of calibrating the image sensors in the utility model.
Fig. 8 is a structural representation of the image anti-shake and image enhancement unit in the utility model.
Fig. 9 is a structural representation of the image anti-shake module and the image enhancement module in the utility model.
Embodiment
Specific embodiments of the utility model are described in detail below with reference to Fig. 1~Fig. 9, to further explain the content of the utility model.
The mobile operation monitoring system for mobile machinery of the utility model comprises:
image sensors, installed on the monitored object and used to obtain image information of the environment surrounding the monitored object, any two adjacent image sensors having overlapping fields of view, where the monitored object refers to the equipment whose surrounding environment needs to be monitored by this system, for example an automobile or a ship;
a scene decision unit, which generates a scene configuration parameter according to the movement information of the monitored object and/or the operator's operation intention information;
a calibration unit, used to calibrate the image sensors and generate their calibration parameters;
an image processing unit, respectively connected with the image sensors, the scene decision unit and the calibration unit, which receives the image information transmitted by the image sensors and processes it according to the scene configuration parameter transmitted by the scene decision unit and the calibration parameters transmitted by the calibration unit;
an output unit, connected with the image processing unit and used to output the image signal transmitted by the image processing unit.
In the mobile operation monitoring system of the utility model, a plurality of image sensors are installed on the monitored object, any two adjacent image sensors having overlapping fields of view, to guarantee that image information of the environment surrounding the monitored object is obtained comprehensively and without blind areas. The image processing unit processes the image information obtained by each image sensor according to the parameters transmitted by the scene decision unit and the calibration unit, to provide effective, intuitive and concrete information, which is output through the output unit.
Taking an automobile as the monitored object, the mobile operation monitoring system for mobile machinery of the utility model is described in detail below.
Referring to Fig. 1, the mobile operation monitoring system of the present embodiment comprises: a plurality of image sensors 1, a scene decision unit 3, a calibration unit 5, an image processing unit 2 and an output unit 4.
The plurality of image sensors 1 are installed on the outer surface of the car body, all the way around the body. The image sensors 1 are used to obtain image information of the environment around the vehicle, and any two adjacent image sensors 1 have overlapping fields of view.
The number of image sensors 1 should guarantee that the image information of the environment around the vehicle is obtained comprehensively and without blind areas.
When the image sensors 1 are installed, the field of view of an image sensor 1 may contain only the environment around the vehicle, or contain both the environment and part of the car body, i.e. part of the car body is captured, to guarantee that the positional relation between the car body and obstacles can be shown when the images are spliced.
The image sensors 1 may be color image sensors, black-and-white image sensors, infrared image sensors or combinations thereof, for example a combination of black-and-white and color sensors, of black-and-white and infrared sensors, of color and infrared sensors, or of black-and-white, color and infrared sensors.
The scene decision unit 3 is connected with the image processing unit 2. The scene decision unit 3 generates a scene configuration parameter according to the movement information of the automobile and/or the operator's operation intention information, and transmits the scene configuration parameter to the image processing unit 2.
The movement information and/or the operator's operation intention information include the speed signal, gear signal, steering trend signal, acceleration signal and turn-indicator signal of the automobile.
The scene configuration parameter includes the best scene viewpoint and the image scale.
If the automobile is equipped with a speedometer and a steering angle sensor, the scene decision unit 3 can choose the best scene viewpoint adaptively according to the movement information of the automobile transmitted by the speedometer and the steering angle sensor: the viewpoint direction is determined by the direction of the movement trend, and the viewpoint distance by the movement speed, the two together forming an adaptive viewing angle. Suppose the detected speed of the automobile at a certain moment is v; then the best scene viewpoint lies at a distance L in the direction of motion, with L = f(v·T_s), where T_s is the display time and f(x) is a function of the independent variable x which, according to the needs of the real system, may be a constant, a piecewise function or an analytic function. L may not exceed the maximum field of view of the image sensors 1 in the direction of motion, and the choice of L is also related to whether obstacles exist in the scene, as shown in Fig. 2.
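The adaptive-viewpoint rule above can be sketched in a few lines of Python. The choice f(x) = x and all function and parameter names are illustrative assumptions; the patent only fixes the form L = f(v·T_s) with L capped by the sensor field of view:

```python
def best_viewpoint_distance(speed, display_time, max_view_range):
    """Viewpoint distance L = f(v * Ts), clipped to the sensor view range.
    f is taken as the identity here; the patent allows a constant,
    piecewise, or analytic function."""
    L = speed * display_time           # f(v * Ts) with f(x) = x
    return min(L, max_view_range)      # L may not exceed the field of view

def viewpoint(position, heading_unit, speed, display_time, max_view_range):
    """Place the viewpoint ahead of the vehicle along its motion direction."""
    L = best_viewpoint_distance(speed, display_time, max_view_range)
    x, y = position
    hx, hy = heading_unit
    return (x + hx * L, y + hy * L)
```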
The scene decision unit 3 can also select among several fixed points as the best scene viewpoint, as shown in Fig. 3, where the best scene viewpoint can be one of the fixed points P0~P5. The scene decision unit 3 determines a suitable best scene viewpoint according to the direction of motion and the steering trend of the automobile: when the automobile has no steering trend and is moving forward, P0 is selected; when it has no steering trend and is reversing, P4 is selected; when it is moving forward with a rightward steering trend, P2 is selected; when it is moving forward with a leftward steering trend, P1 is selected; when it is reversing with a rightward steering trend, P5 is selected; when it is reversing with a leftward steering trend, P3 is selected.
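The fixed-viewpoint decision just described is essentially a lookup table; a minimal sketch, in which the string labels are hypothetical encodings of the states named in the text:

```python
# Fixed viewpoints P0..P5 as in Fig. 3, indexed by (direction, steering trend).
FIXED_VIEWPOINTS = {
    ("forward", "none"):  "P0",
    ("forward", "left"):  "P1",
    ("forward", "right"): "P2",
    ("reverse", "none"):  "P4",
    ("reverse", "left"):  "P3",
    ("reverse", "right"): "P5",
}

def select_fixed_viewpoint(direction, steer_trend):
    """Return the fixed best-scene viewpoint for the given vehicle state."""
    return FIXED_VIEWPOINTS[(direction, steer_trend)]
```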
If the automobile is not fitted with a speed sensor, the gear sensor can be used to determine the current running direction of the automobile: when the gear is in a forward gear, it can be judged that the vehicle is moving forward or that the operator intends to move forward; when the gear is in reverse, it can be judged that the vehicle is reversing, or that the current operator intends to reverse.
If the automobile is not fitted with a steering wheel angle sensor, the steering trend can be judged from the state of the turn indicators: when the left turn indicator is on, the automobile is judged to have a leftward steering trend; when the right turn indicator is on, the automobile is judged to have a rightward steering trend.
If the automobile is fitted with an acceleration sensor whose sensitive direction is perpendicular to the longitudinal axis of the car body and lies in the horizontal plane, the steering trend of the automobile can be judged from the output of the acceleration sensor. The acceleration signal is first filtered to remove noise; when the filtered signal shows acceleration to the left, the automobile can be judged to have a leftward steering trend; when it shows acceleration to the right, a rightward steering trend; and when the filtered signal is below a certain threshold, the automobile is judged to have no effective steering trend.
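The filter-and-threshold logic can be sketched as follows. The moving-average filter, the sign convention (positive = leftward), and the threshold value are illustrative assumptions; the patent specifies neither the filter nor the threshold:

```python
def moving_average(signal, window=5):
    """Simple low-pass filter to suppress noise in the lateral acceleration."""
    if len(signal) < window:
        return sum(signal) / len(signal)
    return sum(signal[-window:]) / window

def turn_trend_from_acceleration(lateral_accel_samples, threshold=0.3):
    """Classify the steering trend from filtered lateral acceleration.
    Positive filtered values are taken as leftward acceleration and negative
    as rightward (an assumed convention); magnitudes below `threshold`
    count as no effective steering trend."""
    a = moving_average(lateral_accel_samples)
    if a > threshold:
        return "left"
    if a < -threshold:
        return "right"
    return "none"
```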
If the automobile has several kinds of steering trend sensor, for example turn indicator sensors, a steering angle sensor and an acceleration sensor at the same time, the steering trend can be judged by combining the sensor information, or a priority order can be adopted. One possible priority is: the turn-indicator signal takes precedence over the steering wheel angle sensor information, and the steering wheel angle sensor information takes precedence over the acceleration sensor.
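The stated priority order amounts to a highest-priority-wins selection; a minimal sketch in which the source names and the reading format are hypothetical:

```python
# Priority given in the text: turn signal > steering-wheel angle > accelerometer.
SOURCE_PRIORITY = ["turn_signal", "steering_angle", "accelerometer"]

def fuse_turn_trend(readings):
    """Pick the steering trend from the highest-priority source reporting one.
    `readings` maps source name -> "left" / "right" / "none"; a source that
    is not fitted is simply absent from the mapping."""
    for source in SOURCE_PRIORITY:
        trend = readings.get(source, "none")
        if trend != "none":
            return trend
    return "none"
```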
If no information is available for the decision, the scene decision unit 3 selects the top viewpoint; PT in Fig. 3 is the default top viewpoint, located above the center of the automobile.
Besides adaptive scene switching, the scene decision unit 3 can also switch and select the scene manually according to the needs of the operator (in this embodiment, the driver).
Besides scene adjustment, the scene decision unit 3 can also generate scale control information. The scale may be a constant, meaning that in the fused image the scales of the fusion regions A, B, C and D are consistent, as shown in Fig. 4A.
When the field of view is large, since the size of the output unit 4 is limited, the scale may be chosen not as a constant but as a function of distance, so that regions farther from the automobile are compressed more heavily while regions near the automobile are compressed less, as shown in Fig. 4B~Fig. 4D; the thick arrows in Fig. 4B~Fig. 4D indicate the viewing direction and the thin arrows indicate the movement trend of the automobile.
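The distance-dependent scale can be sketched with one possible decay law. The patent only requires heavier compression farther from the vehicle, so the specific function and the parameter values below are illustrative assumptions:

```python
def display_scale(distance, near_range=5.0, near_scale=1.0, falloff=0.1):
    """Scale factor as a function of distance from the vehicle.
    Within `near_range` the scale is constant (little compression near the
    car); beyond it the scale decays, so far regions are compressed more.
    The hyperbolic decay and all parameter names are illustrative choices."""
    if distance <= near_range:
        return near_scale
    return near_scale / (1.0 + falloff * (distance - near_range))
```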
The calibration unit 5 is connected with the image processing unit 2 and transmits the calibration parameters of the image sensors 1 to the image processing unit 2.
The calibration parameters of an image sensor 1 include the lens optical parameters and the distortion parameters of the image sensor 1, as well as its attitude and position parameters. In the field of machine vision, the lens optical parameters and distortion parameters of an image sensor are known as intrinsic parameters, and its attitude and position parameters as extrinsic parameters.
If more than one image sensor 1 is installed on the automobile, each image sensor 1 must be calibrated.
After the image sensors 1 are installed, and before the system is used to monitor the movement of the automobile, each image sensor 1 is calibrated first, to obtain its lens optical parameters, its distortion parameters, and its attitude and position parameters.
Calibrating an image sensor 1 comprises the following steps:
Step 1.1: the image sensor 1 obtains image information of the calibration template from different orientations and attitudes.
The calibration template is a rectangular plate whose surface carries a planar pattern, a dot pattern or a line pattern. The pattern on the surface of the calibration template can be polygons, for example discrete squares, as shown in Fig. 5A and Fig. 5B; a pattern with straight-line or curve features, for example a checkerboard or a grid, as shown in Fig. 5C; or a pattern with corner-point features, for example dots, as shown in Fig. 5D.
The pattern on the surface of the calibration template and the size of the calibration template are set in advance.
Image information of the calibration template is obtained from at least 4 different orientations and attitudes.
Step 1.2: according to the obtained image information of the calibration template, a camera calibration method is adopted to calculate the lens optical parameters and the distortion parameters of the image sensor 1.
The camera calibration method can be a nonlinear-model or a linear-model camera calibration method. For example:
the nonlinear camera calibration method based on the radial alignment constraint (RAC); reference: Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pages 323-344;
Zhang Zhengyou's camera calibration algorithm based on a 2D target; reference: Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330-1334, 2000;
the calibration method for catadioptric and fisheye cameras; reference: Scaramuzza, D. (2008), "Omnidirectional Vision: from Calibration to Robot Motion Estimation", ETH Zurich, Thesis no. 17635.
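At the core of planar calibration methods such as Zhang's is the estimation of a homography between the planar template and its image. The numpy sketch below shows only the basic direct-linear-transform (DLT) step, without the point normalization and nonlinear refinement a real calibration pipeline would add:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping planar template points `src`
    to image points `dst` by the direct linear transform (DLT).
    src, dst: (N, 2) arrays with N >= 4 non-degenerate correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]              # fix the arbitrary scale

def apply_homography(H, pts):
    """Map (N, 2) points through H with perspective normalization."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```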
Step 2: calculate the attitude and position parameters of the image sensors 1.
Step 2.1: determine the calibration region.
One calibration region corresponds to one image sensor 1.
A reference stop position is set in the calibration region. The reference stop position is bounded by several reference stop lines whose shapes and coordinates are set in advance. During calibration the automobile is parked in this reference stop position, whose shape matches the outline of the automobile, to guarantee that the automobile can be parked accurately and easily.
Step 2.2: the automobile is placed in the reference stop position.
During calibration the automobile is parked in the reference stop position and keeps its position unchanged.
Step 2.3: the calibration template is placed in the calibration region, and the image sensor 1 obtains image information of the calibration template.
The calibration template is placed with a predefined attitude and position, for example horizontally, vertically, obliquely, or in a combination of these attitudes.
Step 2.4: according to the obtained image information of the calibration template, the camera calibration method is adopted to calculate the attitude and position parameters of the image sensor 1, i.e. the attitude rotation and the displacement of each image sensor 1 with respect to the calibration template.
Since the position and attitude of the calibration template in the calibration region are known, and the dimensions of the automobile can be measured in advance, the position and attitude of each image sensor 1 can be calculated, yielding the attitude parameter and the displacement parameter of each image sensor 1.
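Since the template's pose in the calibration region is known and calibration yields each sensor's pose relative to the template, the sensor's pose in the vehicle or world frame follows by composing homogeneous transforms. A minimal numpy sketch under assumed frame conventions (template-to-world and template-to-camera transforms):

```python
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_pose_in_world(T_world_template, T_camera_template):
    """Compose the camera pose in the world (vehicle) frame.
    T_world_template: pose of the template in world coordinates (known,
    because the template placement is predefined).
    T_camera_template: pose of the template in camera coordinates
    (computed by the calibration method)."""
    return T_world_template @ np.linalg.inv(T_camera_template)
```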
If no reference stop line is set in the calibration region, the calibration template is placed in the overlapping region of two adjacent image sensors 1, so that both image sensors 1 can photograph the pattern of the calibration template at the same time. As before, the attitude and position of the calibration template are predefined. During calibration the image sensors 1 are located pairwise, and the attitude parameter and displacement parameter of each image sensor can then be calculated in turn.
Fig. 6 shows an example of a first calibration region. The reference stop lines 61 bound the reference stop position 62 (in Fig. 6 the intersection point 62 of two reference stop lines is called the reference point); the reference stop position lies in the calibration region 6, and the automobile is parked in it. One calibration template A1, A2, A3, A4 is placed on each of the front, rear, left and right sides of the automobile; the templates A1, A2, A3, A4 are placed horizontally, and during calibration the image sensors 1 photograph the horizontally placed templates.
Fig. 7 shows a second example of a calibration region. The automobile is parked in the reference stop position, and one calibration template A1, A2, A3, A4 is placed on each of the front, rear, left and right sides of the automobile; the templates A1, A2, A3, A4 are placed vertically, and during calibration the image sensors 1 photograph the vertically placed templates.
In fact, as required, the calibration templates can be placed vertically, horizontally, or in combinations with specific attitudes.
Continuing with Fig. 1, the image processing unit 2 comprises:
an image acquisition unit 21, connected with the image sensors 1, which collects the image information of the vehicle's surrounding environment obtained by the image sensors 1;
an image distortion correction unit 22, connected with the image acquisition unit 21 and the calibration unit 5, which corrects distortion in the image information transmitted by the image sensors 1 according to the lens optical parameters and distortion parameters of the image sensors 1, as well as their attitude and position parameters, transmitted by the calibration unit 5;
Described distortion is because the lens distortion and the 1 undesirable generation of described imageing sensor of described imageing sensor 1, described distortion comprises tangential distortion, radial distortion, thin prism distortion, decentering distortion, and described distortion is described by the lens optical parameter (having comprised the undesirable distortion parameter that causes of optical system) of described imageing sensor 1 and the distortion parameter of described imageing sensor;
The method of described picture distortion correcting unit 22 employing interpolation or convolution is corrected the next image information of described imageing sensor 1 transmission and is carried out distortion correction;
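As an illustrative sketch of distortion correction by interpolation (a simplification, not the patent's actual implementation: a single radial term k1 and bilinear interpolation are assumed), each corrected pixel is inverse-mapped into the distorted image and resampled:

```python
def undistort(img, k1, cx, cy):
    """Correct simple radial distortion x_d = x_u * (1 + k1 * r^2) by
    inverse mapping with bilinear interpolation.
    img is a list of rows of grey values; (cx, cy) is the optical center."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for v in range(h):
        for u in range(w):
            x, y = u - cx, v - cy
            r2 = x * x + y * y
            # position in the distorted source image
            xd, yd = cx + x * (1 + k1 * r2), cy + y * (1 + k1 * r2)
            i, j = int(yd), int(xd)
            if 0 <= i < h - 1 and 0 <= j < w - 1:
                fy, fx = yd - i, xd - j
                out[v][u] = ((1 - fy) * (1 - fx) * img[i][j]
                             + (1 - fy) * fx * img[i][j + 1]
                             + fy * (1 - fx) * img[i + 1][j]
                             + fy * fx * img[i + 1][j + 1])
    return out
```

With k1 = 0 the mapping is the identity on interior pixels, which makes the resampling step easy to verify.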
An image anti-shake and image enhancing unit 23, connected to the image distortion correcting unit 22, which processes the image information transmitted by the image distortion correcting unit 22 with electronic anti-shake and image enhancement algorithms, so as to improve the quality of the synthesized image;
A view transformation unit 24, which performs view transformation and image scaling on the image information transmitted by the image anti-shake and image enhancing unit 23, according to the best scene viewpoint and image scale transmitted by the scene decision unit 3 and the attitude and location parameters of the image sensors 1 transmitted by the calibration unit 5, so as to form image information of a given viewing angle;
Taking the linear camera model (pinhole imaging model) as an example, the view transformation can be written, up to a homogeneous scale factor, as:

    [u' v' 1]^T ∝ K R'_2 (K R'_1)^(-1) [u v 1]^T

with the intrinsic matrix

    K = [ α_x   γ   u_0 ]
        [  0   α_y  v_0 ]
        [  0    0    1  ]

In the formula: [u v]^T is a picture element before the view transformation, i.e. in the image at the original viewpoint, and [u' v']^T is the corresponding picture element after the view transformation;
R'_1 is the original rotation homography matrix, formed from the rotation matrix and translation vector of the image sensor position as R'_1 = [r_1, r_2, t'], where r_1 and r_2 are the first and second column vectors of the rotation matrix and t' is the translation vector;
R'_2 is the new-viewpoint rotation homography matrix, likewise formed from the rotation matrix and translation vector of the new viewpoint position;
α_x, α_y, u_0, v_0 and γ are the internal parameters of the linear model:
α_x and α_y are the scale factors of the u and v axes (also called the effective focal lengths), α_x = f/dx and α_y = f/dy, where dx and dy are the pixel pitches in the horizontal and vertical directions respectively;
(u_0, v_0) is the optical center;
γ is the non-orthogonality factor of the u and v axes; in many cases γ = 0;
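As an outside-the-patent sketch of this view transformation: build the intrinsic matrix K, compose the homography H = K·R'_2·(K·R'_1)^(-1) (the 3x3 inverse is omitted here for brevity), and map pixels through it with homogeneous normalization. All names are illustrative:

```python
def intrinsics(f, dx, dy, u0, v0, gamma=0.0):
    """Pinhole intrinsic matrix K with scale factors a_x = f/dx, a_y = f/dy,
    optical center (u0, v0) and skew gamma (often 0)."""
    return [[f / dx, gamma, u0],
            [0.0, f / dy, v0],
            [0.0, 0.0, 1.0]]

def matmul(a, b):
    """3x3 matrix product, used to compose K with a rotation homography."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def warp_point(h_mat, u, v):
    """Map pixel (u, v) through homography H with homogeneous normalization."""
    x = h_mat[0][0] * u + h_mat[0][1] * v + h_mat[0][2]
    y = h_mat[1][0] * u + h_mat[1][1] * v + h_mat[1][2]
    w = h_mat[2][0] * u + h_mat[2][1] * v + h_mat[2][2]
    return x / w, y / w
```

Note that scaling H by any nonzero constant leaves the warped pixel unchanged, which is exactly the "up to a scale factor" in the formula above.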
The fusion method adopted by the image fusion unit 25 may be direct splicing of the images from two image sensors 1 at a registration position;
The registration position, i.e. the splicing-line position, is determined from the fields of view and the definition ranges of the two image sensors 1, so that on both sides of the splicing line the image definition from the two image sensors 1 is consistent; the splicing line may be an arc or a straight line. During image fusion, direct splicing may be selected: the original images on the two sides of the splicing line are brought together to form the fused image. The splicing line may be displayed in a special color or gray level to help the supervisor determine its position, or not displayed at all so as not to break the integrity of the image;
Alternatively, the fusion method adopted by the image fusion unit 25 may weight and average, at each fusion point, the images from the two adjacent regions; in this way there is no splicing line in the splice region and the images correspond better;
To make the fusion position more accurate, the images of the fusion region need to be registered. The registration method is: search for image features near the splicing region, such as corner points, contours and edges; use these features to match the images of the two image sensors and find the best matching line; then fuse the images along this best matching line, so as to avoid ghosting and similar artifacts. During fusion a variable-weight scheme may be used: within the splice region, the farther a point of one image is from its image sensor, the smaller the weight of that image in the fused result, while the weight of the image from the other image sensor at the same position grows correspondingly;
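A minimal sketch of the variable-weight fusion just described (names are hypothetical; one scanline and a linear weight ramp across the overlap are assumed):

```python
def blend_row(row_a, row_b, overlap_start, overlap_end):
    """Fuse one scanline from two adjacent sensors: outside the overlap
    take the nearer sensor's pixels; inside it, ramp the weight of image A
    linearly from 1 down to 0 so no visible seam remains."""
    out = []
    for u in range(len(row_a)):
        if u < overlap_start:
            out.append(row_a[u])                      # sensor A only
        elif u >= overlap_end:
            out.append(row_b[u])                      # sensor B only
        else:
            w = (overlap_end - u) / (overlap_end - overlap_start)
            out.append(w * row_a[u] + (1 - w) * row_b[u])
    return out
```

Applying this to every scanline of the overlap region yields a splice region with no visible seam, at the cost of slight softening where the two images disagree.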
In the scene image produced by the image fusion unit 25 after fusion, a top plan view of the automobile, a top-view three-dimensional image, or a three-dimensional perspective image consistent with the viewing angle output by the scene decision unit 3 needs to be added; this added image has a certain transparency so as to reveal the peripheral image information occluded by the equipment, the transparency of the monitored object can be set manually, and attributes such as the color of the monitored object image can also be set;
Besides fusing the image information of the image sensors 1, the image fusion unit 25 can also add auxiliary information such as obstacle distance: signals from external sensors, for example range information and obstacle information, can be merged into the final panoramic scene image;
The output unit 4 is connected to the image fusion unit 25 and outputs the image information transmitted by the image fusion unit 25.
Referring to Fig. 8, the image anti-shake and image enhancing unit 23 comprises an image anti-shake module 231 and an image enhancement module 232;
The image anti-shake module 231 is connected to the image distortion correcting unit 22 and uses an electronic anti-shake algorithm to eliminate inter-frame image blur;
Because the automobile inevitably vibrates, the image sensors 1 are constantly shaken, so the images captured by an image sensor 1 may shift position between frames. Without image anti-shake processing this would cause image blur and block errors; the image anti-shake module 231 detects and eliminates the shake, improving the quality of image fusion and splicing;
The image enhancement module 232 is connected to the image anti-shake module 231 and uses image enhancement algorithms to improve image quality.
Referring to Fig. 9, the image anti-shake module 231 comprises a motion detection module 2311, a motion estimation module 2312 and a motion compensation module 2313;
The motion detection module 2311 is connected to the image distortion correcting unit 22 and uses a motion detection algorithm to obtain the positional movement of the image between frames;
Commonly used motion detection algorithms include the projection algorithm (PA: Projection Algorithm), representative point matching (RPM: Representative Point Matching) and bit-plane matching (BPM: Bit Plane Matching); with these methods the translation and rotation, or the zoom level, of the image between frames can be estimated;
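A toy sketch of the projection algorithm (PA) named above, under assumed simplifications (horizontal translation only, detected by matching the frames' column-projection signatures; all names are illustrative):

```python
def column_projection(img):
    """Sum each column of the frame into a 1-D signature (the PA idea:
    compare cheap projections instead of full 2-D blocks)."""
    return [sum(row[j] for row in img) for j in range(len(img[0]))]

def estimate_shift(prev, curr, max_shift):
    """Return the horizontal shift s for which prev's column projection
    best matches curr's projection displaced by s, scored by mean
    absolute difference over the overlapping part."""
    pa, pb = column_projection(prev), column_projection(curr)
    n = len(pa)
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(pa[j], pb[j + s]) for j in range(n) if 0 <= j + s < n]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best_err, best_s = err, s
    return best_s
```

The vertical shift is estimated the same way from row projections; rotation and zoom require the richer matching methods (RPM, BPM) also listed above.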
The motion estimation module 2312 is connected to the motion detection module 2311 and performs effective motion estimation;
The motion estimation module 2312 estimates the motion parameters between adjacent frames, filters out the influence of random motion to obtain an effective global motion vector, derives the motion tendency of the frame sequence, and obtains the actual image shift, rotation and scale values;
The motion compensation module 2313 is connected to the motion estimation module 2312 and compensates the original image for the shift, rotation and scaling computed by the motion estimation module 2312, yielding an image from which random jitter has been eliminated;
When the vibration of the image sensors 1 is negligible, the image anti-shake module 231 can be omitted.
Continuing with reference to Fig. 9, the image enhancement module 232 comprises an image brightness adjusting module 2321, an image contrast enhancement module 2322 and an image outlining module 2323;
The image brightness adjusting module 2321 is connected to the motion compensation module 2313 and corrects the image brightness differences caused by inconsistent photometric characteristics of the lenses of the image sensors 1; such differences would otherwise leave regions of obviously inconsistent brightness in the fused image;
The brightness adjustment coefficients can be obtained from brightness measurements of the calibration scene when the system is first used, or computed in real time while the system runs; after brightness adjustment, regions of differing brightness no longer appear when the images are spliced;
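A minimal sketch of computing such a brightness adjustment coefficient from the overlap region shared by two sensors (hypothetical names; a single multiplicative gain per camera is an assumed simplification):

```python
def brightness_gain(overlap_ref, overlap_other):
    """Gain that maps the other sensor's mean brightness in the shared
    overlap region onto the reference sensor's mean, so the spliced image
    shows no brightness step. Inputs are flat lists of grey values."""
    mean = lambda px: sum(px) / len(px)
    return mean(overlap_ref) / mean(overlap_other)

def apply_gain(img, gain):
    """Scale every pixel of the other sensor's image by the gain."""
    return [[p * gain for p in row] for row in img]
```

Computed once against the calibration scene this gives fixed coefficients; recomputed per frame from the live overlap regions it gives the real-time variant mentioned above.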
The image contrast enhancement module 2322 is connected to the image brightness adjusting module 2321 and enhances the contrast of the obtained image;
Rain, fog or poor lighting can cause the image sensors 1 to capture blurred, degraded images; the image contrast enhancement module 2322 handles the image degradation caused by such conditions, so as to improve the visual effect of the image;
Common image contrast enhancement algorithms include histogram equalization (HE), adaptive (local) histogram equalization (AHE), partially overlapped sub-block histogram equalization (POSHE), interpolated adaptive histogram equalization and generalized histogram equalization. All of these can adaptively enhance the contrast of local image information, increasing image contrast, improving the naked-eye recognizability of degraded images and improving the visual effect of the image;
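An illustrative sketch of plain global histogram equalization (HE), the simplest of the algorithms listed (8-bit grey levels and integer pixel values are assumed):

```python
def equalize(img, levels=256):
    """Global histogram equalization: remap grey levels through the
    normalized cumulative histogram so the output uses the full range."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied level
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

The adaptive variants (AHE, POSHE) apply the same remapping per block or per neighborhood instead of globally, trading speed for local contrast.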
The image outlining module 2323 is connected to the image contrast enhancement module 2322 and outlines image edges and highlights and delineates obstacles, so that the image is more recognizable and the observer's attention is effectively guided;
During monitoring, the operator sometimes has to watch the system out of the corner of the eye, so appropriate methods are needed to make the image stand out. Any of the many image edge operators can extract the edge features of the image, and feature detection can be performed on particular patterns so that special objects are detected from the features of obstacles. Once an obstacle is detected, its edges are outlined according to their position, or the obstacle is marked in the image with a special shape (circle, rectangle, etc.). Whether the outlining function is used, and the color and thickness of the outline, can be controlled by the operating personnel; special regions can also be delineated according to external sensor data such as obstacle distance;
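A sketch of edge extraction with one common edge operator, the 3x3 Sobel kernels (an assumed choice; the patent does not mandate a specific operator). Pixels whose response exceeds a threshold could then be recolored to outline an obstacle:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |gx| + |gy| with 3x3 Sobel kernels;
    strong responses mark edges suitable for outlining. Border pixels
    are left at 0. img is a list of rows of grey values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```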
The image enhancement module 232 can omit some of its sub-modules as required; for example, when edge outlining of the image is not needed, the image outlining module 2323 can be omitted, and similarly the brightness adjusting module 2321 and the image contrast enhancement module 2322 can be selectively omitted.
With the mobile operation monitoring system for a mobile machine provided by the utility model, adaptive decisions about the environment around the monitored object let the operating personnel see as much intuitive, effective information as possible, effectively improving safety; across the various fields of mobile machinery it likewise provides the operator with as much effective information as possible and can raise productivity.
The above shows and describes the basic principle, principal features and advantages of the utility model. Those skilled in the art should understand that the utility model is not restricted to the described embodiments; the embodiments and the specification merely illustrate the principle of the utility model. Without departing from the spirit and scope of the utility model, various changes and improvements are possible, and all such changes and improvements fall within the claimed scope of the utility model.
Claims (5)
1. A mobile operation monitoring system for a mobile machine, characterized by comprising:
image sensors, installed on a monitored object and used to obtain image information of the surroundings of the monitored object, any two adjacent image sensors having overlapping fields of view, wherein the monitored object is the equipment whose surroundings the present system monitors;
a scene decision unit, which generates scene configuration parameters according to movement information of the monitored object and/or operation intent information of an operator;
a calibration unit, used to calibrate the image sensors and generate the calibration parameters of the image sensors;
an image processing unit, connected to the image sensors, the scene decision unit and the calibration unit respectively, which receives the image information transmitted by the image sensors and processes it according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters of the image sensors transmitted by the calibration unit;
an output unit, connected to the image processing unit and used to output the image signal transmitted by the image processing unit.
2. The mobile operation monitoring system for a mobile machine according to claim 1, characterized in that the image processing unit comprises:
an image acquisition unit, connected to the image sensors, which acquires the image information of the surroundings of the monitored object obtained by the image sensors;
an image distortion correcting unit, connected to the image acquisition unit and the calibration unit, which corrects the distortion of the image information transmitted by the image sensors according to the calibration parameters of the image sensors transmitted by the calibration unit;
an image anti-shake and image enhancing unit, connected to the image distortion correcting unit, which processes the image information transmitted by the image distortion correcting unit with electronic anti-shake and image enhancement algorithms, so as to improve the quality of the synthesized image;
a view transformation unit, connected to the image anti-shake and image enhancing unit, the scene decision unit and the calibration unit respectively, which processes the image information transmitted by the image anti-shake and image enhancing unit according to the scene configuration parameters transmitted by the scene decision unit and the calibration parameters of the image sensors transmitted by the calibration unit, so as to form image information of a given viewing angle;
an image fusion unit, connected to the view transformation unit, which fuses the images obtained by the image sensors with the image of the automobile to form a composite image of the automobile and its surroundings.
3. The mobile operation monitoring system for a mobile machine according to claim 2, characterized in that the image anti-shake and image enhancing unit comprises an image anti-shake module and an image enhancement module;
the image anti-shake module is connected to the image distortion correcting unit and uses an electronic anti-shake algorithm to eliminate inter-frame image blur;
the image enhancement module is connected to the image anti-shake module and uses image enhancement algorithms to improve image quality.
4. The mobile operation monitoring system for a mobile machine according to claim 3, characterized in that the image anti-shake module comprises a motion detection module, a motion estimation module and a motion compensation module;
the motion detection module is connected to the image distortion correcting unit and uses a motion detection algorithm to obtain the positional movement of the image between frames;
the motion estimation module is connected to the motion detection module and performs effective motion estimation;
the motion compensation module is connected to the motion estimation module and compensates the original image according to the computation result of the motion estimation module, yielding an image from which random jitter has been eliminated.
5. The mobile operation monitoring system for a mobile machine according to claim 3, characterized in that the image enhancement module comprises an image brightness adjusting module, an image contrast enhancement module and an image outlining module;
the image brightness adjusting module is connected to the motion compensation module and corrects the image brightness differences caused by inconsistent photometric characteristics of the lenses of the image sensors;
the image contrast enhancement module is connected to the image brightness adjusting module and enhances the contrast of the obtained image;
the image outlining module is connected to the image contrast enhancement module and outlines image edges and highlights and delineates obstacles, so that the image is more recognizable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011200394965U CN202035096U (en) | 2010-02-12 | 2011-02-11 | Mobile operation monitoring system for mobile machine |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201020114851.6 | 2010-02-12 | ||
CN201020114851.6 | 2010-02-12 | ||
CN2011200394965U CN202035096U (en) | 2010-02-12 | 2011-02-11 | Mobile operation monitoring system for mobile machine |
Publications (1)
Publication Number | Publication Date |
---|---|
CN202035096U true CN202035096U (en) | 2011-11-09 |
Family
ID=44603532
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011200394965U Expired - Fee Related CN202035096U (en) | 2010-02-12 | 2011-02-11 | Mobile operation monitoring system for mobile machine |
CN2011100387557A Pending CN102196242A (en) | 2010-02-12 | 2011-02-11 | Self-adaptive scene image auxiliary system with image enhancing function |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100387557A Pending CN102196242A (en) | 2010-02-12 | 2011-02-11 | Self-adaptive scene image auxiliary system with image enhancing function |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN202035096U (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105459904A (en) * | 2015-11-30 | 2016-04-06 | 深圳市灵动飞扬科技有限公司 | Display method and system for vehicle turning |
CN109084509A (en) * | 2018-06-29 | 2018-12-25 | 昆明金域医学检验所有限公司 | A kind of medical test stained specimens preservation refrigerator-freezer |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2527784A1 (en) * | 2011-05-19 | 2012-11-28 | Hexagon Technology Center GmbH | Optical measurement method and system for determining 3D coordinates of a measured object surface |
CN104065926A (en) * | 2014-06-25 | 2014-09-24 | 中国移动通信集团广东有限公司 | Image enhancement method and system based on wireless high-definition video monitor system |
CN105216715A (en) * | 2015-10-13 | 2016-01-06 | 湖南七迪视觉科技有限公司 | A kind of motorist vision assists enhancing system |
CN105894511B (en) * | 2016-03-31 | 2019-02-12 | 恒大法拉第未来智能汽车(广东)有限公司 | Demarcate target setting method, device and parking assistance system |
CN105721793B (en) * | 2016-05-05 | 2019-03-12 | 深圳市歌美迪电子技术发展有限公司 | A kind of driving distance bearing calibration and device |
CN107115679A (en) * | 2017-05-29 | 2017-09-01 | 深圳市七布创新科技有限公司 | A kind of autonomous system and its control method |
CN109190527A (en) * | 2018-08-20 | 2019-01-11 | 合肥智圣新创信息技术有限公司 | A kind of garden personnel track portrait system monitored based on block chain and screen |
CN109767473B (en) * | 2018-12-30 | 2022-10-28 | 惠州华阳通用电子有限公司 | Panoramic parking device calibration method and device |
CN109685746B (en) * | 2019-01-04 | 2021-03-05 | Oppo广东移动通信有限公司 | Image brightness adjusting method and device, storage medium and terminal |
CN111435972B (en) * | 2019-01-15 | 2021-03-23 | 杭州海康威视数字技术股份有限公司 | Image processing method and device |
CN110519529B (en) * | 2019-05-14 | 2021-08-06 | 南开大学 | Optical splicing-based same-viewpoint all-round-looking imaging system and imaging method |
CN111008985B (en) * | 2019-11-07 | 2021-08-17 | 贝壳找房(北京)科技有限公司 | Panorama picture seam detection method and device, readable storage medium and electronic equipment |
CN113506362B (en) * | 2021-06-02 | 2024-03-19 | 湖南大学 | Method for synthesizing new view of single-view transparent object based on coding and decoding network |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105459904A (en) * | 2015-11-30 | 2016-04-06 | 深圳市灵动飞扬科技有限公司 | Display method and system for vehicle turning |
CN105459904B (en) * | 2015-11-30 | 2017-12-22 | 深圳市灵动飞扬科技有限公司 | Turn inside diameter display methods and system |
CN109084509A (en) * | 2018-06-29 | 2018-12-25 | 昆明金域医学检验所有限公司 | A kind of medical test stained specimens preservation refrigerator-freezer |
Also Published As
Publication number | Publication date |
---|---|
CN102196242A (en) | 2011-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN202035096U (en) | Mobile operation monitoring system for mobile machine | |
CN102158684A (en) | Self-adapting scene image auxiliary system with image enhancement function | |
CN102163331A (en) | Image-assisting system using calibration method | |
JP7245295B2 (en) | METHOD AND DEVICE FOR DISPLAYING SURROUNDING SCENE OF VEHICLE-TOUCHED VEHICLE COMBINATION | |
JP4695167B2 (en) | Method and apparatus for correcting distortion and enhancing an image in a vehicle rear view system | |
US9738223B2 (en) | Dynamic guideline overlay with image cropping | |
CN103600707B (en) | A kind of parking position detection device and method of Intelligent parking system | |
CN104442567B (en) | Object Highlighting And Sensing In Vehicle Image Display Systems | |
US8199975B2 (en) | System and method for side vision detection of obstacles for vehicles | |
CN101763640B (en) | Online calibration processing method for vehicle-mounted multi-view camera viewing system | |
US20170140542A1 (en) | Vehicular image processing apparatus and vehicular image processing system | |
US20150109444A1 (en) | Vision-based object sensing and highlighting in vehicle image display systems | |
US20110169957A1 (en) | Vehicle Image Processing Method | |
CN108269235A (en) | A kind of vehicle-mounted based on OPENGL looks around various visual angles panorama generation method | |
JP2018531530A5 (en) | ||
US20140114534A1 (en) | Dynamic rearview mirror display features | |
US20140267415A1 (en) | Road marking illuminattion system and method | |
US20100245573A1 (en) | Image processing method and image processing apparatus | |
CN103728727A (en) | Information display system capable of automatically adjusting visual range and display method of information display system | |
CN105678787A (en) | Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera | |
US8477191B2 (en) | On-vehicle image pickup apparatus | |
CN103802725A (en) | New method for generating vehicle-mounted driving assisting image | |
CN107229906A (en) | A kind of automobile overtaking's method for early warning based on units of variance model algorithm | |
CN107027329A (en) | The topography of the surrounding environment of traveling instrument is spliced into an image | |
CN109345591B (en) | Vehicle posture detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20111109 Termination date: 20150211 |
|
EXPY | Termination of patent right or utility model |